Artificial "Intelligence"
"A computer can never be held accountable
"Therefore, a computer must never make a management decision."
I. Interview
I know a guy who landed an internship at some startup after pasting the entire job description into ChatGPT, copying the model's tailored answers, and tweaking minor wordings. For the technical interview, he had AI tools pulled up, ready to answer any question he was asked. Two weeks later, the recruiter praised his "refreshingly precise communication" and "succinct yet technical answers." The deception was small and frictionless. To me, that made it feel bigger than any headline about deepfakes. This was ordinary dishonesty wearing a mask, and it arrived without much fanfare. It seemed almost… normal.
II. Ghostwriters
People have always cheated. Whether on tests or on each other, cheating is probably as old as humanity itself. We all know someone who never studied, yet somehow managed a 100 on every test. From getting answers from a friend to writing notes on your hand, there are a million ways people have cheated. Language models simply automate the art. "I did not lie," applicants tell themselves, "the software generated those lines." The act becomes ownerless, floating between human and machine.
When a student asks ChatGPT for a college essay that quotes both Derrida and Drake, the result arrives fully formed, complete with citations that look authentic even when half are imaginary. The student can claim detachment: the system hallucinated, not me. They may ask, "Who is it really hurting?" They will have access to the ghostwriter on the job, just like we all have a calculator in our pockets.
III. Hallucination
The technical term for when an AI invents facts is "hallucination." The real danger is not misinformation in isolation; it is the manufactured confidence wrapped around each claim. I asked a model who invented the paper clip, and it stated "Johan Vaaler, 1899" with serene certainty. Unless the reader already knows better or pauses to verify, the answer sticks like gum. I don't think the average cheater really cares to verify these things, do you?
Scale that dynamic. A grant proposal drafted overnight by Claude Sonnet or ChatGPT o3 looks immaculate, with footnotes gleaming. The review board, exhausted, approves without granular checks. Months later, auditors discover that three references never existed. Whose fault is that? The researcher who trusted the tool, the model that spun fiction, or the committee that privileges polish over patience?
IV. Incentives
AI cheating is a symptom, not a rupture in human virtue. Institutions reward fluency, speed, and bold claims more than they reward slow accuracy. Large language models embody those incentives perfectly: lightning quick, unfailingly confident, beautifully formatted. Students and job seekers adopt them because the hidden syllabus tells them to value those traits above all. I coach teenagers for competitive programming contests. Last year, one student turned in a solution packed with some obscure function far beyond his usual level; hell, I barely understood it! I confronted him about it, and he just shrugged. "ChatGPT found it and it passes the tests. Isn't using the best tools part of programming?" In his eyes, contests, and programming as a whole, measure resourcefulness, not originality or thought. To a student motivated only by the end result of a "win," employing the model feels like a smart strategy, not some betrayal of values.
V. The Line
Complete prohibition is impossible; banning language models would be like confiscating calculators in calculus class. We've opened Pandora's box; we have to deal with the consequences. Unchecked permissiveness is equally horrible because every assignment, report, or design mock-up becomes Schrödinger's work product: genuine or generated, impossible to tell. You begin to ask, "What's the point?"
One common solution is disclosure. A note saying "assisted by GPT-4" could become as routine as "edited with Lightroom." Proper attribution removes the illusion that work springs fully formed from a lone genius and lets evaluators adjust their expectations. But still, just as a watermark can be cropped out of an image, a disclosure note can be stripped from text. Over time, institutions must value demonstrations that resist automation: live collaborative problem solving, oral exams, and real-time whiteboard walk-throughs. When proficiency has to be proved in real time, the temptation to outsource fades.
VI. Reputation
Cheaters eventually face a quieter cost than lawsuits or expulsions, namely the slow erosion of credibility. An executive who submits quarterly reviews written by a bot may dazzle a team for a year, but the moment a hallucinated data point slips through, every prior paragraph is suspect. Once you are perceived as a cheater, that perception is hard to shed. In competitive fields, trust accrues atom by atom and shatters all at once.
For individuals, cultivating a clear signal of personal voice is the best defense. Readers can spot the difference between glossy template prose and a sentence that reveals genuine thought. That difference may be subtle, but it is sticky, and it cannot be faked by current models because it lives in the writer's small risks: an unexpected metaphor, a deliberate pause, a question left open. I think the biggest difference shows in those risks; AI rarely makes a point of its own, parroting the consensus along with the consensus objections. To argue something new is proof of genuineness, regardless of how silly the claim itself may be.
VII. Accountability
Language models imitate sincerity, but they cannot feel the weight of a promise or the flush of being caught. Humans can. That fragile capacity for shame and wonder is the last advantage we possess. Keep the model for busywork, sure: summarizing long texts or writing repetitive code. But think for yourself. The point is not to silence synthetic text; it is to ensure that when words shape decisions, they carry a line we can trace and an accountability we can shoulder.