Tech
Ars Fires Reporter For Accidentally Using Fake AI Quotes
from the I’m-sorry-I-can’t-do-that,-Dave dept
Last month we reported on a strange story in two strange parts: first, a coder had his AI agent create an entire smear campaign against a coding repository volunteer after the volunteer rejected AI-generated code. Second, an Ars Technica journalist named Benj Edwards used a bunch of quotes fabricated by ChatGPT in a story about the saga, without fact-checking whether they were real.
Edwards was very up front about the screw-up, explaining it and taking direct ownership, and noting he was sick with the flu at the time he wrote the story. Ars was also refreshingly candid, issuing an editor’s note apologizing for the error.
Edwards says he first tried to use Claude to scrape some quotes from the engineer’s website, but that was blocked by site code. He then turned to ChatGPT to farm quotes from the site, but ChatGPT decided to just make up a whole bunch of stuff the engineer never said (this is a pretty common issue).
Just cutting and pasting quotes probably would have saved the journalist a lot of time and headaches. And his job, apparently, since Ars has since decided to fire Edwards, something Ars doesn’t seem interested in talking about:
“As of February 28, Edwards’ bio on Ars was changed to past tense, according to an archived version of the webpage. It now reads that Edwards “was a reporter at Ars, where he covered artificial intelligence and technology history.”
“Futurism reached out to Ars, Condé Nast, and Edwards to inquire about the reporter’s employment status. Neither the publication nor its owner replied. Edwards said he was unable to comment at this time.”
There are several interesting layers here. The biggest is that AI isn’t an excuse to simply turn your brain off and stop doing rudimentary fact-checking.
At the same time, this can’t really be unwound from the fact that media ownership rushed to tightly integrate often under-cooked LLMs into an already very broken journalism industry, with the obvious and primary goal of cutting corners and undermining already-struggling labor.
The pressure at most outlets for journalists to generate an endless parade of content without adequate compensation or time off creates an increased likelihood of error. The overloading (or elimination) of editors (with or without AI replacement) compounds those errors. That the end product isn’t living up to anybody’s standards for ethical journalism really shouldn’t surprise anybody.
Filed Under: ai, ai agent, automation, benj edwards, code, fabrications, journalism, llm, media
Companies: ars technica, conde nast