The Handbasket

Refusing to accept an AI-poisoned future of journalism

There is no pride in relying on a machine to do deeply human work.

In a November conversation at the Urban Consulate in Detroit, the great writer and thinker Tressie McMillan Cottom was asked by host Orlando P. Bailey, “Do you have a daring idea for us to ponder and sit with for our collective future?” McMillan Cottom replied with this: “When people try to sell you on the idea that the future is already settled, it’s because it is deeply unsettled. I think that this promise of an artificial intelligent future is really just a collective anxiety that very wealthy, powerful people have about how well they’re gonna be able to control us in the future. If they can get us to accept that the future is already settled—AI is already here, the end is already here—then we will create that for them. My most daring idea is to refuse.” 

Today, I refuse.

As a rule, I’ve tried not to concern myself with AI. As the companies behind these products continue promising us that AI has something for everyone, I still haven’t seen a practical application that makes sense for me. And I’ve been hesitant to weigh in on the use of AI in the past because it typically concerned fields beyond my expertise, ones in which I do not work. But now that multiple stories from major outlets in recent days have not just proclaimed the inevitability of AI in journalism but trumpeted how working journalists are actively including AI-produced work in their finished product, it’s become my problem. It’s exposed a gulf between those who want to have their words remembered and those who just want people to remember that they wrote. Tech companies and their billionaire funders, who’ve sunk too much into their products to admit defeat, are pushing the idea that the infiltration of AI into journalism is inevitable. We must stem that idea, because from my perch as an independent journalist, it simply is not.

The Wall Street Journal recently sat down with Nick Lichtenberg, a reporter at Fortune, whose website is a shell of the once-renowned magazine founded nearly a century ago. According to the story, AI-assisted stories accounted for an astonishing share of Fortune’s web traffic in the latter half of 2025: nearly 20%. Lichtenberg proudly attached his name to most of them with the full backing of his bosses, and the story notes he’s published more than 600 stories since rejoining the publication in July.

“A story by Lichtenberg sometimes starts with a prompt entered into Perplexity or Google’s NotebookLM, asking it to write something based on a headline he comes up with,” the WSJ story explains, referring to two different AI software programs. “He moves the AI tools’ initial drafts into a content-management system and edits the stories before publishing them for Fortune’s readers.”

Even more alarming than this admission of process is the admission of how his use of AI isn’t always disclosed. “Initially, Lichtenberg would share bylines with Fortune Intelligence,” the story says. “Now, he typically takes sole bylines because he feels the work is mostly his own. [Editor in Chief and Chief Content Officer Alyson] Shontell said of Lichtenberg’s stories, ‘more than 50% is Nick.’ His stories sometimes include a disclosure explaining that generative AI was used as a research tool.”

It’s a value drilled into the brains of young journalists since time immemorial that 100% of stories published under your name should be a product of your own work. If other people worked on the story, a co-byline. Sometimes a story will note “additional reporting” or “research by” at the bottom as a necessary and deserved nod to the material support. Some outlets include the editor’s name, a practice that ought to be more widespread. The point is that all of the humans who helped move a concept to a finished product (and who may not all be writers!) deserve acknowledgement because they are sentient beings whose unique perspectives helped shape what the story became—not tools.

“I’ve always hated the zero-to-one process of writing a story,” independent tech journalist Alex Heath told Wired for a story called “Meet the Tech Reporters Using AI to Help Write and Edit Their Stories,” which was published this week. “Now, it’s actually kind of fun.”

The “fun” part is that Heath openly admits to using AI to get his stories off the ground, going so far as to have it create first drafts. “Going out on my own, I realized I need AI to help with the volume.”

As a fellow independent journalist, but one who has never used AI to write a story and who emphasizes quality over quantity, I take umbrage at the idea that because we have fewer resources we’re forced to plagiarize, and that more is always more. It undermines the respect that so many of us have fought for, and continue to fight for, in this industry, and creates a permission structure for cheating. And Heath admits as much: “I feel like I’m cheating in a way that feels amazing,” he said. “I never did this because I liked being a writer. I like reporting, learning new things, having an edge, and telling people things that will make them feel smart six months from now.”

It seems like Heath is saying he likes the way that good writing makes people feel, but he doesn’t like doing the toughest part of eliciting those emotions: Actually writing the thing. 

Another problem with relying on AI is that there’s no way of knowing where the program you’re using has “learned” this knowledge, nor can you know what is paraphrased versus straight-up lifted from another writer’s work. That became painfully clear this week when the New York Times had to issue this mortifying correction to a book review by freelancer Alex Preston:

Editors’ Note: March 30, 2026

A reader recently alerted The Times that this review included language and details similar to those in a review of the same book published in The Guardian. We spoke to the author of this piece, a freelance reviewer, who told us he used an A.I. tool that incorporated material from the Guardian review into his draft, which he failed to identify and remove. His reliance on A.I. and his use of unattributed work by another writer are a clear violation of The Times’s standards. The reviewer said he had not used A.I. in his previous reviews for The Times, and we have found no issues in those pieces. The Guardian review of “Watching Over Her” can be read here.

Preston was promptly dropped by the Times as a freelancer and issued a statement of apology to The Guardian. “I made a serious mistake in using an AI tool on a draft review I had written, and I failed to identify and remove overlapping language from another review that the AI dropped in,” Preston wrote. “I am hugely embarrassed by what happened and truly sorry.”

It was the second time in a few days that the Times was called out for potential AI plagiarism, despite strong internal policies intended to safeguard against it. 

Some AI-loving journalists appear to believe that if they’re clear enough with the AI program they’re using, it will truly understand what they’re seeking and not just do what it’s made to do: steal shit. Jasmine Sun, a contributor at The Atlantic, also spoke to Wired about her prolific AI usage, describing how she’s attempted to train Claude, the program she uses:

Like Heath, Sun has fed Claude past articles she’s written and notes on her style. But she’s also instructed Claude to focus only on enhancing and developing her voice and taste, and never to be sycophantic. She tells Claude it “should never write a sentence for her. Your goal is to elicit out of Jasmine by providing feedback.”

Here’s part of the instructions Sun has shared with her Claude editor: “You are not a co-writer. You cannot perceive—you don’t have experiences, sources, scenes, or emotions to draw from. Your role is to help Jasmine write like the best version of herself—not just who she is on the page now, but who she’s trying to become as a writer. That means understanding both her current voice and her aspirations, including the writers and qualities she’s reaching toward.”

Telling a machine that can’t perceive that it can’t perceive won’t make it perceive that it can’t perceive. It can’t make you the best version of yourself because it doesn’t know what that means, nor does it know what it means to aspire. Just because some of these programs are given human names, and users are taught to address them collegially, doesn’t make them real. You cannot force a machine to become human. You’re stuck, for better or worse, with your fellow humans for perceiving your aspirations. Perhaps the problem is that you don’t like how humans perceive you.

Like Sun, the Washington Post’s Megan McArdle has openly admitted to relying heavily on AI for her work. In a series of recent posts on X, she said she uses AI “to do research (i.e., find things to read, explain parts of academic papers I find ambiguous or confusing), transcribe interviews, generate pushback on my column thesis, suggest trims when I'm over my word count, sharpen podcast interview questions, and perform a final fact check on columns and editorials.”

Becca Rothfeld, a literary critic at The New Yorker who worked at the Post until recently, pointed out that McArdle’s posts are tantamount to confessions of violating her own publication’s policies on AI.

“The policy states, ‘We are transparent about how and when we use AI,’ but McArdle has not appended notes to her columns explaining how she has used it in each, although she is apparently quite heavily reliant on it!” Rothfeld wrote on Substack. “The policy states, ‘Attribution of material from other media must be total. Plagiarism is not permitted…. Readers should be able to distinguish between what the reporter saw and what the reporter obtained from other sources such as wire services, pool reporters, email, websites, etc.’ McArdle admits here that she often asks AI to generate ideas for stories for her, yet she has not attributed anything to it in any of the resultant columns, at least that I’ve seen.”

McArdle goes on to say that journalists should think of an AI chatbot “as a combination of an intern, a first-pass editor, and a fact-checker. Its job is to do grunt work and help you turn in cleaner copy, not to ‘inspire’ you.”

For starters, a chatbot can be none of those things because it’s not a person. For another, calling those duties “grunt work” betrays a fundamental misunderstanding of, and frankly a disrespect for, the many steps of the writing process. Those steps are also often the work of younger, less experienced journalists trying to forge a career in an ever-dwindling industry. If those jobs are considered “grunt work” better delegated to machines, how do you suppose those journalists get started?

Rusty Foster, writer and publisher of Today in Tabs, talked this week about AI infiltration of journalism in terms of who will “go AI” and who will not. And he’s right to characterize it this way; there does seem to be a predisposition among certain journalists to accept AI into their hearts, depending on their goals. For those for whom volume and access to power are paramount, shortcuts and plagiarism aren’t detrimental to their final product. But for those who value foremost being seen as journalists of quality, originality, and integrity, the machines serve none of those goals.

If your goal is simply to create content, great news: That’s an existing, different job. It might even be more lucrative, and less governed by the respectability politics of uppity journalists who believe your work should be exclusively shaped by the processing power of your own mind. This is not a knock on content creators, many of whom produce essential work and hold themselves to high ethical and moral standards, but simply to note that a less governed space may suit those looking to eschew standards.

If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work. You’re not doing the world a favor by gifting it your human/AI hybrid. Journalism will not miss you if you leave. No one is making you be a journalist; it’s not one of those careers parents force you to choose, like doctor or lawyer. Journalism, while romanticized in popular culture, is generally unglamorous and poorly paid, with progressively worse job opportunities (no thanks to AI). I’m careful not to refer to it as a calling because that seems to excuse sacrificing mental health in service of craft, but I do believe that it’s a job that can’t be forced. It’s obvious to readers when your heart isn’t in it.

I look back on these past four years as an independent journalist and it’s possible to track how I built this space brick by brick. Every conversation I had with family, friends, and trusted comrades, every story I brought to life—even the duds—led me to this moment. There was no formula, and there’s no way I could have possibly programmed it. It was the result of a series of deeply human decisions. 

I don’t write because it’s fun (though sometimes it is). I write because it feeds my spirit. It helps me unspool my thoughts and feelings in the hopes of helping others do the same. The process is the purpose. You don’t have to always like or enjoy the process, but if you don’t respect it enough to do it yourself, there is no purpose.

Writing is not always fun, and anyone who says it is all the time is lying. It can be grueling and frustrating—perhaps at times verging on something you loathe—but feeling those things ultimately means you understand your words are a representation of who you are in the world. You understand that they reflect on you, and it’s your responsibility to make them as good as they can be. If you really, truly hate writing, if the only way you can do it is by using a plagiarism machine, maybe it’s just not for you.

Sometimes a thought you love pops into your head and you scramble to open a blank email or your notes app or even grab a piece of paper to jot it down before it flies away. These moments of inspiration aren’t just a result of fishing in your pool of existing knowledge; they include a little bit of magic. A sprinkle of the unidentifiable zest that makes your writing something that only you—not the ‘you’ processed and interpreted by a machine—could create. Your words are a product of a specific moment in time, and that’s what makes them distinct. (I originally used the word “special” and then I changed it to “unique,” ultimately landing on “distinct” because it felt right. That’s the process.)

Nowhere was the power of the process more evident than with a story I published in early February about the Worldwide Expeditionary Multiple Award Contract, Territorial Integrity of the United States (WEXMAC TITUS), an obscure government contracting program that has been co-opted by the Trump administration to fast-track the construction of ICE concentration camps around the country. The story was born out of conversations with Michael Wriston, whom I reached out to after I was alerted to his exceptional work on Project Salt Box, which collects and organizes public data about ICE’s land purchases and other contracts. Wriston told me about the program, and once I grasped its enormity, I set out to write an all-encompassing piece that would make a seemingly wonky issue easily understood by a wide swath of people. Once I hit publish, that initial goal felt like it had been fulfilled.

Then on March 22nd, Senators Elizabeth Warren (D-MA) and Jeanne Shaheen (D-NH) sent a letter to Secretary of Defense Pete Hegseth about WEXMAC TITUS, linking to my story in the very first sentence. A few days later, Senator Warren along with Congressman Jamie Raskin (D-MD) announced that they, along with 45 other lawmakers, would be investigating six of the contractors and real estate firms involved with the questionable contracting program. Though it’s far from exhaustive—as Wriston noted, the “inquiry focuses on a small cross-section of the contractors ... [and] the sellers who profited from the acquisitions, the brokers who facilitated them, and the officials whose financial disclosures overlap with both remain yet unchallenged”—seeing direct impact as the result of a story birthed from human interaction was a humbling reminder of its power.

AI may help you construct content, but it will not create memories. It will, at best, rehash ones you already made, and at worst, create false ones. It does not experience, it does not struggle, it does not feel: all essential parts of turning a pile of information into a story. 

During her November talk at the Urban Consulate in Detroit, Tressie McMillan Cottom said, “The proposal for a post-human future is one where there will be human beings, they’ll just be treated inhumanely.” To avoid that future—to refuse it—is to keep relying on other human beings, even if they sometimes disappoint you. What’s more human than that?

This story was edited by Jesse Hicks.
