The revered sports magazine was caught publishing pieces generated by machines—but with seemingly ‘human’ bylines. The scandal raised an important question: Will future reporters be machines? More importantly, will it improve the business of news—or make it much worse?
Editor’s note: When there isn’t a big headline making news, we often pick a Big Story on a topic that we think will be interesting to you. We’d be just as happy to take requests from you. Do write to us at email@example.com. We’d also love to hear what you think of these kinds of less-newsy stories—like the complicated truth about narcissism, the ‘philanthropy’ of the very rich and the problem with pandas.
Researched by: Rachel John & Anannya Parekh
What happened with Sports Illustrated?
Meet Drew Ortiz: Late last month, Futurism published an explosive investigation that revealed many of the ‘writers’ on the SI website weren’t, well, human. For example, Drew Ortiz—whose bio read:
Drew has spent much of his life outdoors, and is excited to guide you through his never-ending list of the best products to keep you from falling to the perils of nature. Nowadays, there is rarely a weekend that goes by where Drew isn't out camping, hiking, or just back on his parents' farm.
Futurism discovered the guy’s profile photo on sale at a website that sells AI-generated portraits. Its sources inside the publication revealed there were many such fake writers pretending to be real—often churning out gibberish:
The AI authors' writing often sounds like it was written by an alien; one Ortiz article, for instance, warns that volleyball "can be a little tricky to get into, especially without an actual ball to practice with."
What SI said: The publication’s parent company Arena claimed that it had outsourced at least part of its content to a company called AdVon Commerce—but continued to insist the pieces were written by humans—using pseudonyms (?!):
AdVon has assured us that all of the articles in question were written and edited by humans… However, we have learned that AdVon had writers use a pen or pseudo name in certain articles to protect author privacy — actions we don't condone — and we are removing the content while our internal investigation continues and have since ended the partnership.
Key point to note: This is not the first time that an Arena property has run into this kind of trouble. Its biz offering, TheStreet, also had writers with “highly specific biographies detailing seemingly flesh-and-blood humans with specific areas of expertise—but with profile photos traceable to that same AI face website.” Most egregiously: TheStreet boasts that its team "is comprised of a well-rounded group of people who bring varying backgrounds and experiences to the table."
Not just Sports Illustrated: The SI scandal is just the most shocking given the magazine’s hallowed reputation. But plenty of respected names have been caught using AI without disclosure. Example: CNET—which used the misleading ‘CNET Money Staff’ as a byline. But, hey, at least no fake humans!
So they were caught lying…
The lying is one part of the story—and there are good reasons for it. In a recent survey, 75% of respondents said AI-generated articles are a “bad thing.” They want full disclosure—content labelled as machine-generated—but are then likely to trust it less. Funnily enough, people who distrust the media because it’s “biased” are just as likely to distrust a machine that churns out the news.
The bigger story: The AI newsroom story is bigger than the odd scandal. A US survey showed that 90% of newsrooms are using some form of artificial intelligence in news production, 80% in news distribution and 75% in news gathering. Many publications are now using machine-generated content—and are open about it.
Examples: USA Today publishes sports articles by ‘LedeAI’. Reuters uses AI to compose and test headline performance—while Bloomberg News is building a platform called BloombergGPT. The New York Times recently posted a job listing for an editor to serve as "newsroom generative AI lead." Buzzfeed went one step further—boasting about its AI shift to readers: “We’ll be developing content that is AI-native — cool new things that you couldn’t do at all without AI — and things that are enhanced by AI but created by humans.”
So what’s wrong with that?
The immediate problem: accuracy and quality. Gannett had to pause its AI experiment at USA Today and other publications because of serious errors. We all know that AI chatbots “hallucinate”—i.e. make stuff up. But presumably the machines will get better as the technology improves.
The very serious problem: the financial crisis in the media industry—which is driving a desperate pivot toward AI. All media—digital or print—are in serious trouble, and outlets are getting rid of their human employees:
In the United States, Gannett plans to cut its news division by a further 6%, losing around 200 staff. The Washington Post is discontinuing its 60-year-old print magazine, CNN is laying off hundreds of people, and NPR is preparing to make significant cuts as a result of a financial black hole.
Buzzfeed axed 180 employees—after rounds of brutal layoffs—and Morning Brew is laying off 14% of its workforce. Think The Guardian is doing well? Think again. It announced an operating loss for the last financial year of £69 million ($121 million)—and has shed more than 260 jobs this year.
The big picture problem: The old business model no longer works. Most people prefer to get their news for free—and on social media platforms. So news outlets don’t make money either through subscriptions—or through ad revenue, most of which is gobbled up by Insta, X etc. This is true even of publications that are doing very well:
And that's the rub: in the digital world large numbers of hits no longer necessarily equate to large profits. The Guardian is a case in point. Despite its financial troubles, the company's websites in the UK, Australia and the United States continue to grow in popularity, by some estimates garnering around 42 million visits a month. But that's still not enough.
Quote to note: Right now, news businesses have no viable model that supports a decent-sized newsroom—with even 30-40 human employees. As one proprietor despairs:
Commercial journalism and the media industry in the western world is in the middle of a tsunami, and there's no weather forecast that suggests that that tsunami is going to end any time soon. It may never end.
The unseen problem: Media ownership. As even good publications struggle, they are sold, then resold—and sold again—until they end up in the hands of some conglomerate that has zero investment in journalism. Take Sports Illustrated, for example:
It repeatedly downsized, switched from a weekly to a monthly publication schedule and was sold by its owner, Time Inc., to a company called Authentic Brands Group, or ABG, which is in the business of inking lucrative licensing deals. ABG then sold a 10-year license to publish Sports Illustrated to our new friends at Arena Group.
These companies have no real interest in the product itself—of any kind—as the New York Times notes:
Authentic Brands’ business model mostly involves buying fashion brands that are down on their luck or in bankruptcy — Brooks Brothers, Aéropostale, Forever 21 — and then shedding legacy commitments, cutting costs and operating the brand while banking on its name recognition.
That’s exactly what Authentic has done with SI—slapping its name on nutritional supplements, even a casino. That side of the business is roaring. And Arena is sucking the publishing end dry: “hundreds of sites dedicated to individual teams — helmed by non-staff writers paid small sums — were created with little oversight.”
Enter, the AI writer: This business model is all about churning out vast volumes of content to game the SEO algorithm—while slashing costs. No one does that better than a machine—it offers “speed, scale and spread” at a fraction of the price.
Brian Merchant explains this succinctly in the LA Times—and he is worth quoting at length:
And here’s where the AI comes in. Not as a tool deployed by forward-looking executives eager to embrace the future, but as a last-ditch effort to extract the final bits of value from the pieces of something that’s already broken. Sports Illustrated has already slashed full-time staff, spun up a content mill with freelancers pumping out content for a fraction of the price, and let editorial standards sink into the gutter. The AI play is an arrow out of the same quiver.
It’s increasingly clear that to those in the content generation business… AI has become popular as a relatively cheap, wholly unimaginative way to attempt to generate value with the lowest amount of effort or investment.
The bottomline: The villain here is not AI—which can help humans produce brilliant journalism. The problem lies in an industry driven by mindless monetisation. The humans who will suffer most are the most replaceable—junior to mid-level employees. Many will be forced out—while others will have to work for terrible pay in dismal conditions. The star reporters, editors and celebrity writers will be just fine.
But what happens to an industry that crushes future generations of journalists? Or as Samantha Floreani asks: “What happens when the web becomes dominated by so much AI generated content that new models are trained not on human-made material, but on AI outputs? Will we be left with some kind of cursed digital ouroboros eating its own tail?”
Samantha Floreani in The Guardian and Brian Merchant in the Los Angeles Times have the most incisive takedowns of the current business models—and their effects. Khaled Diab in Al Jazeera and Axios look at the big picture—the good and the bad that AI will bring. Nieman Lab has a good roundup of the ways machines can help reporters—rather than destroy their futures. The Futurism scoop on Sports Illustrated is worth a read—as are the Reuters Institute’s predictions on the future of news. ABC News Australia does an excellent job summing up the financial crisis.