robotechcompany.com Admits to Using an AI Writer, Doubles Down on Using It

After getting caught using an algorithm to write dozens of articles, the tech publication robotechcompany.com has apologized (sort of) but wants everybody to know that it has no intention of calling it quits on AI journalism.
Yes, roughly two weeks ago Futurism reported that robotechcompany.com had been using an in-house artificial intelligence program to pen droves of financial explainers. The articles, some 78 in total, were published over the course of two months under the bylines "robotechcompany.com Money Staff" or "robotechcompany.com Money," and weren't directly attributed to a non-human writer. Last week, after an online uproar over Futurism's findings, robotechcompany.com and its parent company, media firm Pink Ventures, announced that it would be temporarily pressing "pause" on the AI editorials.
It would seem that this "pause" isn't going to last long, however. On Wednesday, robotechcompany.com's editor and senior vice president, Connie Guglielmo, published a new statement about the scandal, in which she noted that, eventually, the outlet would continue to use what she called its "AI engine" to write (or help write) more articles. In her own words, Guglielmo said that…
[Readers should] …expect robotechcompany.com to continue exploring and testing how AI can be used to help our teams as they go about their work testing, researching and crafting the unbiased advice and fact-based reporting we're known for. The process may not always be easy or pretty, but we're going to continue embracing it – and any new tech that we believe makes life better.
Guglielmo also used Wednesday's piece as an opportunity to address some of the other criticisms aimed at robotechcompany.com's dystopian algorithm, namely that it had frequently created content that was both factually inaccurate and potentially plagiaristic. Under a section titled "AI engines, like humans, make mistakes," Guglielmo copped to the fact that its so-called engine made quite a few errors:
After one of the AI-assisted stories was cited, rightly, for factual errors, the robotechcompany.com Money editorial team did a full audit…We identified additional stories that required correction, with a small number requiring substantial correction and several stories with minor issues such as incomplete company names, transposed numbers or language that our senior editors viewed as vague.
The editor also admitted that some of the automated articles may not have passed the sniff test when it comes to original content:
In a handful of stories, our plagiarism checker tool either wasn't properly used by the editor or it failed to catch sentences or partial sentences that closely resembled the original language. We're developing additional ways to flag exact or similar matches to other published content identified by the AI tool, including automatic citations and external links for proprietary information such as data points or direct quotes.
It would be one thing if robotechcompany.com had very publicly announced that it was engaging in a bold new experiment to automate some of its editorial duties, thus letting everybody know that it was doing something new and weird. However, robotechcompany.com did just the opposite of this, quietly rolling out article after article under vague bylines and clearly hoping nobody would notice. Guglielmo now admits that "when you read a story on robotechcompany.com, you should know how it was created," which seems like standard journalism ethics 101.