Medical journals are being flooded with shoddy AI-generated work, a growing threat to the scientific community that could undermine the value and trustworthiness of potentially life-saving health research. Papers citing hallucinated journals and studies have quickly become a common fixture, raising major concerns among those tasked with sifting through the deluge of new submissions.

In a high-profile new gaffe, the reputable New England Journal of Medicine (NEJM) was forced to retract a paper by two Beijing-based researchers about a man in China developing “bronchial casts” in his lungs following a wildfire, after it was discovered that the authors had used an AI tool to manipulate a photograph in the piece.

The offending photo shows almost pitch-black, particle-filled bronchial tissues that were cryogenically removed from the patient’s lungs. As MedPage Today reported, an 87-year-old man had been brought to the emergency department at the Beijing Tsinghua Changgung Hospital after extensive fire smoke inhalation, requiring the removal of bronchial tissues that were entirely plugged with smoke particulate matter, an extremely dangerous obstruction of the airway. (MedPage later pointed out the retraction in an editor’s note.)

However, what appears to be a metric measuring tape above the tissues in the photo raises immediate red flags, with the numbers along the scale following a nonsensical sequence — a classic hallmark of the use of an unsophisticated AI image generator.

The authors said the slip-up was a careless accident.

In a retraction note, they wrote that “we were unaware of Journal policies on image manipulation and had altered our submission by using an artificial intelligence (AI) tool to move the ruler to the top of the image.”

“We therefore wish to retract our image and case report,” the note reads.

The blunder should give researchers pause. If simply moving a ruler results in this kind of AI-generated carnage, what other manipulations, whether intentional or unintentional, are falling through the cracks?

Some users on social media also questioned the validity of the rest of the offending image, pointing out that there were too many segments of the senior patient’s lungs in the photo, raising the possibility that the image had been manipulated by AI in other ways.

Reached for comment, the authors provided a more detailed explanation of the snafu, along with additional context, but declined to send the original image for comparison.

In an appended editor’s note, the NEJM issued a stark reminder: “Authors are required to disclose any use of AI tools and any changes made to images.”

The journal’s editorial policies state that any use of “large language models, chatbots, or image creators” must be disclosed “at submission.”

“Authors should carefully review and edit all materials produced through the use of AI, to prevent the submission of authoritative-sounding output that is incorrect, incomplete, or biased,” the policy warns.

Meanwhile, editors across the scientific world are bracing themselves for an onslaught of slop.

“Science’s increased vigilance against corruption of the literature has become one more component in science and scientific publishing’s relentless pursuit of the truth,” the journal Science wrote in a January editorial. “Publishing carefully edited papers subjected to the judgment of multiple humans — and the retraction and correction of papers when the humans involved make mistakes — has never been more important.”


The post New England Journal of Medicine Retracts Paper Because Photo of Patient’s Insides Was Garbled by AI appeared first on Futurism.