Concerns raised over autism prediction paper

Ambiguous algorithm: The study’s authors claimed that their model could distinguish between autistic and non-autistic people, but did not state which variables the model used.

Photography by Richard Drury

A machine-learning study that claims to predict autism diagnoses relies on cloudy methodology and vague language, according to researchers who have raised concerns about it.

The paper describes an algorithm that can distinguish between the brains of autistic and non-autistic people with an accuracy of nearly 90 percent, based on MRI data from the Autism Brain Imaging Data Exchange. Yet it does not specify which variables the model used to make that prediction, says Allison Jack, assistant professor of psychology at George Mason University in Fairfax, Virginia.

“They didn’t provide a description that anyone could really use to replicate their model. It was completely unclear what they were doing,” Jack says. “What is this model using to make the determination between the autistic group and non-autistic group? The accuracy of the predictions, too, seems suspiciously high to me.”
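
What Jack is describing is a routine expectation in machine-learning reporting: authors typically name the input features and show which ones drive the model's predictions. As a rough, hypothetical illustration (in Python with scikit-learn, using synthetic data and made-up feature names, not anything from the paper), that kind of reporting can look like this:

# Hypothetical sketch: synthetic data stands in for real imaging features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = [f"connectivity_{i}" for i in range(10)]  # placeholder names, not the paper's variables
X = rng.normal(size=(200, 10))       # 200 synthetic "participants", 10 synthetic features
y = rng.integers(0, 2, size=200)     # synthetic binary labels

clf = RandomForestClassifier(random_state=0).fit(X, y)

# Rank features by importance so readers can see what the model relies on.
for name, importance in sorted(zip(feature_names, clf.feature_importances_),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {importance:.3f}")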

Dorothy Bishop, emeritus professor of developmental neuropsychology at the University of Oxford, shared her concerns about the paper earlier this month on PubPeer, a website where scientists comment on published research.

Bishop flagged “rather vague” language that appears throughout the paper, including phrases such as “A scale-free network is called a neuronal connection between neurons, as it changes with enhancement,” and “autism is one of the heterogeneous and psychological growth disorders.”

Other language in the paper is offensive and outdated, Jack says, such as describing autism as a “disease” and referencing Asperger’s syndrome and pervasive developmental disorder, which are no longer recognized as separate diagnoses.

The researchers also use the terms “validation” and “testing” interchangeably, even though the terms refer to two distinct stages in machine-learning research, says Martin Styner, professor of psychiatry and computer science at the University of North Carolina in Chapel Hill. “This is pretty basic, and any review would have pointed this out,” he says. “So I doubt any major peer review has been done on this paper.”
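
For readers outside the field, the distinction Styner draws is part of the standard workflow: a validation set is used to tune the model, and a separate, held-out test set is scored only once to report final performance. A minimal sketch of that workflow, in Python with scikit-learn and synthetic data (again, not the paper's pipeline):

# Minimal sketch of the conventional train/validation/test split, on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # placeholder features
y = rng.integers(0, 2, size=200)      # placeholder binary labels

# 60 percent train, 20 percent validation, 20 percent test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# The validation set guides model selection ...
best_model, best_val_acc = None, -1.0
for C in (0.01, 0.1, 1.0, 10.0):
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    val_acc = accuracy_score(y_val, model.predict(X_val))
    if val_acc > best_val_acc:
        best_model, best_val_acc = model, val_acc

# ... and the test set is touched only once, to report final accuracy.
print("test accuracy:", accuracy_score(y_test, best_model.predict(X_test)))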

In addition, the investigators did not share their code, even though doing so is standard practice in machine-learning research, Styner says. “Without seeing your code, I don’t trust your results.”

Spectrum twice emailed the study’s lead investigator, Muhammad Shuaib Qureshi, assistant professor of computer science at the University of Central Asia in Bishkek, Kyrgyzstan, for comment but has not heard back.

Another study investigator, Junaid Asghar, has had two other papers critiqued by researchers on PubPeer: one for the use of tortured phrases like “bosom disease” instead of “breast cancer,” and another for potential image duplication. Asghar also did not respond to Spectrum’s request for comment.

The study was published on 10 July in the Journal of Healthcare Engineering, a Hindawi title that shut down on 2 May alongside three other titles because they were overrun by paper mills — organizations that sell fraudulent research papers — according to an announcement from Hindawi.

Papers submitted before the closure can still be published if the authors wish, but the journal is closed to new submissions, according to a spokesperson from Wiley, the parent company of Hindawi. The journal was one of the 19 Hindawi titles removed from Clarivate’s Web of Science index in March.

The autism prediction study appeared in a special issue titled “Advances in Feature Transformation based Medical Decision Support Systems for Health Informatics.” So far, 23 of the 90 articles in the issue have been retracted.

Wiley is currently investigating the paper in accordance with guidelines from the Committee on Publication Ethics, according to a spokesperson.

“We are responding to concerns within this special issue and are continuing to issue retractions, where appropriate,” a Wiley spokesperson told Spectrum in an email.

Wiley has retracted over 500 papers and plans to retract 1,200 more because of the paper mill infiltration, which was largely orchestrated by “fraudulent” guest editors of special issues, according to a Scholarly Kitchen blog post written by Jay Flynn, executive vice president and general manager, research at Wiley.

Special issues are often the target of paper mills because nefarious guest editors can accept fake papers en masse, according to a report by the Committee on Publication Ethics. Yet some publishers curate large numbers of special issues in order to collect more article processing charges, says Guillaume Cabanac, professor of computer science at the University of Toulouse in France. “It’s like a Ponzi scheme.”

The lead guest editor of the special issue, Liaqat Ali, assistant professor of electrical engineering at the University of Science & Technology, Bannu, in Pakistan, did not respond to Spectrum’s two requests for comment.

Despite her concerns, Bishop does not expect the paper to sway the field. “I don’t think this is too serious a problem for autism researchers — the papers would not get picked up by serious academics I think,” she told Spectrum in an email. “They are far too vague and outdated and they are published in outlets that aren’t widely read.”

Jack agrees. The Journal of Healthcare Engineering is “certainly not one that I regularly follow for my hard-hitting autism research,” she says.
