These Algorithms Look at X-Rays—and Somehow Detect Your Race - A study raises new concerns that AI will exacerbate disparities in health care. One issue? The study’s authors aren’t sure what cues the algorithms use.



Millions of dollars are being spent to develop artificial intelligence software that reads x-rays and other medical scans in hopes it can spot things doctors look for but sometimes miss, such as lung cancers. A new study reports that these algorithms can also see something doctors don’t look for on such scans: a patient’s race.

The study authors and other medical AI experts say the results make it more crucial than ever to check that health algorithms perform fairly on people with different racial identities. Complicating that task: The authors themselves aren’t sure what cues the algorithms they created use to predict a person’s race.

Evidence that algorithms can read race from a person’s medical scans emerged from tests on five types of imagery used in radiology research, including chest and hand x-rays and mammograms. The images included patients who identified as Black, white, and Asian. For each type of scan, the researchers trained algorithms using images labeled with a patient’s self-reported race. Then they challenged the algorithms to predict the race of patients in different, unlabeled images.
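In machine-learning terms, that is a standard supervised classification pipeline. Below is a minimal sketch of the idea, assuming PyTorch and a torchvision DenseNet backbone (an architecture family common in chest x-ray work); `XrayDataset` is a hypothetical placeholder for the labeled datasets the researchers drew on, not their actual code:

```python
# Sketch of the setup described above: fine-tune an image classifier on
# scans labeled with self-reported race, then predict on unseen images.
# "XrayDataset" is a hypothetical stand-in yielding (image, label) pairs.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.models import densenet121

RACES = ["Asian", "Black", "White"]  # self-reported categories in the study

# Start from a generic pretrained backbone and swap in a 3-way head.
model = densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, len(RACES))

train_loader = DataLoader(XrayDataset(split="train"), batch_size=32, shuffle=True)
test_loader = DataLoader(XrayDataset(split="test"), batch_size=32)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:               # labels: self-reported race
    optimizer.zero_grad()
    loss_fn(model(images), labels).backward()
    optimizer.step()

# The challenge step: predict race for images the model has never seen.
model.eval()
with torch.no_grad():
    for images, _ in test_loader:
        predicted = model(images).argmax(dim=1)   # index into RACES
```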


Radiologists don’t generally consider a person’s racial identity—which is not a biological category—to be visible on scans that look beneath the skin. Yet the algorithms somehow proved capable of accurately detecting it for all three racial groups, and across different views of the body.

For most types of scan, the algorithms could correctly identify which of two images was from a Black person more than 90 percent of the time. Even the worst performing algorithm succeeded 80 percent of the time; the best was 99 percent correct. The results and associated code were posted online late last month by a group of more than 20 researchers with expertise in medicine and machine learning, but the study has not yet been peer reviewed.
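The “which of two images” phrasing matches the standard interpretation of area under the ROC curve (AUC), the metric typically reported for such classifiers: it equals the probability that the model scores a randomly chosen positive example above a randomly chosen negative one. A small sketch of that equivalence, with synthetic placeholder scores standing in for real model outputs:

```python
# AUC equals the chance the model ranks a random image from one group
# above a random image from another. 0.5 is coin-flip; 0.99 near-perfect.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
is_black = rng.integers(0, 2, size=1000)               # placeholder labels
scores = is_black * 0.8 + rng.normal(0.0, 0.5, 1000)   # placeholder scores

print(roc_auc_score(is_black, scores))                 # the AUC

# Pairwise reading of the same number: sample (positive, negative) image
# pairs and count how often the positive one gets the higher score.
pos, neg = scores[is_black == 1], scores[is_black == 0]
idx = rng.integers(0, [len(pos), len(neg)], size=(100_000, 2))
print((pos[idx[:, 0]] > neg[idx[:, 1]]).mean())        # converges to the AUC
```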



The results have spurred new concerns that AI software can amplify inequality in health care, where studies show Black patients and other marginalized racial groups often receive inferior care compared to wealthy or white people.


Machine-learning algorithms are tuned to read medical images by feeding them many labeled examples of conditions such as tumors. By digesting many examples, the algorithms can learn patterns of pixels statistically associated with those labels, such as the texture or shape of a lung nodule. Some algorithms made that way rival doctors at detecting cancers or skin problems; there is evidence they can detect signs of disease invisible to human experts.


Judy Gichoya, a radiologist and assistant professor at Emory University who worked on the new study, says the revelation that image algorithms can “see” race in internal scans likely primes them to also learn inappropriate associations.


Medical data used to train algorithms often bears traces of racial inequalities in disease and medical treatment, due to historical and socioeconomic factors. That could lead an algorithm searching for statistical patterns in scans to use its guess at a patient’s race as a kind of shortcut, suggesting diagnoses that correlate with racially biased patterns from its training data, not just the visible medical anomalies that radiologists look for. Such a system might give some patients an incorrect diagnosis or a false all-clear. An algorithm might suggest different diagnoses for a Black person and white person with similar signs of disease.
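One way auditors look for that failure mode is to break a diagnostic model’s error rates out by self-reported race and compare groups. A minimal sketch of such a check, using pandas and placeholder data (column names and values are illustrative, not from the study):

```python
# Audit sketch: compare false-negative rates (missed disease) across
# self-reported race groups. A model leaning on race as a shortcut can
# look accurate overall while failing one group far more often.
import pandas as pd

df = pd.DataFrame({                  # placeholder table, one row per patient
    "race":        ["Black", "Black", "White", "White", "Asian", "Asian"],
    "has_disease": [1, 1, 1, 1, 1, 0],
    "predicted":   [0, 1, 1, 1, 1, 0],
})

# False-negative rate per group: share of truly sick patients the model
# waves through. Large gaps between groups are the red flag.
sick = df[df["has_disease"] == 1]
fnr = 1 - sick.groupby("race")["predicted"].mean()
print(fnr)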

“We have to educate people about this problem and research what we can do to mitigate it,” Gichoya says. Her collaborators on the project came from institutions including Purdue, MIT, Beth Israel Deaconess Medical Center, National Tsing Hua University in Taiwan, University of Toronto, and Stanford.

Previous studies have shown that medical algorithms have caused biases in care delivery, and that image algorithms may perform unequally for different demographic groups. In 2019, a widely used algorithm for prioritizing care for the sickest patients was found to disadvantage Black people. In 2020, researchers at the University of Toronto and MIT showed that algorithms trained to flag conditions such as pneumonia on chest x-rays sometimes performed differently for people of different sexes, ages, races, and types of medical insurance.

Paul Yi, director of the University of Maryland’s Intelligent Imaging Center, who was not involved in the new study showing algorithms can detect race, describes some of its findings as “eye opening,” even “crazy.”

Radiologists like him don’t typically think about race when interpreting scans, or even know how a patient self-identifies. “Race is a social construct and not in itself a biological phenotype, even though it can be associated with differences in anatomy,” Yi says.

Frustratingly, the authors of the new study could not figure out how exactly their models could so accurately detect a patient’s self-reported race. They say that will likely make it harder to pick up biases in such algorithms.

Follow-on experiments showed that the algorithms were not making predictions based on particular patches of anatomy, or visual features that might be associated with race due to social and environmental factors such as body mass index or bone density. Nor did age, sex, or specific diagnoses that are associated with certain demographic groups appear to be functioning as clues.
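Experiments like these are essentially ablations: hide or remove a candidate cue, then re-test. One common probe of which anatomy matters is occlusion sensitivity, sketched below under the assumption of a trained PyTorch `model` and a single image tensor (both placeholders; the study’s own code differs):

```python
# Occlusion probe sketch: black out one patch at a time and record how
# much the race prediction changes. If no single region matters much,
# the signal is probably diffuse rather than tied to specific anatomy.
import torch

def occlusion_sensitivity(model, image, patch=32):
    """Per-patch drop in the top class's probability when it is hidden."""
    model.eval()
    with torch.no_grad():
        base = model(image.unsqueeze(0)).softmax(dim=1)
        cls = base.argmax().item()                    # class under scrutiny
        drops = []
        for y in range(0, image.shape[1], patch):
            for x in range(0, image.shape[2], patch):
                masked = image.clone()
                masked[:, y:y + patch, x:x + patch] = 0   # hide one square
                prob = model(masked.unsqueeze(0)).softmax(dim=1)
                drops.append((y, x, (base[0, cls] - prob[0, cls]).item()))
        return drops
```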

The fact that algorithms trained on images from a hospital in one part of the US could accurately identify race in images from institutions in other regions appears to rule out the possibility that the software is picking up on factors unrelated to a patient’s body, says Yi, such as differences in imaging equipment or processes.
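That test is known as external validation: freeze the model, then score it on held-out data from hospitals it never saw. A brief sketch, where `site_loaders` is a placeholder mapping of site names to DataLoaders, and labels are assumed binarized for the group being scored:

```python
# External-validation sketch: one AUC per outside hospital. Flat, high
# numbers across sites argue against scanner- or workflow-specific cues.
import torch
from sklearn.metrics import roc_auc_score

def auc_by_site(model, site_loaders):
    model.eval()
    results = {}
    with torch.no_grad():
        for site, loader in site_loaders.items():
            scores, labels = [], []
            for images, y in loader:           # y: 1 = group being scored
                scores.extend(model(images).softmax(dim=1)[:, 1].tolist())
                labels.extend(y.tolist())
            results[site] = roc_auc_score(labels, scores)
    return results
```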


----------------------------------------------------------------------------------------------------------------------------------------------------

Really? Antiracist researchers "cannot figure out" how this is possible, since "race is not a biological category"? This is a level of science denial that makes me want to punch a wall. How does anyone defend such an assertion? They will say: "white" and "black" — racial categories — aren't terms that have biological meaning, they're purely "social constructs." Obviously they're not just social constructs. They're also biological realities.
 
It's pathetic how they can't just admit race is not literally skin deep. Instead their own tools are wrong and need to be taught to ignore race in a medical context. Astounding.
 
*x-ray has a pack of Newports and a bottle of malt liquor clearly visible*

"We can't figure out how it knows what the race is"
 
> It's pathetic how they can't just admit race is not literally skin deep. Instead their own tools are wrong and need to be taught to ignore race in a medical context. Astounding.
If we had this technology 30 years ago, it would be hailed as groundbreaking. Innovative. A new way to personalize medical care.

It's horrifying that today, the more advanced we get, the dumber people are. They're throwing away critical science for politics. There's nothing racist about the technology, but still the scientists lie and pretend that these realities don't exist. Even when it's staring them in the face.
 
Bone size/structure/density is racist. Machines need to be taught that all bones are the same, despite what they detect.
 
This is basic information you learn in medical school. The three types of human skull variation, Caucasoid, Negroid, and Mongoloid, are used to differentiate the skulls of European, African, and East Asian humans, respectively. Identifying the race of the person being X-rayed will only provide more information to assist in treating said person, as there are variations in blood types, hemoglobin counts, etc., across populations. Doesn't mean we aren't the same species, doesn't mean we aren't all equal legally and morally speaking.

These people should really get a life instead of bitching about inane shit all the time.

Edit: Here's an episode of Forensic Files where the investigators use advanced computers (Edit: Sorry, they used physical facial reconstruction with clay. It's been a while since I've seen it. Even so, it shows the differences in human bone structure in general; we can tell there are differences with older techniques) to determine the race of an unknown person's skeleton, finding out who she was and therefore finding who killed her. Obviously fucking racist technology.
 


Bruh is it really considered racist to acknowledge that physical differences are real?
 
"somehow"
 
> Bone size/structure/density is racist. Machines need to be taught that all bones are the same, despite what they detect.
We're entering Dark Ages 2: Woke Boogaloo. Progress will be stifled or outright stopped; hell, we'll move backwards, not because muh God but because muh Feels.

It's honestly terrifying how fucking retarded our species can be.
 
What's interesting is that using another AI to analyze how this AI works is likely the only way to figure out what it is picking up on. I can almost guarantee a human will be unable to figure it out, since AIs don't think like humans. The associations it makes will likely only make sense to an AI.

As AIs advance, we'll eventually need AIs that are dedicated to bridging the machine-human gap in understanding by acting as translators.
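The nearest thing to that today is automated interpretability tooling. A rough sketch of one of the simplest techniques, input-gradient saliency, assuming a trained PyTorch `model` and an `image` tensor (both placeholders):

```python
# Saliency sketch: gradient of the top class's score with respect to the
# input pixels. Bright spots mark pixels the prediction is most sensitive
# to -- a crude, automated peek at what the model is "looking at".
import torch

def saliency_map(model, image):               # image: (channels, H, W)
    model.eval()
    x = image.detach().clone().unsqueeze(0).requires_grad_(True)
    model(x)[0].max().backward()              # top class's logit -> gradients
    return x.grad[0].abs().max(dim=0).values  # collapse channels to H x W map
```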
 
And then there will be endless hand-wringing over the question of whether or not the translator program is secretly racist and not ratting out the others.
Either the data is good, and potentially useful, or it isn't. The motives of who/whatever produced it don't matter if the result is correct and can be checked.
 
How do we apply this to analysing ancient skeletons such as those found at Pompeii & Herculaneum, or Mungo Man? It would certainly expedite the work of archaeologists in determining the hereditary origins of some specimens, like Ötzi.
 
> If we had this technology 30 years ago, it would be hailed as groundbreaking. Innovative. A new way to personalize medical care.
>
> It's horrifying that today, the more advanced we get, the dumber people are. They're throwing away critical science for politics. There's nothing racist about the technology, but still the scientists lie and pretend that these realities don't exist. Even when it's staring them in the face.
This is critical science, though. Critical Science Theory.
 
Tards in the thread need to fuck off with the cringe-ass skull meme. Just because you can be dumb as fuck on here doesn't mean you should.

And tards writing shit articles need to realize that hospitals are THE place where giving a shit about race is going to be important most often.
 
> What's interesting is that using another AI to analyze how this AI works is likely the only way to figure out what it is picking up on. I can almost guarantee a human will be unable to figure it out, since AIs don't think like humans. The associations it makes will likely only make sense to an AI.
>
> As AIs advance, we'll eventually need AIs that are dedicated to bridging the machine-human gap in understanding by acting as translators.
Weren't the SJWs the original ARTIFICIAL intelligence anyway? So, it seems about right that they now need SJWI to seek out those racist AI programs.
 