As AI is deployed in clinical centers across the U.S., one important consideration is ensuring that models are fair and perform equally well across patient groups and populations. To better understand the fairness of medical imaging AI, a team of researchers from the Massachusetts Institute of Technology (MIT) and Emory University trained over 3,000 models spanning multiple model configurations, algorithms, and clinical tasks. Their analysis of these models reinforced some previous findings about bias in AI algorithms and uncovered new insights about deploying models in diverse settings.
Read more about the major takeaways from their study, published recently in Nature Medicine.