This week in AI & Machine Learning: Reorganization at Meta, MIT research into ML bias, and the most exciting computer vision use case of them all.  

A Note from the Author 

Tuesday was Autonomous Car Day, an occasion to celebrate what is perhaps the most exciting of all computer vision use cases. Check out our blog on the subject to learn more about how object detection algorithms combine with advanced cameras and sensors to take autonomous vehicles from science fiction to real-world roads.

Artificial Intelligence News

Changes for Meta’s AI Wing

It’s been a busy news week for Facebook’s parent company. On Wednesday, June 1st, Sheryl Sandberg announced that she’ll be stepping down as COO this year after nearly a decade and a half with the organization. The next day, Meta announced a number of additional organizational changes related to its dedicated AI function. Meta’s current VP of AI, Jerome Pesenti, is leaving this month, and the organization will decentralize its approach to AI. Rather than operating a dedicated AI sub-organization, Meta will spread its thousands of AI personnel across various business units and product groups. Facebook AI Research (FAIR), for example, will remain structurally and philosophically unchanged while becoming part of Reality Labs Research. Read the full statement from CTO Andrew Bosworth for additional details on the reorganization at Meta. 

Researchers Identify Bias in ML Explanation Methods

It’s not just engineers, researchers, and AI enthusiasts who deploy machine learning models. ML is valuable for helping people and organizations in a range of industries make important decisions. Since the reasoning behind a model’s decision making is often challenging even for experts to comprehend, explanation methods are a useful resource for experts and novices alike. 

Explanation methods effectively mimic larger ML models, creating simplified approximations of their predictions. An admissions officer for a law school might use an explanation method to approximate the predictions of an ML model designed to determine which applicants are most likely to pass the bar exam. A good explanation model has fidelity, i.e., closeness to the original model’s predictions. But what happens when fidelity is stronger for one group of people than another? Explanation methods can wind up encouraging biased decision making. 
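To make the idea concrete, here is a rough sketch of how per-group fidelity might be measured: a "black-box" model, a simple surrogate trained to mimic it, and the agreement between the two computed separately for each subgroup. Everything below, including the synthetic data and the choice of models, is a hypothetical illustration, not the researchers' actual code.

```python
# Hypothetical sketch: a "black-box" model, a simple surrogate that
# mimics it, and per-subgroup fidelity. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
group = rng.integers(0, 2, size=n)  # 0/1 subgroup membership
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# The larger model whose predictions we want to explain
blackbox = LogisticRegression().fit(X, y)
bb_pred = blackbox.predict(X)

# A shallow tree trained to mimic the black-box predictions
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)
s_pred = surrogate.predict(X)

# Fidelity = fraction of points where the surrogate agrees with the black box
fidelities = {g: float((s_pred[group == g] == bb_pred[group == g]).mean())
              for g in (0, 1)}
gap = abs(fidelities[0] - fidelities[1])
print(fidelities, gap)
```

If the gap between the two fidelity values is large, decisions guided by the surrogate will be systematically less faithful for one subgroup than the other.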

A team at MIT’s Computer Science and Artificial Intelligence Laboratory led by Aparna Balagopalan has identified considerable fidelity gaps across a number of common explanation models used for high-stakes predictions, such as whether or not an ICU patient is expected to live. Balagopalan and her team were able to reduce some of these gaps using various machine learning approaches. When possible, they used datasets with equal numbers of samples from each subgroup and focused extra attention on areas that were prone to gaps. They could not eliminate these gaps altogether and concluded their research with a reminder to choose explanation models with care.  
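One of the mitigations mentioned above, using equal numbers of samples from each subgroup, might be sketched as follows. The group labels and sizes here are synthetic stand-ins, not the team's data or code.

```python
# Hypothetical sketch: draw an equal number of samples from each subgroup
# before fitting an explanation model. Group labels here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)  # in practice, often imbalanced

idx0 = np.flatnonzero(group == 0)
idx1 = np.flatnonzero(group == 1)
k = min(len(idx0), len(idx1))

# Equal-sized random sample from each subgroup
balanced_idx = np.concatenate([rng.choice(idx0, size=k, replace=False),
                               rng.choice(idx1, size=k, replace=False)])
print(len(balanced_idx))  # 2 * k rows, half from each subgroup
```

The explanation model would then be fit on the rows selected by `balanced_idx`, so neither subgroup dominates the surrogate's training signal.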

Predicting Opioid Overdoses with Machine Learning

Data from the Centers for Disease Control and Prevention (CDC) show that more than 75,000 Americans died from opioid overdoses between April 2020 and April 2021, up 28.5% from the previous year. Policymakers, health systems, and organizations from coast to coast agree that opioid abuse and addiction represent a major and growing crisis. A new study published in Lancet Digital Health explains how a team of researchers from several universities developed and validated a machine learning model capable of accurately predicting the overdose risk of Medicare patients in Arizona and Pennsylvania. Prior to this research, it was unclear whether single-state data from the past could support accurate predictions years later or in a new location. The success of this research offers hope for more wide-reaching efforts in the future.  

Join our Community

See you next week! Until then, keep the conversation going on Plainsight’s AI for All Slack Channel.

About the Author & Plainsight

Bennett Glace is a B2B technology content writer and cinephile from Philadelphia. He helps Plainsight in its mission to make vision AI accessible to entire enterprise teams.

Plainsight’s vision AI platform streamlines and optimizes the full computer vision lifecycle. From data annotation through deployment, customers can quickly create and successfully operationalize their own vision AI applications to solve highly diverse business challenges.

View All Blogs