This week in AI and ML news: AI plays a role in studying representation on TV, assessing heart disease risk, and more.
Author’s Note
There’s been plenty to talk about in the worlds of artificial intelligence and machine learning this year, but generative AI has proven perhaps the single hottest topic. Solutions like DALL-E 2 have wowed people everywhere and raised all sorts of questions about the nature of creativity and the future of art.
Plainsight’s Co-Founder and CTO, Logan Spears, joined me and co-host Paul Davenport to discuss DALL-E 2, the tech behind it, and its potential implications for the art world in the first AI in Plainsight conversation. Listen to the chat below.
AI News
Studying Representation Across 12 Years of Scripted TV
See It, Be It: What Families See on TV, a new study co-sponsored by Google Research, the Geena Davis Institute (GDI), and the University of Southern California’s Signal Analysis and Interpretation Laboratory (SAIL), shares findings from analyzing the nation’s top-ten scripted TV shows for each year from 2010 to 2021. The study is a new milestone in Google’s pioneering efforts to study on-screen representation with AI.
Davis, an Academy Award-winning actress, founded her namesake organization in 2004 to take a closer look at age, race, and gender representation in film and television. Her work with GDI earned her an honorary Oscar this year and makes the institute a fitting partner for Google Research’s Media Understanding for Social Exploration (MUSE) project, which similarly focuses on representation as a way of promoting equity and diversity on screens.
MUSE researchers developed an AI-powered solution capable of analyzing the full dataset, around 440 hours of video, in less than 24 hours. Altogether, the ML models encountered more than 12 million faces. The study’s findings suggest progress but also point to significant representation gaps. Women of color, for example, hold far more speaking roles than they did a decade ago, but are still far less likely than white men to speak on screen. Google hopes to collaborate with USC and GDI researchers again to examine media from across the globe. Learn more about the project on Google’s AI blog.
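Google hasn’t shared MUSE’s code, but the bookkeeping behind a number like “12 million faces” can be sketched in a few lines. The example below is a minimal, hypothetical illustration using OpenCV’s bundled Haar cascade and a made-up video filename; a production study would use far stronger detectors plus additional models for apparent gender, age, and speaking time.

```python
# Minimal sketch, not Google's MUSE pipeline: sample frames from a video and
# tally face detections with OpenCV's bundled Haar cascade detector.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def count_faces(video_path: str, sample_every: int = 30) -> int:
    """Return the number of faces detected, checking one frame per `sample_every`."""
    capture = cv2.VideoCapture(video_path)
    total, frame_index = 0, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            total += len(faces)
        frame_index += 1
    capture.release()
    return total

if __name__ == "__main__":
    # Hypothetical local file; the study processed roughly 440 hours of footage.
    print(count_faces("episode_s01e01.mp4"))
```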
Can Doctors Assess Heart Attack Risk from a Single Chest X-Ray?
Research presented at this week’s meeting of the Radiological Society of North America suggests that artificial intelligence has the potential to support diagnosticians in assessing patients’ risk of cardiovascular disease. Developed using nearly 150,000 chest X-rays from more than 40,000 patients, the model has proven capable of recognizing warning signs that the naked eye may miss. Lead researcher Dr. Jakob Weiss works with both Massachusetts General Hospital and Harvard’s AI in Medicine program, and he notes that fully understanding AI-derived diagnoses and explaining them to patients remains a challenge.
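The team’s exact architecture isn’t described here, so the following is only a hedged sketch of the general recipe such work tends to follow: fine-tuning an ImageNet-pretrained network to separate chest X-rays into lower- and higher-risk groups. It assumes PyTorch and torchvision, and the preprocessing, label scheme, and training step are placeholders rather than the published method.

```python
# Hedged sketch of the general approach, not the published model: fine-tune an
# ImageNet-pretrained ResNet to sort chest X-rays into lower- vs. higher-risk groups.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Start from a pretrained backbone and swap in a two-class classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

# How a single X-ray (as a PIL image) would be turned into a model input.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """Run one optimization step on a batch of preprocessed X-ray tensors."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```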
A patient’s likelihood of developing cardiovascular disease is typically estimated using the atherosclerotic cardiovascular disease (ASCVD) risk score, introduced in 2013 and refined in 2018 to take medical history into account. Dr. Donald Lloyd-Jones, who helped introduce the scoring system, spoke highly of the new study and of AI’s potential. “This is exactly the kind of application that artificial intelligence is best for,” he noted. “We need to continue to do things like this to really understand if we can find, particularly, patients who would otherwise slip through the cracks.”
Check out a press release offering an overview of the research so far and read more about AI’s potential for healthcare applications.
MTA Deploys AI to Keep Buses on the Road
The Big Apple’s Metropolitan Transportation Authority (MTA) has spent the last two years testing an AI solution its creators describe as a high-tech version of the check engine light. When it comes to auto repair, early detection is essential for keeping costs and damage to a minimum. The MTA faces an uncertain financial future, and detecting equipment issues earlier could mean significant savings on repairs. It should also mean smoother commutes and fewer riders stuck on broken-down buses. While the solution currently focuses on emissions systems alone, next steps include taking a closer look at both HVAC systems and engines.
New Yorkers should encounter the AI-equipped vehicles in action soon; the MTA plans to introduce the technology to about a quarter of its 6,000 buses this month. The organization is also in the process of replacing its entire fleet with greener electric buses, a transition it hopes to complete by 2040. Learn more about the MTA’s collaboration with Preteckt and the AI technology powering other impressive vehicles.
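Preteckt hasn’t published the details of its models, so the snippet below only illustrates the high-tech check engine light idea in general terms: learn what healthy emissions-sensor readings look like, then flag readings that drift outside that envelope. The feature names and numbers are invented, and scikit-learn’s IsolationForest stands in for whatever the vendor actually uses.

```python
# Illustrative sketch only: flag unusual emissions-sensor readings so a bus can
# be inspected before a minor fault becomes an expensive breakdown.
# Feature names and values are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Columns: exhaust temperature (deg C), NOx (ppm), particulate-filter back-pressure (kPa)
healthy_readings = rng.normal(loc=[250.0, 120.0, 8.0],
                              scale=[15.0, 10.0, 1.0],
                              size=(500, 3))

# Learn the "normal" envelope from healthy buses, then score new readings.
detector = IsolationForest(contamination=0.01, random_state=0).fit(healthy_readings)

new_readings = np.array([
    [252.0, 118.0, 8.2],    # typical reading
    [310.0, 195.0, 14.5],   # drifting out of the healthy envelope
])
for reading, flag in zip(new_readings, detector.predict(new_readings)):
    status = "inspect" if flag == -1 else "ok"   # predict() returns -1 for anomalies
    print(f"{reading} -> {status}")
```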
About the Author & Plainsight
Bennett Glace is a B2B technology content writer and cinephile from Philadelphia. He helps Plainsight in its mission to make vision AI success repeatable, scalable, and traceable for enterprises across industries.
Plainsight provides the unique combination of AI strategy, a vision AI platform, and deep learning expertise to develop, implement, and oversee transformative computer vision solutions for enterprises. Through the widest breadth of managed services and a vision AI platform for centralized processes and standardized pipelines, Plainsight makes computer vision repeatable and accountable across all enterprise vision AI initiatives. Plainsight solves problems where others have failed and empowers businesses across industries to realize the full potential of their visual data with the lowest barriers to production, fastest value generation, and monitoring for long-term success. For more information, visit plainsight.ai.