This week in AI & Machine Learning: A deep learning model that could revolutionize drug discovery, AI that learns like an infant, and impressive ML-powered earbuds

A Note from the Author 

I’m back! Big thanks to Paul Davenport for handling these weekly updates in my absence. As you’ve all seen, it’s been an exciting couple of weeks for Plainsight, with blogs and interviews sharing insights from our team and showcasing some of our work. The good news continues this week, as Plainsight just announced it has expanded its relationship with Google Cloud. Read our press release to learn more about what this means for our team and our customers.

Our latest blog looks at the difference between data-centric and model-centric AI and suggests that enterprises don’t really need to choose one or the other. Check it out to see how Plainsight enables impressive results by combining both approaches. 

Here I am on my recent trip to the Pacific Northwest, looking out over beautiful Crater Lake, the nation’s deepest! And now, on to some deep thoughts on deep learning…

Artificial Intelligence News

Deep Learning for Speedier Drug Discovery

It’s impossible to quantify the total number of molecules in the known universe, but scientists do have an estimate of how many could have the potential to fight and prevent disease: a novemdecillion, or one followed by 60 zeroes. For comparison, the Milky Way Galaxy contains a paltry 100 billion or so stars. Sifting through this unfathomable number of potentially life-saving molecules drags out the processes of discovering and developing drugs and makes them incredibly costly. Since roughly 90% of drug candidates ultimately fail once they reach human trials, drug companies often recoup these costs by raising drug prices for patients.

To date, there simply haven’t been computational models for drug design capable of quickly dealing with so much data – until now. MIT researchers have created a deep learning model, EquiBind, that predicts how drug-like molecules will bind to target proteins more than 1,000 times faster than even the best existing models. Learn more about how a process called “blind docking” makes EquiBind so revolutionary ahead of the team’s upcoming presentation at the International Conference on Machine Learning.
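To make the contrast concrete, here’s a minimal, purely illustrative sketch of where that speedup comes from (the function names are hypothetical, and this is not EquiBind’s actual code): classical docking samples and scores thousands of candidate poses, while an EquiBind-style model predicts a pose in a single forward pass.

```python
# Illustrative sketch only -- hypothetical names, not the EquiBind codebase.
import random

def binding_energy(protein: str, ligand: str, pose: int) -> float:
    """Stand-in for the physics-based scoring a classical docking tool
    must run on every candidate pose."""
    random.seed(hash((protein, ligand, pose)))
    return random.random()

def classical_docking(protein: str, ligand: str, n_poses: int = 10_000) -> int:
    """Sample thousands of candidate poses and keep the lowest-energy one --
    accurate, but the search dominates the runtime."""
    return min(range(n_poses), key=lambda p: binding_energy(protein, ligand, p))

def blind_docking_model(protein: str, ligand: str) -> int:
    """EquiBind's idea in miniature: a trained geometric deep learning model
    outputs a bound pose directly, with no pose search and no binding
    pocket specified up front ('blind docking')."""
    return hash((protein, ligand)) % 10_000  # placeholder for a model's output

print(classical_docking("kinase", "candidate-molecule"))    # ~10,000 evaluations
print(blind_docking_model("kinase", "candidate-molecule"))  # one "forward pass"
```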

Teaching AI Basic Physics to Understand How Infants Learn

Young children learn fundamental concepts about how the world works through years of experience. The first time a parent disappears behind their hands in a game of peek-a-boo, a baby may genuinely wonder where their loved one has gone. Over many rounds of the game, they gradually develop object permanence and come to understand what’s really going on. It’s just one example of how humans learn “intuitive physics” during their early development. Later in life, this understanding allows us to predict how objects will react to natural forces and interact with one another: a ball, for example, will always roll down a hill, and two solid objects will never pass through one another.

Inspired by studies of how infants learn, a London-based team of researchers at DeepMind led by Luis Piloto has trained a new neural network on videos of cubes, balls, and other simple objects rolling and colliding with one another. The model, called Physics Learning through Auto-Encoding and Tracking Objects (PLATO), learned concepts like continuity and solidity much the way a human child does. After training, it could make accurate predictions about how objects would behave, reflecting an understanding of intuitive physics.

Next, Piloto and team tested how PLATO responded to physically impossible scenarios, such as objects suddenly disappearing or passing through one another. When confronted with such events, PLATO registered surprise, outputting a measure of the difference between what it observed and what it expected to see. The researchers are hopeful that their findings will prove useful in both AI and infant research going forward.
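In spirit, that surprise signal is just a prediction error: the larger the gap between what the model predicted and what it actually saw, the more “surprised” it is. Here’s an illustrative sketch of the idea (a toy example, not DeepMind’s code):

```python
# Toy sketch of a prediction-error "surprise" signal -- not PLATO's code.
import numpy as np

def surprise(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Mean squared prediction error, used here as the 'surprise' measure."""
    return float(np.mean((predicted - observed) ** 2))

# Toy object masks: the model predicts the ball is still on screen.
predicted  = np.ones((8, 8))
possible   = 0.95 * np.ones((8, 8))   # ball still there, roughly as expected
impossible = np.zeros((8, 8))         # ball has suddenly vanished

print(surprise(predicted, possible))    # small error -> little surprise
print(surprise(predicted, impossible))  # large error -> strong surprise
```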

Machine Learning and Noise-Neutralizing Earbuds

Anyone who’s ever operated out of a home office or classroom (or even just tried to get some peace and quiet in a crowded home) can attest to the need for quality noise-canceling headphones or earbuds. Even the best options on the market often fall short, however, leaving users contending with muffled audio and inescapable background noise. With at-home work the new normal for many professionals around the globe, demand for a better option continues to rise.

A group of researchers at the University of Washington applied both their machine learning know-how and their experience as roommates to develop ClearBuds, earbuds that both enhance the speaker’s voice and suppress outside sounds. The design comes down to two key innovations: a dual microphone array and a noise-canceling neural network algorithm. The team presented their findings at a conference last month and is continuing to refine the algorithm and explore additional use cases. Read this short summary of their work so far.
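To give a feel for why a second microphone helps at all, here’s the classic two-mic intuition in a toy sketch (this uses simple delay-and-sum averaging for illustration; ClearBuds’ real gains come from its neural network, which this example does not reproduce): a voice arriving in phase at both mics is reinforced by averaging, while delayed, decorrelated background noise partially cancels.

```python
# Toy two-microphone sketch (delay-and-sum averaging), not the ClearBuds pipeline.
import numpy as np

rate = 16_000
t = np.arange(rate) / rate
voice = np.sin(2 * np.pi * 220 * t)        # speaker's voice, on-axis
rng = np.random.default_rng(0)
noise = rng.normal(scale=0.5, size=rate)   # off-axis background noise

# The voice reaches both mics in phase; the noise hits mic 2 slightly later.
delay = 8  # samples (~0.5 ms at 16 kHz)
mic1 = voice + noise
mic2 = voice + np.roll(noise, delay)

# Averaging reinforces the in-phase voice while the decorrelated noise
# partially cancels.
enhanced = (mic1 + mic2) / 2

def snr_db(clean: np.ndarray, noisy: np.ndarray) -> float:
    """Signal-to-noise ratio of a noisy signal against its clean reference."""
    residual = noisy - clean
    return 10 * np.log10(np.sum(clean**2) / np.sum(residual**2))

print(f"single-mic SNR: {snr_db(voice, mic1):.1f} dB")
print(f"dual-mic SNR:   {snr_db(voice, enhanced):.1f} dB")  # ~3 dB better
```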

Join our Community

See you next week! Until then, keep the conversation going on Plainsight’s AI Slack Channel.

About the Author & Plainsight

Bennett Glace is a B2B technology content writer and cinephile from Philadelphia. He helps Plainsight in its mission to make vision AI accessible to entire enterprise teams.

Plainsight’s vision AI platform streamlines and optimizes the full computer vision lifecycle. From project strategy through model deployment and ongoing monitoring, Plainsight helps customers successfully operationalize vision AI applications to solve highly diverse business challenges.
