This week in AI and Machine Learning: Talking to animals with help from machine learning, object detection research from Meta, and more. 

A Note from the Author

I’m not a huge sports fan, but there’s no denying the stadium experience has become an attraction unto itself. Stadium amenities are not only more varied, but more convenient and personalized than ever – and computer vision has played a big role. 

To celebrate the start of the NFL Preseason last night, we took a closer look at the impressive tech sports fans can expect to see at arenas like SoFi and Allegiant Stadiums this year. Check it out on our blog. While you’re there, read our beginner’s guide to deep learning too.

Artificial Intelligence News

Can Machine Learning Help Us Speak to Animals?

The Earth Species Project (ESP), a California-based non-profit founded in 2017, hopes to leverage machine learning to translate animal “speech” for humans. A past study used AI to analyze the emotions behind pig grunts, and Project CETI is currently working to translate sperm whale calls with machine learning’s help. ESP, however, has set a far more ambitious goal: translating communication across the animal kingdom, not just for a single species. Co-founder and President Aza Raskin notes that certain species will prove easier to translate than others, but describes ESP as “species agnostic.” 

Raskin acknowledges that there’s no guarantee ESP will achieve its goals, likening the project to the moon landing. Other experts are skeptical as well. The University of Pennsylvania’s Robert Seyfarth, for example, believes AI alone is insufficient without field observations. “You’ve got to go out there and watch the animals,” he notes. Check out The Guardian’s look at ESP to hear more from both Raskin and other researchers, and to learn more about some of the non-profit’s smaller, near-term goals. 

Major Advancements in Digital Manufacturing

Researchers can often discover and develop new materials rapidly, but actually manufacturing them is typically more challenging. It’s not uncommon for experts to spend considerable time and money determining the appropriate parameters for material development through frustrating trial and error. An MIT-based team has now developed a machine learning and machine vision system capable of analyzing manufacturing processes to correct errors and quickly, accurately set 3D printing parameters. Next, they plan to deploy their approach in new scenarios and work on developing ML-driven controllers for additional manufacturing processes. Read more about the team’s research.

Meta’s Contribution to Computer Vision Research

Object detection is one of the fundamental computer vision tasks, at the heart of powerful use cases related to autonomous vehicles, augmented reality, the fight against climate change, and more. It’s crucial for powerful object detection models to recognize even unfamiliar, rarely seen objects to perform as effectively as possible and continue supporting more ambitious projects. To this end, Meta AI is sharing code and other training resources from its ongoing research into Vision Transformers and object detection. Meta’s approach, which it calls ViTDet, has performed better than existing alternatives on the organization’s Large Vocabulary Instance Segmentation dataset. Learn more about how ViTDet works, how it differs from other object detection models, and what Meta’s efforts could mean for the future of computer vision.

Join our Community

See you next week! Until then, keep the conversation going on Plainsight’s AI Slack Channel.

About the Author & Plainsight

Bennett Glace is a B2B technology content writer and cinephile from Philadelphia. He helps Plainsight in its mission to make vision AI accessible to entire enterprise teams.

Plainsight’s vision AI platform streamlines and optimizes the full computer vision lifecycle. From project strategy, through model deployment, and ongoing monitoring, Plainsight helps customers successfully create and operationalize vision AI applications to solve highly diverse business challenges.
