This week in AI and Machine Learning: Google launches Vertex AI, Deloitte reports on the state of enterprise AI, and more.
Quick Service Restaurants attract repeat customers thanks to dependability. Hungry diners know they can always count on fast, fresh favorites. Computer vision is helping QSRs in the kitchen, making it simpler to assess operations and enforce standards for quality, cleanliness, and customer service. Monday, October 24th is National Food Day, the perfect opportunity to check out our latest blog post and learn more about how computer vision solutions are supporting restaurants and serving customers.
Artificial Intelligence News
Google Unveils Vertex AI Vision
During last week’s Google Cloud Next conference, the tech giant introduced Vertex AI Vision, its new computer vision platform. The platform enables Google Cloud customers to centralize their computer vision projects and build end-to-end pipelines with ease. Vertex’s no-code environment lets enterprise users of all experience levels quickly put their visual data to use creating high-impact models. A number of pre-trained models make it even easier for users to quickly see the value of computer vision. Together with Vertex AI Vision, Plainsight is enabling enterprises to put computer vision to work.
Deloitte’s Latest ‘State of AI in the Enterprise’ Report
Deloitte’s fifth annual overview of AI in the enterprise includes findings from a survey of more than 2,600 business leaders from around the globe. Of the enterprises surveyed, a whopping 94% consider AI critical to their success over the next five years, and 79% say they have fully deployed three or more AI-powered solutions in the last three years. At the same time, “outcomes are lagging” and just 27% of surveyed organizations qualify as “Transformers.”
The top challenges facing organizations include building a compelling business case, securing executive commitment, and picking the right solutions. Deloitte opens with four key recommendations for successfully harnessing AI’s power and avoiding common obstacles: invest in culture and leadership, transform operations, orchestrate tech and talent, and select high-value business cases. Read the full report.
Translating a Primarily Spoken Language with AI
More than 3,500 of the world’s languages don’t have robust, widely recognized writing systems. These languages are nonetheless spoken by millions of people who have historically been underserved by AI-driven translation initiatives. Meta made one primarily oral language, Hokkien, the subject of a recent effort as part of its Universal Speech Translator project.
Though millions of people (mostly located in China, Taiwan, and Malaysia) speak Hokkien, the lack of a formal writing system made transcription-based translation impossible. Meta instead relied on novel approaches to speech-to-speech translation, supported by Mandarin text. In addition to continuing work on the model, the company has released a collection of speech-to-speech translation resources called SpeechMatrix. SpeechMatrix is intended to support researchers in building on Meta’s work and developing translation solutions of their own for primarily spoken languages.
Join our Community
See you next week! Until then, keep the conversation going on Plainsight’s AI Slack Channel.
About the Author & Plainsight
Bennett Glace is a B2B technology content writer and cinephile from Philadelphia. He helps Plainsight in its mission to make vision AI accessible to entire enterprise teams.
Plainsight’s vision AI platform streamlines and optimizes the full computer vision lifecycle. From project strategy, through model deployment, and ongoing monitoring, Plainsight helps customers successfully create and operationalize vision AI applications to solve highly diverse business challenges.