This week in AI & Machine Learning: Improving disability employment, wisdom from the women leading the AI industry, NVIDIA’s dynamic super resolution image scaling, and more!
Zammo and Microsoft Are Using AI to Improve Disability Employment
If you don’t have a disability, it can be easy to take online navigation and communication for granted. Zammo is already changing information accessibility for airports and OurAbility, but is looking to scale its AI-powered voice technology to any company with a job board. Read how Microsoft and Zammo are increasing job accessibility here.
Wisdom From The Women Leading The AI Industry, With Elizabeth Spears of Plainsight
Plainsight’s Co-Founder & CPO, Elizabeth Spears, met with Authority Magazine to share her experiences, insights, and practical use cases for computer vision across industries.
“Our first computer vision project as a company took a very practical task — counting cattle as they passed a camera — and turned that into a 40 million dollar yearly savings for our customer, just by doing that one task extremely accurately.”
NVIDIA’s AI-Powered Dynamic Super Resolution Image Scaling Technology (DLDSR)
NVIDIA announced its new AI-powered graphics rendering technology called DLDSR (Deep Learning Dynamic Super Resolution), which uses the Tensor Cores of an RTX GPU to render at a higher resolution and downscale the result to your display for added detail. It’s always exciting to see more detail come to our virtual worlds! Read more about the DLDSR announcement here.
No-Code and Low-Code Machine Learning Platforms Still Require People
With the rise of no-code and low-code machine learning platforms, such as Plainsight, making machine learning more accessible, it’s important to remember that you still need to understand the problem you’re solving, create unbiased datasets, and continue to monitor the results. Read about why no-code machine learning platforms still require people here.
Interpretable Machine Learning — with Serg Masís
Serg Masís, a data scientist at Syngenta, joins the Super Data Science Podcast to discuss interpretable machine learning, how to avoid interpreting models incorrectly, and why it’s important. See the full show notes here.