Happy National Pet Day! Whether you’re a cat or dog person, all animal lovers agree that our furry friends are part of the family. Just like humans, pets have their own habits and quirks that range from adorable to downright exasperating. But one thing is for certain: We love to watch our cats and dogs in action.
But what if, instead of just watching adorable footage of our pets for fun, we could use it to learn more about them? Well, that’s exactly what we set out to do with our latest vision AI pet project.
The Purrfect Use Case
Jenny Lewis and Phoebe Bridgers might be famous singer-songwriters, but they also happen to be the names of my two cats. I spend a lot of time with Jenny and Phoebe because I work from home, so when I’m away on the weekends, I tend to worry.
For peace of mind, I have a Zmodo camera fixed on the shelf above the cats’ automatic feeder. This allows me to check the live camera from the app on my phone when I’m away to make sure everything is working like it should be.
While the cats share the feeder, it’s not large enough for both of them to use at once. I have often wondered if one of the cats (ahem, Jenny) is eating more than the other (Phoebe).
To get to the bottom of this, I pay for one of Zmodo’s cloud subscription options. Although it comes with intelligent alerts like motion and pet detection, it can’t tell which cat is which, at least not unless I want to manually review all the footage and keep track of it myself. Instead, I decided to use Plainsight’s vision AI platform to build a computer vision model that automates that process for me.
Creating The Cata-set
Every computer vision model starts with a dataset. Luckily, my Zmodo subscription comes with seven days of cloud storage, so after downloading the footage to my laptop, I quickly uploaded it into my project folder.
Using Plainsight’s in-app video editing features, I was able to clip my .mp4 file to isolate the relevant footage and specify how many frames per second I needed for labeling.
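Plainsight handles this step inside the app, but if you’re curious what “frames per second for labeling” actually means, here’s a minimal sketch of the downsampling idea. The function name and the numbers are my own illustration, not platform code: given a clip’s total frame count and frame rate, it picks which frames to keep at a lower labeling rate.

```python
def frames_to_keep(total_frames: int, source_fps: float, target_fps: float) -> list[int]:
    """Return indices of frames to extract when downsampling a video
    from source_fps to target_fps for labeling (illustrative sketch)."""
    step = source_fps / target_fps  # keep one frame every `step` frames
    indices = []
    next_keep = 0.0
    for i in range(total_frames):
        if i >= next_keep:
            indices.append(i)
            next_keep += step
    return indices

# A 3-second, 30 fps clip sampled at 2 fps keeps every 15th frame:
print(frames_to_keep(90, 30, 2))  # [0, 15, 30, 45, 60, 75]
```

Labeling at 1–2 fps instead of the full 30 fps keeps the dataset small while still capturing every visit to the feeder.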
Next, I defined my label types. Because I wanted the model to tell the difference between the two cats, I created two labels: one for ‘Phoebe Cat’ and one for ‘Jenny Cat.’
As the subject matter expert on my own pets, telling the difference between the two of them was easy. While the sisters have similar coloring, the length and pattern of their fur make it possible to tell them apart: Jenny has a bushy, gray tail and Phoebe’s is sleek with a black tip.
Going frame by frame, I was able to quickly create almost 400 labels using Plainsight’s AI-powered labeling tool, TrackForward, which effectively auto-labels an object as it moves through the video frames. At worst, I only had to adjust the TrackForward labels slightly. That was much easier than manually drawing a new label on each frame.
The Cat’s Meow of Model Training
After locking my dataset version and setting my training time to one hour, there was nothing to do but wait. Plainsight’s SmartML model training feature removed a huge roadblock that would normally keep a non-technical user like me from successfully training a model on my own.
SmartML provides the framework needed to feed a model’s learning algorithm with labeled data until the model “learns,” rather than relying on the user to configure the training process themselves (although technical users have the option to adjust hyperparameters).
Curiosity Caught the Cat
The Phoebe Cat/Jenny Cat model took nineteen minutes to train with just under 400 labels. The custom SmartML model was able to detect the difference between Phoebe and Jenny with 97% accuracy.
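To put that number in perspective, here’s the arithmetic behind it (using 400 as a round stand-in for “just under 400 labels”): accuracy is simply the share of labeled frames the model got right, so 97% accuracy works out to only about a dozen misidentified frames.

```python
def accuracy(correct: int, total: int) -> float:
    """Fraction of labeled frames the model classified correctly."""
    return correct / total

total = 400                          # illustrative: "just under 400 labels"
misses = round(total * (1 - 0.97))   # frames the model got wrong at 97%
print(misses)  # 12
```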
With Plainsight’s vision AI platform I was able to create and label a dataset and then train a highly accurate personalized computer vision model without a single line of code. Cat-tastic!
See More with SmartML
With SmartML, users can build a searchable library of custom models tailored to highly specific challenges. Automated training ensures that models are always learning to help organizations continually innovate to solve problems. Even if you don’t have a dataset of your own or a specific challenge in mind, you can still try out SmartML and other Plainsight features today.
Ready to unlock the insights hidden in your visual data? We’ll show you how with a free demonstration.
About the Author & Plainsight
Ashley Greenwood is a B2B technology content writer and hiker from Fresno, California. She helps Plainsight in its mission to make vision AI accessible to entire enterprise teams: the ML power users, as well as the subject matter experts and non-technical users.
Plainsight’s vision AI platform streamlines and optimizes the full computer vision lifecycle. From data annotation through deployment, customers can quickly create and successfully operationalize their own vision AI applications to solve highly diverse business challenges.