Image Classification in iOS Apps for Dummies

These days the hype for machine learning is real. Everyone wants a piece of it in their product, be it a spam filter or just a cookie machine. The problem is that not everybody can go in all guns blazing and develop intelligent systems; it takes serious specialized knowledge to actually create, train, and mature one. You can follow the tutorials online, but they only skim the top of the bun and never warn you that the patty underneath is drier than the Sahara Desert. So the problem stands: how do people who aren’t experts at building intelligent systems with machine learning techniques make their applications and products intelligent?

Apple recently acquired Turi and released the Turi Create module for Python. Turi Create is a blessing for people who want to make their products smart without worrying too much about AI delicacies. So here, we’re going to explore some of that magic by creating a simple image classifier application for iOS. Here’s what you’ll need:

  • Python 2.7 (Turi Create doesn’t support Python 3 yet)
  • Xcode 9.2 (9.1 is also fine)
  • macOS 10.12+
  • You can also use Turi Create on Windows and Linux, but for developing the application you’ll need Xcode, unless you’re using Xamarin.

So, let’s get started.

First, let’s decide what kind of image classifier we’re going to develop. The popular examples around the web will tell you to build a Cats vs. Dogs classifier. Enough of cats and doggos. Let’s make it a bit more interesting: our classifier will classify flowers.

Let’s create our project with the directory structure shown below.
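The original post showed the layout as a screenshot. A minimal equivalent looks like this (the folder and file names here are my assumptions, inferred from the paths used later in this post):

```
FlowerClassifier/
├── mlmodel/             # the exported Core ML model will land here
└── training/
    ├── images/          # the flower photos go here
    ├── build_sframe.py  # loads the images into an SFrame
    ├── train_model.py   # trains the classifier
    └── export_coreml.py # converts the model to Core ML
```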

They say a machine learning model is only as good as the data it’s been trained with. So how are we going to get the data? Let’s grab the flower image dataset from the TensorFlow examples repository, available at

http://download.tensorflow.org/example_images/flower_photos.tgz

If you’re on a Linux distro or macOS, you can use curl to download the archive and extract it inside the training/images folder.
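Something like this should do it (the --strip-components flag just drops the top-level flower_photos folder so the category folders sit directly under training/images):

```
cd training/images
curl -O http://download.tensorflow.org/example_images/flower_photos.tgz
tar -xzf flower_photos.tgz --strip-components=1
# the archive also ships a LICENSE.txt; clear it out along with the tarball
rm flower_photos.tgz LICENSE.txt
```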


The dataset has images of daisies, dandelions, roses, sunflowers, and tulips. We’ll use the flower names as our categories.


Now that the data is ready, it’s time to train. Create a virtualenv for Python, then install Turi Create using pip.

Alternatively, you can create an Anaconda env and install Turi Create with pip there.
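Either way, the setup looks something like this (the env names are just placeholders):

```
# Option 1: virtualenv
virtualenv venv
source venv/bin/activate
pip install turicreate

# Option 2: Anaconda
conda create -n turi python=2.7
source activate turi
pip install turicreate
```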

First we need to load all the image data into an SFrame (Turi Create’s dataframe-like structure) so we can feed it to our model later. We’ll use the following script to build and save it:
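The original script was embedded as a gist; this sketch reconstructs it from the steps described in this post (the file and column names are my own choices):

```python
# build_sframe.py
import turicreate as tc

# Load every image under training/images; with_path=True keeps each file's path
data = tc.image_analysis.load_images('training/images', with_path=True)

# The folder an image sits in is its flower category
data['label'] = data['path'].apply(lambda path: path.split('/')[-2])

# Print a quick per-category count to the terminal
print(data.groupby('label', [tc.aggregate.COUNT]))

# Save the SFrame for the training step and open the Turi Create visualizer
data.save('training/flowers.sframe')
data.explore()
```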

Run the script and we have our SFrame. You should see a summary in the terminal along with a visualization from the Turi Create visualizer.

Next we create our model: the one that will tell us what kind of flower it is when we show our app an image.
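Again, the original training code lived in a gist; here’s a sketch under the same assumptions as above:

```python
# train_model.py
import turicreate as tc

# Load the SFrame we saved in the previous step
data = tc.SFrame('training/flowers.sframe')

# Keep some data aside so we can measure accuracy honestly
train_data, test_data = data.random_split(0.8)

# Turi Create fine-tunes a pre-trained deep network under the hood
model = tc.image_classifier.create(train_data, target='label')

# Evaluate on the held-out images; this also yields a confusion matrix
metrics = model.evaluate(test_data)
print(metrics['accuracy'])
print(metrics['confusion_matrix'])

# Save the trained model for the export step
model.save('training/flowers.model')
```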

Train the model by running the code above. The model reaches an accuracy somewhere around 89%, which isn’t bad.

Confusion Matrix

We’ve trained our model, but how can we use it in an iOS app? To do so we need to convert it to a Core ML model using the following Python script.
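A sketch of the conversion step (again, the file names are my assumptions):

```python
# export_coreml.py
import turicreate as tc

# Load the model trained in the previous step
model = tc.load_model('training/flowers.model')

# Export it in Core ML format for use on iOS
model.export_coreml('mlmodel/FlowerClassifier.mlmodel')
```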

Now we’re ready to add it to our iOS application.

Let’s open up Xcode and create a Single View App. Then drag and drop the Core ML model into the project navigator; Xcode will create the references automatically. To keep things organized, you can create a group named Models and drop the Core ML model there.

Create a simple interface: an image view to display the photo, a label to show the classification result, and a button to pick an image.

Now add a new file to the project named CGImagePropertyOrientation+UIImageOrientation.swift and add the following code inside.
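The original snippet was embedded as a gist; this is the standard conversion from Apple’s Vision sample code, mapping UIKit’s image orientation onto the Core Graphics one that Vision expects:

```swift
import UIKit
import ImageIO

extension CGImagePropertyOrientation {
    /// Converts a UIImageOrientation to the corresponding EXIF-style orientation.
    init(_ uiOrientation: UIImageOrientation) {
        switch uiOrientation {
        case .up: self = .up
        case .upMirrored: self = .upMirrored
        case .down: self = .down
        case .downMirrored: self = .downMirrored
        case .left: self = .left
        case .leftMirrored: self = .leftMirrored
        case .right: self = .right
        case .rightMirrored: self = .rightMirrored
        }
    }
}
```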

Now edit ViewController.swift and add the following code.
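The original ViewController was also a gist. Here’s a sketch of what it needs to do; the class name FlowerClassifier is an assumption (Xcode generates it from the .mlmodel file name), and the outlets match the simple interface described above:

```swift
import UIKit
import CoreML
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var resultLabel: UILabel!

    // Wrap the Xcode-generated model class for use with Vision
    lazy var classificationRequest: VNCoreMLRequest = {
        guard let model = try? VNCoreMLModel(for: FlowerClassifier().model) else {
            fatalError("Failed to load the Core ML model")
        }
        return VNCoreMLRequest(model: model) { [weak self] request, _ in
            self?.processClassifications(for: request)
        }
    }()

    // Hooked up to the "pick image" button
    @IBAction func pickImage(_ sender: Any) {
        let picker = UIImagePickerController()
        picker.delegate = self
        picker.sourceType = .photoLibrary
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [String: Any]) {
        picker.dismiss(animated: true)
        guard let image = info[UIImagePickerControllerOriginalImage] as? UIImage else { return }
        imageView.image = image
        classify(image)
    }

    func classify(_ image: UIImage) {
        guard let cgImage = image.cgImage else { return }
        let orientation = CGImagePropertyOrientation(image.imageOrientation)
        // Vision work happens off the main thread
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(cgImage: cgImage, orientation: orientation)
            do {
                try handler.perform([self.classificationRequest])
            } catch {
                print("Classification failed: \(error)")
            }
        }
    }

    func processClassifications(for request: VNRequest) {
        // Hop back onto the main thread to touch the UI
        DispatchQueue.main.async {
            guard let results = request.results as? [VNClassificationObservation],
                  let best = results.first else {
                self.resultLabel.text = "Could not classify the image."
                return
            }
            self.resultLabel.text = String(format: "%@ (%.0f%%)",
                                           best.identifier, best.confidence * 100)
        }
    }
}
```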

We’re using the Vision and Core ML frameworks from the iOS SDK to drive the trained model. Remember, the model is the key component that does all the work. The app takes a picture, either from the camera or the gallery, and sends it to the model; the model processes the image and returns the result. Under the hood, the app wraps the image in a request, which is handled by the Vision framework and passed to the Core ML model via the Core ML framework.

The app is now ready to run. You can either run it inside the simulator or use an actual device.

We’ll be testing inside the simulator here. To get images, go to the home screen on the simulator, open Safari, collect a rose and a sunflower image from Google Image Search, and put them to the test. Here are the screenshots with the results from the simulator.

Now, if you want, you can load the app onto an actual device and test it on real-life flowers. (The flowers should be from the categories the model was trained on, or you’ll get random results. That’s how intelligent systems work: they only know what you trained them for.)

That was a pretty long post, and you may have gotten bored (I got bored writing it as well). Image classification isn’t the only thing you can do with Core ML; there are plenty of other wonderful things to do as well. I’d suggest digging into the official docs:

  1. https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml
  2. https://github.com/apple/turicreate

This article was previously published here and has been slightly modified.
