Using Azure Cognitive Services with Processing

Processing demo code to send images to the Microsoft Azure Cognitive Services Face API. It takes a picture with your webcam and returns an analysis of all the faces found.

For an upcoming exhibition I wanted to detect face attributes like age, gender and emotion. This is something that’s not directly included in the OpenCV library for Processing. Using an API seemed easiest to me, so I looked into Google Vision and Microsoft Cognitive Services. The latter had the most convincing examples and good documentation. With the free version you can make 20 calls a minute and up to 30,000 calls a month. Like most APIs, you send a REST call and you get the data back as JSON.

Emotion Detection with Azure and Processing

Make sure you request an Azure Face API subscription key. On this page you can create a trial API key, which works from the West-US server. However, you can also create an Azure account and add the Face API there (use the search function, because the interface is not really straightforward). That gives you the benefit of all the Azure servers, and (I think) the free version is then not limited to 30 days.
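Before wiring it into a sketch, it helps to see what a Face API call looks like. Below is a minimal sketch of the request URI, assuming the trial West-US endpoint (adjust the host to match your own key and region); the attribute list matches what this article asks for:

```java
public class FaceApiRequest {
    // Example endpoint; replace the region/host with the one for your key.
    static final String ENDPOINT =
        "https://westus.api.cognitive.microsoft.com/face/v1.0/detect";

    // Builds the detect URI, asking for the age, gender and emotion attributes.
    static String buildUri() {
        String query = "returnFaceId=true"
                + "&returnFaceLandmarks=false"
                + "&returnFaceAttributes=age,gender,emotion";
        return ENDPOINT + "?" + query;
    }

    public static void main(String[] args) {
        System.out.println(buildUri());
    }
}
```

The query string is where you choose which attributes come back; everything you don’t request is left out of the JSON response.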

Process

I’ve based my Processing sketch on this Quickstart example for Java. It uses the Apache HttpClient library (from the Apache HttpComponents project). There is an implementation for Processing called the HTTP Requests for Processing library. However, not everything is implemented there, so I decided to include the HttpClient library directly in the sketch. It’s actually pretty easy to include compiled Java (*.jar) code in Processing: I created a folder named “code” in the sketch folder and copied the files in there.

Java code in Processing Sketch

You can download the HttpClient binary over here. However, it’s already included in my example.

The Quickstart code shows how to upload an online image, but I needed to upload a local file. This StackOverflow post showed how to do that with the FileEntity class.
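The core idea is that the Face API expects the raw image bytes as the request body, with your key in a header. The sketch below shows that idea using only the Java standard library (the article itself uses FileEntity from the bundled HttpClient jar, which plays the same role as the Files.readAllBytes call here); the endpoint and key are placeholders:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class FaceUpload {
    // Configures a POST request for a local image; nothing is sent yet.
    static HttpURLConnection prepare(String endpoint, String key) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // The Face API expects the raw image bytes, not a multipart form.
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", key);
        return conn;
    }

    static String send(String endpoint, String key, String imagePath) throws IOException {
        HttpURLConnection conn = prepare(endpoint, key);
        byte[] image = Files.readAllBytes(Paths.get(imagePath)); // same role as FileEntity
        try (OutputStream out = conn.getOutputStream()) {
            out.write(image);
        }
        return new String(conn.getInputStream().readAllBytes());
    }

    public static void main(String[] args) throws IOException {
        HttpURLConnection conn = prepare(
            "https://westus.api.cognitive.microsoft.com/face/v1.0/detect", "YOUR_KEY");
        System.out.println(conn.getRequestMethod());
    }
}
```

With the HttpClient jar in the “code” folder, the FileEntity version from the StackOverflow post drops into the sketch the same way: build the entity from the local file and attach it to the POST request.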

The Processing demo

The demo code takes a picture from the webcam stream (press space) and uploads it to the Azure Face API. It receives the data as a JSON string, which we then parse with the Processing JSON functions (JSONArray and JSONObject). A PGraphics canvas is created to overlay parts of the output directly on the screenshot. I noticed it slows down when the first data comes in (no idea why), but after that it runs more or less in real time. I didn’t parse all the data; check the API reference to see everything you get back.
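The response is a JSON array with one object per detected face, each holding a faceRectangle (for the overlay) and the requested faceAttributes. The trimmed sample below shows that shape (the values are made up); the regex extraction is just to keep the example standard-library only — in the Processing sketch itself you would use parseJSONArray() and getJSONObject() instead:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FaceJson {
    // Trimmed example of what the Face API sends back (values are made up).
    static final String SAMPLE =
        "[{\"faceRectangle\":{\"top\":82,\"left\":117,\"width\":160,\"height\":160},"
        + "\"faceAttributes\":{\"age\":34.5,\"gender\":\"male\","
        + "\"emotion\":{\"happiness\":0.92,\"neutral\":0.07,\"sadness\":0.01}}}]";

    // Pulls one numeric field out of the JSON string; in Processing you'd
    // navigate the parsed JSONObject instead of using a regex.
    static double number(String json, String field) {
        Matcher m = Pattern.compile("\"" + field + "\":([0-9.]+)").matcher(json);
        if (!m.find()) throw new IllegalArgumentException(field + " not found");
        return Double.parseDouble(m.group(1));
    }

    public static void main(String[] args) {
        System.out.println("age: " + number(SAMPLE, "age"));  // age: 34.5
        System.out.println("top: " + number(SAMPLE, "top"));  // top: 82.0
    }
}
```

The faceRectangle values are pixel coordinates in the uploaded image, so they can be drawn straight onto the PGraphics overlay.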

I’ve implemented a timer to limit the calls to the API (to stay within the 20 calls per minute). The FaceAnalysis class runs as a separate thread. This is necessary because otherwise the draw loop would wait until the Azure service (or any web service) sent back the requested information. Processing has the thread() function, but that doesn’t work inside classes. Luckily there was some basic information on how to use threads from Daniel Shiffman on the Processing Wiki (now only accessible through the Wayback Machine).
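The timer and the worker thread can be sketched like this in plain Java (Processing sketches compile to Java, so the same pattern applies inside a sketch); the class names and the 3000 ms interval are my own choices — 3000 ms is what keeps you at 20 calls per minute:

```java
public class FaceAnalysisDemo {
    // Gate that allows at most one call every intervalMillis.
    static class RateLimiter {
        final long intervalMillis;
        long lastCall;

        RateLimiter(long intervalMillis) {
            this.intervalMillis = intervalMillis;
            this.lastCall = -intervalMillis; // so the very first call is allowed
        }

        synchronized boolean allow(long nowMillis) {
            if (nowMillis - lastCall < intervalMillis) return false;
            lastCall = nowMillis;
            return true;
        }
    }

    // Doing the web request on its own thread keeps draw() from blocking
    // while the service responds; since Processing's thread() can't be used
    // inside a class, the class extends Thread itself.
    static class FaceAnalysis extends Thread {
        volatile String result;

        public void run() {
            // ... call the Face API here and store the JSON when it arrives ...
            result = "done"; // placeholder so the sketch runs standalone
        }
    }

    public static void main(String[] args) {
        RateLimiter limiter = new RateLimiter(3000);
        System.out.println(limiter.allow(System.currentTimeMillis()));
    }
}
```

In the sketch, draw() checks the limiter each frame and only starts a new FaceAnalysis thread when a call is allowed, then reads the result field once the thread has finished.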

Get the code on Github

Was this article useful to you? Buy me a coffee!