Emotion recognition with Raspberry Pi


#1

Hi all – I'd like to implement emotion recognition using the Raspberry Pi's camera module, specifically recognizing smiles and frowns. I have simple face detection working with OpenCV and Python 3.5, but I'm having a hard time making the jump to emotion recognition. Initial searches yield results on topics such as optical flow and affective computing, which have so far been intimidating and hard to understand. Does anyone have suggestions for resources to turn to, or next steps I can follow, to start implementing this?
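
For reference, what I have running is essentially the standard OpenCV Haar-cascade pipeline – a minimal sketch of it below (the cascade file path and 'frame.jpg' are placeholders for my actual setup):

```python
import cv2

# Path to OpenCV's bundled frontal-face Haar cascade; adjust for your install.
CASCADE_PATH = 'haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(CASCADE_PATH)

img = cv2.imread('frame.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns one (x, y, w, h) rectangle per detected face
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite('faces.jpg', img)
```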

Thanks!


#2

Emotion recognition is tricky and best done with huge neural nets 🙂

You might check out Google Cloud Vision (an openFrameworks addon I wrote: https://github.com/bakercp/ofxCloudPlatform/blob/master/example_cloud_vision/src/ofApp.cpp).

Of course there are Python libraries too.

Check out sentiment analysis here https://cloud.google.com/vision/.

If you want to try it out, I've got a bunch of student code for an upcoming class.
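
For the Python route, a minimal sketch using the google-cloud-vision client (recent versions of the library; assumes credentials are already configured via GOOGLE_APPLICATION_CREDENTIALS, and 'face.jpg' is a placeholder):

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open('face.jpg', 'rb') as f:
    image = vision.Image(content=f.read())

# Each FaceAnnotation carries likelihood fields such as joy_likelihood,
# sorrow_likelihood, anger_likelihood, and surprise_likelihood.
response = client.face_detection(image=image)
for face in response.face_annotations:
    print(face.joy_likelihood, face.sorrow_likelihood)
```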


#3

@bakercp you’re a lifesaver as always! Would love to try out code from the class. Is that your Interactive Art / Creative Code one?

Also, any tips on using Cloud Vision with the Pi? I've looked through it briefly and can only imagine I'll have to jump through a few hoops before I can get it going on the Pi.


#4

Following the tutorials in the Cloud Vision documentation – will report back!


#5

Update: managed to implement face detection using Google Cloud Vision. Results:


But I'm having trouble obtaining readable output of the annotations. Following along with the face detection tutorial (https://cloud.google.com/vision/docs/face-tutorial) and adding print(faces) to main gives me:

Ideally, I'd like the annotations formatted similarly to the sample on https://cloud.google.com/vision/, where you can provide an image to be analyzed (result below):

Does Google Cloud Vision provide an easier way to visualize these numbers in this format?

EDIT:

I'm able to use the JSON response from the API to guide what I should be looking for, so I'm moving closer … I just have to review Python lists and dictionaries and figure out how to access those elements!
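
For anyone following the same tutorial: since the response comes back as plain Python dicts and lists, json.dumps gives a readable dump and normal indexing reaches individual fields. A sketch, assuming faces is the faceAnnotations list that the tutorial's detect_face() returns (key names follow the documented REST response):

```python
import json

# Readable, indented dump of every annotation
print(json.dumps(faces, indent=2))

for i, face in enumerate(faces):
    # Keys follow the REST response, e.g. 'joyLikelihood', 'sorrowLikelihood'
    print('face {}: joy={} sorrow={}'.format(
        i, face['joyLikelihood'], face['sorrowLikelihood']))
```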


#6

Yeah, it’s returning a big pile of JSON, and that includes all of the stuff you’ll need. The resulting JSON is well-documented on their website, so I’d start there.

Keep us posted!


#7

Spent some time working on it today and here’s what I managed on my own:

It's printing the 'joyLikelihood' element for every face! I also added some text to the image to distinguish which face is being referred to. This should be all I need to continue the project! Off to send this output to an Arduino via serial and see what can be done. Next steps involve integrating this with the live video feed from the Raspberry Pi camera. I read somewhere that this can't be done with Cloud Vision – I haven't tested that, but @bakercp, or anyone else, have you had any experience with it?
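
Roughly what that looks like – a sketch along the lines of the face tutorial, using PIL to draw the labels (variable names are mine; faces is the faceAnnotations list as before):

```python
from PIL import Image, ImageDraw

im = Image.open('face.jpg')
draw = ImageDraw.Draw(im)

for i, face in enumerate(faces):
    print('face {}: {}'.format(i, face['joyLikelihood']))
    # Label each face at the first vertex of its bounding polygon;
    # the API omits the x/y keys when the value is 0, hence .get().
    v = face['boundingPoly']['vertices'][0]
    draw.text((v.get('x', 0), v.get('y', 0)),
              'face {}'.format(i), fill='#00ff00')

im.save('faces-labeled.jpg')
```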

Thinking about alternatives in case that's true: probably taking still images at short intervals and feeding those to the Python script. But do the images get stored on the SD card, or are temporary copies made (so that I don't have to worry about the SD card running out of space)? I guess I'll also have to look into file I/O and how to integrate Python modules together.
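
One way to sidestep the SD card question entirely might be capturing stills straight into memory with the picamera module – a sketch, untested on my end so far:

```python
import io
import time
import picamera

stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    time.sleep(2)  # give the sensor a moment to warm up
    # Capture JPEG bytes straight into RAM; nothing is written to the SD card
    camera.capture(stream, format='jpeg')

jpeg_bytes = stream.getvalue()  # these bytes can then go to Cloud Vision
```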


#8

Yeah, you can't stream live video to Cloud Vision, but you can certainly take snapshots at opportune moments. My first impulse (and the way I use it for video) is to wait for moments when the video is relatively stable for some amount of time (lots of ways to do this … frame differencing is an easy method). Then I check whether there's even a face in the frame with a standard face finder (e.g. https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/ … the first Google hit for a Python face finder using Haar / OpenCV) – and if there's a face, I send it on to Google for sentiment analysis.
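
A minimal frame-differencing sketch of that idea (OpenCV; the thresholds are arbitrary and need tuning per scene):

```python
import cv2

cap = cv2.VideoCapture(0)
prev = None
stable = 0
MOTION_THRESHOLD = 2.0    # arbitrary: mean per-pixel difference
STABLE_FRAMES = 30        # arbitrary: ~1 second of stillness at 30 fps

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    if prev is not None:
        motion = cv2.absdiff(prev, gray).mean()
        stable = stable + 1 if motion < MOTION_THRESHOLD else 0
        if stable >= STABLE_FRAMES:
            stable = 0
            # Scene is stable: run the Haar face check here, and only
            # send the frame on to Cloud Vision if a face is found.
    prev = gray
```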

As for the stored images, managing those is totally up to you 🙂 You can delete them after they've been analyzed or, if you've got a big SD card (mine are usually 16 GB), you could probably save them for later use if you wanted to.


#9

Hadn’t even considered checking for stable frames – thanks for that, Chris. I think I’ve got a solid direction for how to proceed. Will update when progress is made!