Face Reader

Focused on helping autistic kids learn expression recognition, the Face Reader app is a video- and image-based facial expression translator, giving both a self and a third-person perspective on what a face may be telling you.

One of the symptoms of autism is difficulty understanding facial gestures, or the inability to interpret the meaning of facial expressions. Unlike many apps designed with a focus on diagnosing autism, the Face Reader app is designed as a tool for people with autism to use whenever they encounter a face they are not sure about; the app will attempt to make that translation for them.

[Screenshots: Face Reader home screen; a happy expression detected]

The Face Reader app may also be used by anyone who wants a better understanding of facial expressions, including self-awareness – what type of facial expression do I make when I talk to people? I recently heard a story from a teacher: whenever a student asked her a question, she would concentrate to understand what they meant, while at the same time furrowing her eyebrows in a way that looked as if she were angry. Naturally this did not satisfy the students, as they felt she was annoyed with them. When she realized this and changed her facial expression, she started receiving very different engagement in her classes. This is a classic case where Face Reader can come in handy.

[Screenshots: angry and confused expressions detected]

Smile detection

The first experiment I wanted to do while developing the app was smile detection, to get a better understanding of the level of happiness in an image.

First we define cascade classifiers for the face and the smile, and load the right Haar XML file into each of them. The cascades will be used to detect the face first and, if a face was found, to look for smile signals inside of it. To find faces we use the getFaces function: we start by converting the image to grayscale and equalizing its histogram, then we try to detect a face; if no face was found we flip the image and look again.
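
A minimal sketch of that setup, assuming OpenCV's C++ API; the cascade file names and detection parameters below are placeholders, not necessarily the ones the app uses:

#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

cv::CascadeClassifier faceCascade;
cv::CascadeClassifier smileCascade;

cv::Mat getGrayscale(const cv::Mat& img);   // shown below

void loadCascades() {
    // Standard Haar cascade files shipped with OpenCV.
    faceCascade.load("haarcascade_frontalface_alt.xml");
    smileCascade.load("haarcascade_smile.xml");
}

// Detect faces; if nothing is found, flip the image horizontally and retry.
std::vector<cv::Rect> getFaces(const cv::Mat& img) {
    cv::Mat gray = getGrayscale(img);
    cv::equalizeHist(gray, gray);           // boost contrast so features stand out
    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));
    if (faces.empty()) {
        cv::flip(gray, gray, 1);            // 1 = flip around the vertical axis
        faceCascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));
    }
    return faces;
}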

The getFaces function calls the getGrayscale function, which is simply a conversion to a gray matrix for faster calculation.
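
Something along these lines, assuming cv::Mat images:

cv::Mat getGrayscale(const cv::Mat& img) {
    // Single-channel Mats are cheaper to process and are what the cascades expect.
    cv::Mat gray;
    if (img.channels() == 3) cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    else                     gray = img.clone();
    return gray;
}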

The detectSmile method is the core of the detection: first we look for faces, then we loop through the ones we found looking for smile signals, calculating an intensity level from 0 to 1 based on the number of smile signals found, and finally we draw a bar visualization and outline the facial features.
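
A sketch of that logic, continuing the code above; the cap of 40 hits and the drawing details are guesses:

void detectSmile(cv::Mat& img) {
    std::vector<cv::Rect> faces = getFaces(img);
    for (size_t i = 0; i < faces.size(); i++) {
        cv::Mat faceROI = getGrayscale(img(faces[i]));
        // minNeighbors = 0 returns every raw hit, so the hit count becomes
        // a rough proxy for how strong the smile is.
        std::vector<cv::Rect> smiles;
        smileCascade.detectMultiScale(faceROI, smiles, 1.1, 0, 0, cv::Size(25, 15));
        float intensity = smiles.size() > 40 ? 1.0f : smiles.size() / 40.0f;
        // Outline the face and draw a bar whose length follows the intensity.
        cv::rectangle(img, faces[i], cv::Scalar(0, 255, 0), 2);
        cv::rectangle(img,
                      cv::Point(faces[i].x, faces[i].y - 10),
                      cv::Point(faces[i].x + (int)(faces[i].width * intensity), faces[i].y - 4),
                      cv::Scalar(0, 255, 255), -1);   // -1 = filled
    }
}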

This will result in something that looks like this:

[Screenshot: smile level visualization]

Emotion Training

Another approach to identifying facial expressions is to train an algorithm with a sequence of images and corresponding labels for each expression. This is very similar to the pet-recognition process, except that here we'll be using images of faces with specific expressions.

first we’ll define some variables 

 we’ll use these methods for string conversion

and this method to convert our images to gray mats for faster processing.
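
It would look much like getGrayscale from the smile example (the name toGrayMat is a placeholder):

cv::Mat toGrayMat(const cv::Mat& img) {
    // The recognizer works on single-channel images, so convert if needed.
    cv::Mat gray;
    if (img.channels() == 3) cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    else                     gray = img.clone();
    return gray;
}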

To save the training images we'll use this function; change the path to the one of your project. As you'll note, we first convert the image to gray and equalize its histogram so the facial features get more pronounced.
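
A sketch of that save step; the folder below is a placeholder you should point at your own project, and the 100x100 size is an assumption:

#include <opencv2/highgui/highgui.hpp>   // imwrite lives here in OpenCV 2.4

void saveTrainingFace(const cv::Mat& face, const std::string& labelName) {
    cv::Mat gray = toGrayMat(face);
    cv::equalizeHist(gray, gray);                  // make the features more pronounced
    cv::resize(gray, gray, cv::Size(100, 100));    // the recognizer needs equal-sized inputs
    std::string path = "data/faces/face_" + labelName + toString(imageCount++) + ".jpg";
    cv::imwrite(path, gray);
}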

This will save images that look like this:

[Sample training images: face_happy201, face_sad79, face_happy187]

Once we have collected enough images to train the algorithm we'll call the trainData function. As you can see, we run two tests: whether the contrib module is available with this version of OpenCV, and whether the model was created successfully. I chose to save the trained data hoping it would save time at app load, but loading the YML file seems to take almost the same effort.
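
Roughly, using the Eigenfaces recognizer from OpenCV's contrib module (the file name is a placeholder):

#include <iostream>

bool trainData() {
    // Test 1: the contrib module (where FaceRecognizer lives) has to be present.
    if (!cv::initModule_contrib()) {
        std::cerr << "contrib module is not available in this OpenCV build" << std::endl;
        return false;
    }
    // Test 2: the model itself has to be created successfully.
    model = cv::Algorithm::create<cv::FaceRecognizer>("FaceRecognizer.Eigenfaces");
    if (model.empty()) {
        std::cerr << "could not create the FaceRecognizer model" << std::endl;
        return false;
    }
    model->train(trainingImages, trainingLabels);
    model->save("trained_faces.yml");   // reloading this later is about as slow as retraining
    return true;
}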

Now we are finally ready to check the recognize function (well, almost). Here we create a projection matrix and reconstruct our object based on it, then we compare the reconstructed object with the original face using the getSimilarity method; if we get a good ratio we ask the model to predict the label.
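
A sketch of that, continuing the code above; the 0.3 similarity threshold is a guess you would have to tune:

double getSimilarity(const cv::Mat& a, const cv::Mat& b);   // shown below

int recognize(const cv::Mat& face) {
    // Project the face into the trained subspace and reconstruct it from
    // the projection; a face the model knows reconstructs well.
    cv::Mat eigenvectors = model->get<cv::Mat>("eigenvectors");
    cv::Mat mean         = model->get<cv::Mat>("mean");

    cv::Mat projection    = cv::subspaceProject(eigenvectors, mean, face.reshape(1, 1));
    cv::Mat reconstructed = cv::subspaceReconstruct(eigenvectors, mean, projection);
    reconstructed = reconstructed.reshape(1, face.rows);
    reconstructed.convertTo(reconstructed, CV_8U);

    double similarity = getSimilarity(face, reconstructed);
    if (similarity > 0.3) return -1;   // too different from anything we trained on
    return model->predict(face);       // label index, e.g. into labelNames
}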

The getSimilarity function compares two matrices by calculating the square root of the sum of squared errors between the two.
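
Something like this; normalizing by the image area is an assumption so the ratio stays comparable across image sizes:

double getSimilarity(const cv::Mat& a, const cv::Mat& b) {
    // Square root of the sum of squared per-pixel errors, normalized by size.
    double errorL2 = cv::norm(a, b, cv::NORM_L2);
    return errorL2 / (double)(a.rows * a.cols);
}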

Like in the smile detection example, here too we first need to know that we have a face before analyzing its expression.

Finally, to detect an expression we'll use this method on either a static image or a camera feed.
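
A sketch tying the pieces above together, with a simple webcam loop as a usage example; the window name and key handling are arbitrary:

void detectExpression(cv::Mat& frame) {
    std::vector<cv::Rect> faces = getFaces(frame);
    for (size_t i = 0; i < faces.size(); i++) {
        cv::Mat face = toGrayMat(frame(faces[i]));
        cv::equalizeHist(face, face);
        cv::resize(face, face, cv::Size(100, 100));   // same size as the training images
        int label = recognize(face);
        std::string text = (label < 0) ? "unknown" : labelNames[label];
        cv::rectangle(frame, faces[i], cv::Scalar(255, 0, 0), 2);
        cv::putText(frame, text, cv::Point(faces[i].x, faces[i].y - 8),
                    cv::FONT_HERSHEY_SIMPLEX, 0.8, cv::Scalar(255, 0, 0), 2);
    }
}

int main() {
    loadCascades();
    // ... collect faces with saveTrainingFace(), then call trainData() ...
    cv::VideoCapture cap(0);            // or cv::imread("photo.jpg") for a static image
    cv::Mat frame;
    while (cap.read(frame)) {
        detectExpression(frame);
        cv::imshow("face reader", frame);
        if (cv::waitKey(30) == 27) break;   // Esc to quit
    }
    return 0;
}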

This is what it looks like with my facial expressions:

[Screenshot: happy and sad expressions recognized]

Crowd-sourced training

The accuracy level in this case is very high, since I trained the algorithm with my own facial expressions; a different person may experience a lower rate. Ideally the app will be able to do live training to add more expressions, and the original database will include images of many faces and expressions. I was looking to launch the app with the ability to recognize at least the 10 most common facial expressions: Confusion, Shame, Surprise, Focus, Exhaustion, Seduction, Anger, Fear, Sadness and Happiness.

Another cool feature of the app is that when a user saves a face to their list, it is also added to the expressions database for future training.

[Screenshot: the saved faces list]

This feature was originally designed with the thought that autistic kids may want to use the app not just to recognize something that happens in the moment, but almost like question cards, so they have a quick list of faces to go back to. In a way it is like building a personalized database and training an algorithm to recognize it – I guess people have been doing this for a while.

References

Paul Ekman – http://www.paulekman.com/