True Mobile

True Mobile is a facial gesture recognition and voice pattern detection polygraph app that attempts to estimate and alert when someone isn't telling the truth. I've been playing around with the idea of a polygraph app based on facial gesture recognition. Obviously I wouldn't recommend using this as a final judgment, but at the same time identifying facial patterns could hint at a change in behavior, and as usual computers may be a bit better at that than us humans.

[Screenshots: True Mobile home screen and polygraph screen]

In order to be able to identify a moment when someone is not telling the truth, I was hoping to analyze a combination of the following symptoms: micro-expressions, mouth covering, eye movement (imagining vs. remembering), blinks, eye rubbing, skin color change as a result of sweat, nodding, nervous movement, throat movement (swallowing and lubricating the throat), changes in breathing rate, voice changes (higher pitch, change in pace), repeating words, and timing gaps between gestures and words.

While the final product will probably have to use non-rigid face tracking, and perhaps even have a live training mode where the app takes a few seconds to learn the face of the person being diagnosed, I also wanted to check how tracking general movement, especially in a facial context, works in practice, and whether there is anything that could be learned from it.

Face detection 

Since I needed to know that I am dealing with a real face, the first step was to include some face detection in the app. The core idea behind recognizing faces with OpenCV is to use a cascade classifier and load a cascade file with previously trained data to identify specific features. The core of the script is in the following lines of code.
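Something along these lines (a minimal sketch rather than the exact code from the repo; the detectFaces wrapper and the stock cascade file name are just for illustration):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Load a cascade file with previously trained data and run it on a frame.
std::vector<cv::Rect> detectFaces(const cv::Mat& frame)
{
    static cv::CascadeClassifier faceCascade("haarcascade_frontalface_alt.xml");

    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);            // improve contrast before detection

    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1, 3,
                                 cv::CASCADE_SCALE_IMAGE, cv::Size(30, 30));
    return faces;
}
```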

Obviously it gets a bit more complex when we use it in real life, since we want to get more details and optimize the performance. You can see the full code in the ObjectDetector class at the following URL: https://github.com/screename/Object-Detector

The getFace function first detects the face as a whole, and then uses the eye cascade to check for eyes. If both are found, it defines the area of the face and the eye positions and returns a cropped Mat with the face.
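Roughly, it could look like this (a sketch only; in the actual ObjectDetector class the cascades are presumably class members rather than parameters, and the exact name and signature may differ):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of getFace: detect a face, confirm eyes inside it with the eye
// cascade, and return the cropped face region (empty Mat if nothing found).
cv::Mat getFace(const cv::Mat& gray,
                cv::CascadeClassifier& faceCascade,
                cv::CascadeClassifier& eyeCascade)
{
    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1, 3,
                                 cv::CASCADE_SCALE_IMAGE, cv::Size(80, 80));

    for (size_t i = 0; i < faces.size(); i++)
    {
        cv::Mat faceRoi = gray(faces[i]);

        // only accept the region if the eye cascade also fires inside it
        std::vector<cv::Rect> eyes;
        eyeCascade.detectMultiScale(faceRoi, eyes, 1.1, 3,
                                    cv::CASCADE_SCALE_IMAGE, cv::Size(20, 20));
        if (eyes.size() >= 2)
            return faceRoi.clone();   // cropped Mat containing just the face
    }
    return cv::Mat();
}
```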

If we are sure that we have a face, we can start checking for more specific movement inside it using the MotionTracker class.

Motion Tracking

This will give us tracking points which will look like the following; even from a quick look you can tell that I am not telling the truth in these gifs.

[Animated gifs: full-body, face, and eye tracking points]

Using the Lucas-Kanade algorithm we track the movement of the body as a whole and specific facial gestures in combination, looking to see if we can find patterns or mismatched behaviors. You can see the full MotionTracker class here: https://github.com/screename/Motion-Tracker

The Lucas-Kanade way

The core functionality of the tracker is in the track and initialize functions.
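Here is a rough sketch of what the track step boils down to with OpenCV's pyramidal Lucas-Kanade implementation (the real class keeps more state, see the repo):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of track: follow the current points from the previous frame to the
// new one and keep only the points the optical flow actually found.
void track(const cv::Mat& prevGray, const cv::Mat& gray,
           std::vector<cv::Point2f>& points)
{
    if (points.empty())
        return;

    std::vector<cv::Point2f> nextPoints;
    std::vector<uchar> status;
    std::vector<float> err;

    cv::calcOpticalFlowPyrLK(prevGray, gray, points, nextPoints, status, err,
                             cv::Size(21, 21),   // search window per pyramid level
                             3);                 // number of pyramid levels

    std::vector<cv::Point2f> kept;
    for (size_t i = 0; i < nextPoints.size(); i++)
        if (status[i])
            kept.push_back(nextPoints[i]);

    points = kept;                               // lost points are dropped
}
```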

The initialize function sets the starting points to track from.
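For example, seeding the tracker with Shi-Tomasi corners (again a sketch, the parameters here are just reasonable defaults):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of initialize: pick strong corners to track and refine them
// to sub-pixel accuracy.
void initialize(const cv::Mat& gray, std::vector<cv::Point2f>& points)
{
    cv::goodFeaturesToTrack(gray, points,
                            200,      // maximum number of corners
                            0.01,     // quality level
                            10);      // minimum distance between corners

    if (!points.empty())
        cv::cornerSubPix(gray, points, cv::Size(10, 10), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS,
                                          20, 0.03));
}
```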

Two additional functions that the tracker has are the ability to add or remove points manually.

This is useful in combination with mouse events, so you can track a specific point in the face area and analyze its movement; a sketch of that wiring follows below.
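A minimal version could look like this (addPoint, removePoint and the window name are hypothetical; the real class exposes its own methods):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical helpers for managing tracked points by hand, hooked up to
// an OpenCV mouse callback so a click adds a point to track.
static std::vector<cv::Point2f> points;

void addPoint(const cv::Point2f& p)
{
    points.push_back(p);
}

void removePoint(size_t index)
{
    if (index < points.size())
        points.erase(points.begin() + index);
}

void onMouse(int event, int x, int y, int /*flags*/, void* /*userdata*/)
{
    if (event == cv::EVENT_LBUTTONDOWN)
        addPoint(cv::Point2f((float)x, (float)y));
}

// after the window is created:
// cv::namedWindow("tracker");
// cv::setMouseCallback("tracker", onMouse);
```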

Kalman Filter

Another interesting way to track the movement of points is by using a Kalman filter. You can see the full class here: https://github.com/screename/Kalman-Filter-Tracker/

First we define the filter in the constructor of the class.
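In sketch form (the state is position plus velocity, the measurement is just the observed position; the class and member names here are illustrative):

```cpp
#include <opencv2/opencv.hpp>

// Sketch of the tracker class: a Kalman filter with a 4D state (x, y, vx, vy)
// and 2D measurements (the observed x, y position).
class KalmanTracker
{
public:
    KalmanTracker() : kf(4, 2, 0)   // 4 state vars, 2 measurement vars, no control
    {
        measurement = cv::Mat::zeros(2, 1, CV_32F);
        init();
    }

    cv::Point2f track(const cv::Point2f& measured);   // see below

private:
    void init();                                      // see below
    cv::KalmanFilter kf;
    cv::Mat measurement;
};
```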

The init function sets the identity and transition matrices.
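Continuing the sketch, something like:

```cpp
// Sketch of init: a constant-velocity transition matrix plus identity
// measurement, noise and error covariance matrices.
void KalmanTracker::init()
{
    // state transition: x += vx, y += vy on every step
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, 1, 0,
        0, 1, 0, 1,
        0, 0, 1, 0,
        0, 0, 0, 1);

    cv::setIdentity(kf.measurementMatrix);
    cv::setIdentity(kf.processNoiseCov,     cv::Scalar::all(1e-4));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));
    cv::setIdentity(kf.errorCovPost,        cv::Scalar::all(0.1));
}
```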

The track function computes the point's prediction and corrected estimate.
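Again as a sketch:

```cpp
// Sketch of track: predict where the point should be, then correct the
// filter with the measured position and return the new estimate.
cv::Point2f KalmanTracker::track(const cv::Point2f& measured)
{
    kf.predict();                                // prediction for this step

    measurement.at<float>(0) = measured.x;
    measurement.at<float>(1) = measured.y;

    cv::Mat estimated = kf.correct(measurement); // estimate after the measurement

    return cv::Point2f(estimated.at<float>(0), estimated.at<float>(1));
}
```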

Finally, to draw the points on a matrix we run the following loop.
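Something like this, assuming we kept the estimated points in a vector along the way:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of the drawing loop: connect consecutive estimated points so the
// track shows up as a path on the output image.
void drawTrack(cv::Mat& image, const std::vector<cv::Point2f>& trackedPoints)
{
    for (size_t i = 1; i < trackedPoints.size(); i++)
    {
        cv::line(image, cv::Point(trackedPoints[i - 1]), cv::Point(trackedPoints[i]),
                 cv::Scalar(0, 255, 0), 1);
        cv::circle(image, cv::Point(trackedPoints[i]), 2, cv::Scalar(0, 0, 255), -1);
    }
}
```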

So for tracking an eye movement this will look somewhat like this result:

[Animated gif: Kalman filter tracking of eye movement]

 Next steps

Obviously there is a lot of work to be done in translating the movement into meaningful gestures; this is where the app will really tap into the human aspects. As usual with a big task, I like to start with small chunks: first understanding if an eye was closed or an eyebrow was raised, then seeing if this happens in conjunction with another behavior. Matching these symptoms will be the core of the app, trying to identify where something unnatural happened.

Resources  

https://github.com/screename/Motion-Tracker

https://github.com/screename/Object-Detector

http://milbo.org/muct/

http://opencv-code.com/tutorials/eye-detection-and-tracking/

http://willpatera.com/pupil/Kassner_Patera_Pupil_Book.pdf

http://hackaday.com/2012/05/30/opencv-knows-where-youre-looking-with-eye-tracking/

http://www.morethantechnical.com/2011/06/17/simple-kalman-filter-for-tracking-using-opencv-2-2-w-code/

https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/cpp/lkdemo.cpp?rev=4118

http://stackoverflow.com/questions/11169146/opencv-vehicle-tracking-using-optical-flow

https://github.com/gnebehay/OpenTLD

http://personal.ee.surrey.ac.uk/Personal/Z.Kalal/

http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html

http://dasl.mem.drexel.edu/~noahKuntz/openCVTut9.html

http://stackoverflow.com/questions/9701276/opencv-tracking-using-optical-flow/9702540#9702540