OK Signals

oksignals is an emotion recognition matching system for the dating site okcupid. It uses computer vision to identify attraction and compatibility signals based on the user’s responses to the visuals, text, and videos they encounter while browsing profiles on the site.

I’m old enough to remember that, around 10 years ago, people would have found it odd if someone was using an online dating site. When I told this idea to my friends who use okcupid, I got a somewhat similar response: “you mean an algorithm will watch people looking at the site and analyze their reactions? That’s creepy.” In some ways maybe it is, but it will probably become a reality soon enough, and we will all find it normal once we understand the benefits: it is not about violating our privacy but about understanding our own emotions. Obviously the state of computer vision in a web environment still needs many improvements, but with the right ideas and focus I’m sure that will happen very quickly as well.

The idea is very simple and has actually been tested in real-life dating: there are clear signals of attraction between people, even though they may be harder for us to notice when we are the one on the spot. There are many signals that can indicate attraction; on a computer even a mouse movement may be one, but more telling are things like eye movements, leaning in, heart rate changes, and flushing or blushing.

[Figure: oksignals flow diagram]

For the first test, knowing that most people have a simple web camera at home, I wanted to test the attraction level based on a skin color change (flushing or blushing). The thought behind it was that this is a simple change that any camera can pick up, since we are only relying on a change within the same input; no real image quality or detail is needed.

Signal Processing

The process is very simple. First we look at a stream of images and, for each image, try to detect a face. If a face is found, we run the kmeans algorithm to reduce its colors and get a sense of the most prominent ones, then we take the RGB values of each color found and compare them to the set of colors found in earlier frames. If we see a big color change, we assume there is some kind of reaction. Note that this may also be caused by a light change, since someone may simply move closer to the monitor or farther away from it, which changes the color reflected on their facial skin, so we take that into consideration too.

First we define a few variables: the main image and the face matrix, plus a cascade classifier that loads the haar cascade xml file used to check whether there is a face in the frame or not.
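A minimal sketch of that setup might look like this (the file name and variable names are my own assumptions, not the original code):

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

// The current frame from the camera and the cropped face region we analyze.
cv::Mat frame;
cv::Mat face;

// Classifier that loads the haar cascade xml used to look for a face.
cv::CascadeClassifier faceCascade;

bool init() {
    // The file name is an assumption; any frontal-face cascade shipped with OpenCV works.
    return faceCascade.load("haarcascade_frontalface_alt.xml");
}
```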

We’ll use the getGrayscale function to get a gray mat for faster processing when we are looking for a face. Note that we will go back and crop the corresponding part of the original image once a face is found, since the detected face matrix will be a gray one.
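A small helper along those lines (the name getGrayscale comes from the text; the body is my best guess):

```cpp
// Return an equalized grayscale copy of the input for faster face detection.
cv::Mat getGrayscale(const cv::Mat& input) {
    cv::Mat gray;
    cv::cvtColor(input, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);
    return gray;
}
```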

To get the main colors from a face we’ll use the getColors function. Here we start by copying the original mat and running the kmeans algorithm, which takes the following arguments:

samples – the matrix we want to analyze.

clusters – how many clusters we want to find, i.e. how many groups we want to identify, or in other words how many colors we’d like to get.

labels – an empty mat which will function as a container corresponding to the color values we find.

centers – the values of the color groups we found.

We also pass a termCriteria and the number of attempts we’d like the algorithm to perform, as well as the type of clustering flag we’d like to use; in our case we are looking for centers.

Once we have run the kmeans we can set the cluster color values on the picture itself and place them in the containers. Finally, at the end of the function, we trim the containers so they match the size of our frame history.
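Pieced together from the description above, getColors might look roughly like this; the cluster count, history length, and per-channel containers are assumptions on my part:

```cpp
const int CLUSTERS = 4;      // how many prominent colors to extract (assumption)
const size_t HISTORY = 30;   // how many frames of color values to keep (assumption)

// Containers holding the dominant color values found in recent frames.
std::vector<float> reds, greens, blues;

void getColors(const cv::Mat& faceRoi) {
    // Copy the face and flatten it to one row per pixel with 3 float columns (B, G, R).
    cv::Mat samples = faceRoi.clone().reshape(1, (int)faceRoi.total());
    samples.convertTo(samples, CV_32F);

    cv::Mat labels, centers;
    cv::kmeans(samples, CLUSTERS, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::MAX_ITER, 10, 1.0),
               3,                      // attempts
               cv::KMEANS_PP_CENTERS,  // we are looking for centers
               centers);

    // Store each cluster center's channel values in the containers.
    // (The original also paints the pixels with their cluster colors to visualize
    //  the reduction; that step is omitted here.)
    for (int i = 0; i < centers.rows; i++) {
        blues.push_back(centers.at<float>(i, 0));
        greens.push_back(centers.at<float>(i, 1));
        reds.push_back(centers.at<float>(i, 2));
    }

    // Trim the containers so they only cover the frame history we care about.
    while (blues.size() > HISTORY * CLUSTERS) {
        blues.erase(blues.begin());
        greens.erase(greens.begin());
        reds.erase(reds.begin());
    }
}
```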

To find a face we use the traditional opencv haar cascade classifier. First we convert the original mat into a gray one for faster processing, then we equalize its histogram so the features are more pronounced. Then we call detectMultiScale on our cascade classifier, which fills our faces vector with a rectangle for each face found. If we didn’t find any face, we flip the image and repeat the process.
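A sketch of that step, reusing the cascade and getGrayscale helper from above (the function name findFaces and the detection parameters are mine):

```cpp
// Look for faces in the frame; if none are found, mirror the image and try again.
std::vector<cv::Rect> findFaces(const cv::Mat& input) {
    cv::Mat gray = getGrayscale(input);

    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));

    if (faces.empty()) {
        cv::Mat flipped;
        cv::flip(gray, flipped, 1);   // flip around the vertical axis
        faceCascade.detectMultiScale(flipped, faces, 1.1, 3, 0, cv::Size(80, 80));
    }
    return faces;
}
```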

To detect a color change, we analyze each color channel individually. We’ll call this function 3 times to find out the color differences on our red, green, and blue channels. The function loops through the color vector, looks for the lowest and highest values, and returns the difference between the two.
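One way to write that per-channel check (the name channelChange is mine; it assumes the containers filled in getColors above):

```cpp
// Return the spread between the lowest and highest value seen on one channel.
float channelChange(const std::vector<float>& channel) {
    if (channel.empty()) return 0.0f;
    float low  = channel.front();
    float high = channel.front();
    for (float v : channel) {
        low  = std::min(low, v);
        high = std::max(high, v);
    }
    return high - low;
}
```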

The flushCheck function is where it all comes together. Here we first look for a face, then loop through the faces found, calling the getColors function (which runs the kmeans algorithm) for each one. Looking at the color change values we found, we try to decide whether the change was significant enough.
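Putting the pieces together, flushCheck could look something like this; the threshold and the lighting heuristic are placeholders of my own, not the values used in the original:

```cpp
// Look for a face, extract its dominant colors, and decide whether the color
// change across recent frames is big enough to count as a reaction.
bool flushCheck(const cv::Mat& input) {
    std::vector<cv::Rect> faces = findFaces(input);

    for (const cv::Rect& rect : faces) {
        face = input(rect);   // crop the face from the original color frame
        getColors(face);
    }

    float r = channelChange(reds);
    float g = channelChange(greens);
    float b = channelChange(blues);

    // If all three channels move together it is more likely a lighting change
    // (leaning toward or away from the monitor) than a flush.
    const float THRESHOLD = 40.0f;   // placeholder value
    bool lightingShift = (g > THRESHOLD && b > THRESHOLD);

    return r > THRESHOLD && !lightingShift;
}
```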

To run it all from our main loop we’ll use this main function.
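And a minimal main loop to drive it from a webcam; the device index and window name are assumptions:

```cpp
int main() {
    if (!init()) return 1;     // load the haar cascade first

    cv::VideoCapture cap(0);   // default webcam
    if (!cap.isOpened()) return 1;

    while (cap.read(frame)) {
        if (flushCheck(frame)) {
            std::cout << "possible reaction detected" << std::endl;
        }
        cv::imshow("oksignals", frame);
        if (cv::waitKey(30) >= 0) break;   // press any key to quit
    }
    return 0;
}
```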

Here is a quick sample of me flushing and moving closer to and further away from the screen. You can see how the color values stabilize once I stay in one place, but once I move around they start getting higher and higher.

[GIFs: flush reaction and the corresponding color values]

Next Steps

Obviously there is much work between simply looking at these gifs and understanding whom I am attracted to, but this is a start toward understanding how I respond to my environment and the information I encounter while browsing a dating site. It would be interesting to test this with other sites as well, like a news site, where you could get an idea of which news turns you on, or even your mailbox. I guess this is the future of computing: rather than us staring at computers, it’s time for them to start staring back.