I’m working on a project that focuses on people’s emotional reactions while watching videos.
The current ideal model would be a real-time installation where we collect imagery of people’s faces while they’re watching videos (activated by an infrared sensor). The installation would analyze the cumulative level of smiles throughout the video and use that to determine the intensity of the light in the installation that people are standing under. The audience would essentially visualize the current viewer’s average reaction as they watch the video. Think of it this way: the more people smile, the stronger the light would be at that moment.
For that, I am using openFrameworks, but I have a few important questions about it:
– What do you think is the best way to detect an audience entering the installation and thereby activating the program?
– We know we can pull in a database with OF to determine whether or not a person is smiling. But how do we define a spectrum of smile levels and link that to the intensity of the light?
– How would you suggest we code in new audience data in real time so that the next audience experiences the average reaction of everyone before (in terms of light intensity)?
A few clarifying questions … I’m not quite clear about the setup of the installation. What videos will people be watching? What is the IR sensor for (or what do you mean by IR sensor … there are lots of sensors that use IR light)? Who is the audience? And how does the audience relate to the “current viewer”? Perhaps by “current viewer’s average reaction” you mean past viewers’ average reaction? Anyway, it’s not super important, but the details will help me understand the setup a little better …
This depends on the physical layout of your installation. The more “constraints” you have (e.g. making someone sit down on a bench that has pressure sensors), the easier it is to detect a specific action. On the other hand, the more constraints you have, the more restrictive your installation might feel. Completely unconstrained presence detection is tricky (we’ll be talking about it a lot in my computer vision course next semester!).
I would say that since you are required to track a face in order to measure a smile, you should simply check for the presence of a face in your camera feed to determine if someone is present. Of course, this may not work if you want a visitor to participate without seeing their face. This depends on the design of your space / interaction. To find faces, see the face finder included in openFrameworks as a starting point:
First, see this earlier question for some discussion:
To find a smile, and turn it into a number (say 0.0 - 1.0, where 0.0 is no smile, 1.0 is a huge smile and 0.5 is a half-smile), you might look at the following addons:
I would simply keep this as a single number. When there are smiles, you add some amount to this number. Think of it like charging a battery. Smiles “charge” the battery. There is a maximum charge (e.g. the number maxes out at 255 or something). When nobody “charges” the number by smiling (and thereby adding a little bit to the number), the number will slowly discharge (e.g. every update loop you subtract just a little bit). The number never goes below 0, but when it gets to zero the light’s brightness is 0.
This way, if you have a steady stream of smiles, your light will be bright. And if people don’t participate, the light will eventually fade out.