It’s a riot: the stressful AI simulation built to understand your emotions

Inspired by global unrest, Riot uses artificial intelligence, film and gaming technologies to help unpick how people react in stressful situations

An immersive film project is attempting to understand how people react in stressful situations, using artificial intelligence (AI), film and gaming technologies to place participants inside a simulated riot and detect their emotions in real time.

Called Riot, the project is the result of a collaboration between the award-winning multidisciplinary immersive filmmaker Karen Palmer and Professor Hongying Meng of Brunel University. The two have worked together previously on Syncself2, a dynamic interactive video installation.

Riot was inspired by global unrest, and specifically by Palmer’s experience of watching live footage of the Ferguson protests in 2015. “I felt a big sense of frustration, anger and helplessness. I needed to create a piece of work that would encourage dialogue around these types of social issues. Riots all over the world now seem to be [the] last form of [community] expression,” she said.

Whereas Syncself2 used an EEG headset to place the user in the action, with Riot Palmer wanted to achieve a more seamless interface. “Hongying and I discussed AI and facial recognition; the tech came from creating an experience which simulated a riot – it needed to be as though you were there.”

“Machine learning is the key technology for emotion detection systems. From the dataset collected from audiences, AI methods are used to learn from the data and build the computational model which can be integrated into the interactive film system and detect the emotions in real-time,” explained Meng.

The programme in development at Brunel can read seven emotions, but not all are appropriate for the experience created by the Riot team. Currently, Riot’s pilot interface can recognise three emotional states: fear, anger and calm.
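
To picture the pipeline Meng describes in outline, a classifier could be trained on labelled facial-feature data and then queried frame by frame for one of the three states. The sketch below is purely illustrative: the placeholder data, feature sizes and choice of model are assumptions, not the Brunel team’s implementation.

```python
# Illustrative sketch only: learn a model from labelled audience data,
# then classify each camera frame into one of Riot's three states.
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["fear", "anger", "calm"]  # the states Riot's pilot recognises

# Placeholder training set: each row stands in for a facial-feature vector
# (e.g. landmark distances) extracted from one labelled audience frame.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 68))    # 300 frames, 68 features each
y_train = rng.integers(0, 3, size=300)  # labels indexing into EMOTIONS

model = SVC().fit(X_train, y_train)

def detect_emotion(frame_features: np.ndarray) -> str:
    """Map one frame's feature vector to fear, anger or calm."""
    return EMOTIONS[int(model.predict(frame_features.reshape(1, -1))[0])]

print(detect_emotion(rng.normal(size=68)))
```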

I tried it along with Dr Erinma Ochu, a lecturer in science communication and future media at the University of Salford, whose PhD was in applied neuroscience.

Riot is played out on a large screen, with 3D audio surrounding us as a camera watches our facial expressions and computes in real time how we are reacting. Based on this feedback, the algorithm determines how the story unfolds.

We see looters, anarchists and police playing their parts and “interacting” directly with us. What happens next is up to us: our reactions and responses determine the story, and as the screen is not enclosed in a headset, but open for others to see, it also creates a public narrative.
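
One way to imagine that branching, as a sketch with invented scene names rather than Riot’s actual logic, is a lookup from the current scene and the detected emotion to the next clip:

```python
# Hypothetical branching table: the detected state picks the next scene.
# Scene names and the table itself are assumptions for illustration only.
NEXT_SCENE = {
    ("street", "calm"):  "walk_past_police_line",
    ("street", "fear"):  "duck_into_doorway",
    ("street", "anger"): "confront_looters",
}

def advance(scene: str, emotion: str) -> str:
    # Stay on the current scene if no branch is defined for this state.
    return NEXT_SCENE.get((scene, emotion), scene)

scene = "street"
for emotion in ["calm", "anger"]:  # stand-in for live camera readings
    scene = advance(scene, emotion)
    print(scene)
```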

Ochu reacted with jumps and gasps to what was happening around her and ultimately didn’t make it home. “It’s interesting to try something you wouldn’t do in real life so you can explore a part of your character that you might suppress if you were going to get arrested,” she said.

As a scientist and storyteller she felt Riot was ahead of the curve: “This has leapfrogged virtual reality,” she said.

According to the Riot team, virtual reality (VR) developers have struggled to create satisfying stories in an environment in which, unlike film, you can’t control where the user looks or what route they take through the narrative.

In order to overcome these issues and create a coherent, convincing storyline, the team from Brunel retrained their facial recognition software to work for Riot. “[This] provides a perfect platform to show our research and development. Art makes our work easier to understand. We have been doing research in emotion detection from facial expression, voice, body gesture, EEG, etc for many years,” said Meng. He hopes the project’s success will make people see the benefits of AI, leading to the development of smart homes, buildings and cities.

For now, the emotion detection tool being developed at Brunel can be used in clinical settings to measure pain and emotional states such as depression in patients. Similar tech has already been used in a therapeutic setting; a study last year at the University of Oxford used VR to help those with persecutory delusions. Those who trialled real-life scenarios combined with cognitive therapy saw significant improvement in their symptoms.

But can Riot’s current AI facial recognition tech work for everyone? People with Parkinson’s, or with sight or hearing issues, might need an EEG headset and other physical monitors to gain the same immersive experience, unless tech development rapidly catches up with Palmer’s ultimate vision of a 360-degree screen, which would also allow a group of participants to play together.

Perhaps Riot and its tech could herald a new empathetic, responsible and responsive future for storytelling and gaming, in which the viewer or player is encouraged to bring about change both in the narrative and in themselves. After all, if you could truly see a story from another person’s point of view, what might you learn about them and yourself? How might you carry those insights into the real world to make a difference?

The V&A will be exhibiting Riot as part of the Digital Design Weekend in September 2017. The project is currently shortlisted for the Sundance New Frontier Storytelling Lab.

This article originally appeared on the Guardian website.
