Breaking the fourth wall in human-computer interaction: Really talking to each other

Hold a conversation with Harry Potter! Credit: Interactive Systems Group, The University of Texas at El Paso, CC BY-ND

Have you ever talked to your computer or smartphone? Maybe you've seen a coworker, friend or relative do it. It was likely in the form of a question, asking for some basic information, like the location of the best nearby pizza place or the start time of tonight's sporting event. Soon, however, you may find yourself having entirely different interactions with your device – even learning its name, favorite color and what it thinks about while you are away.

It is now possible to interact with computers in ways that seemed beyond our dreams a few decades ago. Witness the huge success of applications as diverse as Siri, Apple's voice-response personal assistant, and, more recently, the Pokémon Go augmented reality video game. These apps, and many others, enable technology to enhance people's lives, jobs and recreation.

Yet the potential for future progress goes well beyond just the newest novelty game or gadget. When these technologies are properly merged, computers can become virtual companions, performing many roles and tasks that require awareness of physical surroundings as well as human needs, preferences and even personality. In the near future, these technologies can help us create virtual teachers, coaches, trainers, therapists and nurses, among others. They are not meant to replace human beings, but to enhance people's lives, especially in places where real people who perform these roles are hard to find.

This is serious next-level augmented reality, allowing a machine to understand and react to you as you exist in the real physical world. My colleagues and I focus on breaking the fourth wall of human-computer interaction, letting you and the computer talk to each other – about yourselves.

Bringing computers to life

Our goal was to help people build rapport with virtual characters and analyze the importance of "natural interaction" – without controllers, keyboard, mouse, text or additional screens.

To make the technology relatable, we created a Harry Potter "clone" by using IBM's Watson and our own in-house software. Through a microphone, you could ask our virtual Harry anything about his life, provided there was a reference for it in one of the seven books.
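At its core, a system like this runs a simple loop: transcribe the spoken question, retrieve the most relevant passage from the source text, and answer from that passage. The sketch below illustrates just the retrieval step with naive keyword overlap. It is a minimal illustration of the idea, not our actual Watson-based pipeline; the passages and function names are invented for the example.

```python
# Minimal sketch of retrieval-grounded question answering, the core idea
# behind a character like our virtual Harry. This is NOT the actual
# Watson pipeline; passages and scoring are toy stand-ins.

def score(question: str, passage: str) -> int:
    """Count content words the question and passage share."""
    stop = {"the", "a", "an", "is", "was", "of", "in", "to", "and", "what"}
    q_words = {w.strip("?.,!").lower() for w in question.split()} - stop
    p_words = {w.strip("?.,!").lower() for w in passage.split()} - stop
    return len(q_words & p_words)

def answer(question: str, passages: list[str]) -> str:
    """Return the best-matching passage, or admit ignorance."""
    best = max(passages, key=lambda p: score(question, p))
    if score(question, best) == 0:
        return "I don't recall that from the books."
    return best

# Invented example passages standing in for indexed book text.
passages = [
    "Harry's owl is named Hedwig.",
    "Harry plays Seeker for the Gryffindor Quidditch team.",
]
print(answer("What is the name of Harry's owl?", passages))
# -> "Harry's owl is named Hedwig."
```

A production system replaces this keyword overlap with far more sophisticated language understanding, but the shape of the loop – question in, grounded passage out – is the same.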

Since then we have also built a museum guide that helps visitors experience art. Our prototype character, named Sara, resides in a gallery in Querétaro, Mexico, where people can talk to her and ask about the artwork on display.

We also created a "Jeopardy"-style game host, with whom you can play the popular trivia game filled with questions about our university. You talk to the character as if he were a real host, choosing the category you want to play and answering questions.

We even have our own virtual tour guide at the Interactive Research Group laboratory at UTEP. She answers any questions our hundreds of yearly visitors may have, or asks the researchers to help her out if it is a tough question.

Our most advanced project is a survival scenario where you need to talk, gesture and interact with a virtual character to survive on a deserted island for a fictional week (about an hour in real time). You befriend the character, build a fire, go fishing, find water and shelter, and escape other dangers until you get rescued, using just your voice and full-body gesture tracking.

A researcher interacts through speech and gesture with Adriana, the jungle survival virtual character. Credit: Interactive Systems Group, The University of Texas at El Paso, CC BY-ND

Understanding humans

These projects are fun to "play" for a reason. When we build human-like characters, we have to understand people – how they move, talk and gesture, and what it all means when put together. This doesn't happen in an instant. Our projects are fun and engaging to keep people interested in the interaction for a long time.

We try to make people forget that there are sensors and cameras hidden in the room, helping our characters read body posture and listen to their words. While people interact, we analyze how they behave, looking for different reactions to controlled changes in the characters' personalities, gestures, speech tones and rhythms, and even small things like breathing, blinking and gaze movement.
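For a sense of what "reading body posture" involves at the lowest level, the sketch below checks whether a tracked skeleton has a hand raised above the head, using joint coordinates like those a Kinect-style sensor reports. The joint names, coordinates and threshold are assumptions for illustration, not our lab's actual recognition code.

```python
# Toy posture check over skeleton joints, as a Kinect-style sensor might
# report them. Joint names, values and the threshold are illustrative
# assumptions, not our actual tracking code.
from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # meters, left/right
    y: float  # meters, up/down
    z: float  # meters, distance from sensor

def hand_raised(joints: dict[str, Joint], margin: float = 0.10) -> bool:
    """True if either hand is at least `margin` meters above the head."""
    head = joints["head"]
    return any(
        joints[hand].y > head.y + margin
        for hand in ("hand_left", "hand_right")
    )

# One frame of made-up tracking data: right hand above the head.
frame = {
    "head": Joint(0.0, 1.6, 2.0),
    "hand_left": Joint(-0.3, 1.0, 2.0),
    "hand_right": Joint(0.3, 1.8, 2.0),
}
print(hand_raised(frame))  # -> True
```

Real gesture recognition layers many such checks over time, smoothing noisy sensor data and matching sequences of poses rather than single frames.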

The clear next step is to bring these characters out of their flat screens and virtual worlds, either by having people join them in their virtual environments through virtual reality, or by having the characters appear present in the real world through augmented reality.

We're building on functions – particularly graphic enhancements – that have been around for several years. Several GPS-based games, like Pokémon Go, are available for mobile devices. Microsoft's Kinect system for Xbox lets players virtually try on different articles of clothing, or adds an exotic location background to a video of the person, making it appear as if they were there.

More advanced systems can alter our perspective of the world more subtly – and yet more powerfully. For example, people can now touch, manipulate and even feel virtual objects. There are devices that can simulate smells, making visual scenes of beaches or forests far more immersive. Some systems even let a user choose how certain foods taste through a combination of visual effects and smell augmentation.

A sampling of our team’s developments in virtual characters.

A vast and growing potential

All these are but rough sketches of what augmented reality technology could one day allow. So far most work is still heavily centered on video games, but many fields – such as health care, education, military simulation and training, and architecture – are already using it for professional purposes.

For now, most of these devices operate independently from one another, rather than as a whole ecosystem. What would happen if we combined haptic (touch), smell, taste, visuals and geospatial (GPS) information at the same time? And what if we then added a virtual companion to share the experience?

Unfortunately, it's common for new technology to be met with fear, or portrayed as dangerous – as in movies like "The Matrix," "Her" or "Ex Machina," where people live in a dystopian simulated world, fall in love with their computers or get killed by robots designed to be indistinguishable from humans. But there is great potential, too.

One of the most common questions we get is about the potential misuse of our research, or if it is possible for the computers to attain a will of their own – think "I, Robot" and the "Terminator" movies, where the machines are actually built and operating in the physical world. I would like to think that our research as a community will be used to create incredible experiences, fun and engaging scenarios, and to help people in their daily lives. To that end, if you ask any of our characters if they are planning to take over the world, they will tease you and check their calendar out loud before saying, "No, I won't."

This article was originally published on The Conversation. Read the original article.