People open up more to smart speakers that listen actively, researchers find

A team of researchers suggests that smart speakers—like Alexa and Siri—that demonstrate active listening cues might be able to serve a therapeutic role for users. Credit: Brandon Romanchuk/Unsplash.

Adding random, short expressions of understanding to a conversation may turn smart speakers, such as Alexa and Siri, into robot therapists that allow people to open up more without violating their privacy, according to a team of researchers.

In a study, the team programmed a smart speaker to respond with backchannels—short words and non-words, such as “hmmm,” “right” and “uh-huh”—intermittently during chats with users. These utterances served as the sonic equivalent of head nods and facial gestures, leading users to perceive the virtual assistant as actively listening to them and prompting more emotional disclosure, the researchers said.

Because users perceived the smart speakers as good listeners and disclosed more about themselves, the speakers could serve as emotional support devices for people, especially isolated people, said Saeed Abdullah, assistant professor of information sciences and technology and affiliate of the Institute of Computational and Data Sciences.

The users also tended to respond with more positive words when they interacted with the devices, which lends additional support to the idea that smart-speaker interactions could have a beneficial therapeutic effect, said Abdullah.

“This represents an initial step in terms of rethinking or reimagining the types of interactions that can be supported through smart speakers and other voice interfaces,” said Abdullah. “We are quite interested in figuring out if we can use smart speakers to provide some sort of therapeutic support for individuals with mental illnesses, or even individuals who might be socially isolated, which is a growing concern for mental health care professionals.”

According to the researchers, the ability to signal backchanneling at random is a win-win because it can promote the perception of active listening while maintaining the user’s privacy. Coding a smart-speaker tool that actually listens and responds to a person would require the device to take in information from the user, which might be uncomfortable for some users, according to S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State.

“In some ways, this is more socially responsible because we are doing it in a privacy-protected way,” said Sundar, who worked with Abdullah on the study. “We are not collecting any data, but still giving people the social cue that they need for them to continue with their disclosure and benefit from the cathartic aspect of the process.”

The approach could be both helpful and ethical as long as companies and designers avoid false advertising about its abilities, he added. For example, companies should not fool users into thinking the smart speaker is really listening to them, said Sundar, who is also director of Penn State Center for Socially Responsible Artificial Intelligence (CSRAI).

Random backchanneling also solves some significant technical issues, added Abdullah.

“One technical challenge is that it’s quite difficult to understand when to provide those backchanneling reactions because it requires lots of computational processing to understand the context,” said Abdullah.
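To make the idea concrete, below is a minimal sketch of what a random backchanneling loop could look like. It is not the researchers’ implementation: the utterance pool, the timing window, and the `play_audio` and `session_active` callbacks are all assumptions chosen for illustration. The point it demonstrates is that the device never needs to analyze what the user says; it only picks a random cue at a random moment.

```python
import random
import time

# Hypothetical pool of backchannel utterances, modeled on the examples in the article.
BACKCHANNELS = ["hmmm", "right", "uh-huh"]


def random_backchannel_loop(play_audio, session_active, min_gap=8.0, max_gap=20.0):
    """Play a short acknowledgment at random intervals while a session is active.

    Privacy property: nothing the user says is captured, analyzed, or stored.
    Both the timing and the choice of utterance are random.
    """
    while session_active():
        # Wait a random amount of time before the next cue (window is an assumption).
        time.sleep(random.uniform(min_gap, max_gap))
        play_audio(random.choice(BACKCHANNELS))


# Example usage with stand-in callbacks (placeholders, not a real smart-speaker API).
if __name__ == "__main__":
    start = time.time()
    random_backchannel_loop(
        play_audio=lambda text: print(f"[speaker says] {text}"),
        session_active=lambda: time.time() - start < 60,  # run for one minute
    )
```

Because the loop needs no speech understanding, it sidesteps the computational problem Abdullah describes of deciding contextually when a reaction is appropriate.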

The researchers recruited 40 participants for the study. The participants were randomly assigned to one of two experimental conditions: one featured a smart speaker app programmed for random backchanneling, while a control condition did not offer any backchanneling. The participants were asked to interact with the smart speaker to express their thoughts regarding certain personal life matters.

Following the interaction, the participants completed questionnaires to determine their perception of active listening, emotional states, perceived emotional support from the speaker and the perceived quality of the smart speaker’s backchanneling ability.

To measure self-disclosure, the researchers recorded the time that users spent interacting with the smart speaker and the number of emotional words they used.
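As a rough illustration of that kind of measurement—not the researchers’ actual coding scheme—one could compute interaction time and the count of words drawn from an emotion lexicon in a transcript. The tiny word list and metric names below are purely hypothetical stand-ins.

```python
# Toy measure of emotional disclosure: interaction time plus emotional-word count.
# EMOTION_WORDS is a hypothetical stand-in for a validated emotion lexicon.
EMOTION_WORDS = {"happy", "sad", "worried", "lonely", "excited", "afraid", "grateful"}


def disclosure_metrics(transcript: str, duration_seconds: float) -> dict:
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    emotional = [w for w in words if w in EMOTION_WORDS]
    return {
        "duration_seconds": duration_seconds,
        "emotional_word_count": len(emotional),
        "emotional_word_rate": len(emotional) / max(len(words), 1),
    }


print(disclosure_metrics("I have been feeling lonely and a bit worried lately.", 95.0))
```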

The researchers published their work in the Proceedings of the ACM on Human-Computer Interaction and presented it at the recent annual conference on Computer-Supported Cooperative Work and Social Computing.

More information:
Eugene Cho et al, Alexa as an Active Listener: How Backchanneling Can Elicit Self-Disclosure and Promote User Experience, Proceedings of the ACM on Human-Computer Interaction (2022). DOI: 10.1145/3555164

Provided by
Pennsylvania State University


Citation:
People open up more to smart speakers that listen actively, find researchers (2022, December 19)
retrieved 19 December 2022
from https://techxplore.com/news/2022-12-people-smart-speakers.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
