AMES, IA — With the rapid emergence of artificial intelligence and spillover from Hollywood special effects, digital humans are entering the workforce. They’re sales assistants that never sleep, multilingual presenters and trainers, and social media influencers who always stay on-brand.
Lingyao (Ivy) Yuan, assistant professor of information systems and business analytics at Iowa State, has researched the emergence of digital humans over the last seven years. She says the new technology offers companies several big advantages. Along with their ability to work 24/7, they never ask for a raise and “always follow company policy.”
Digital employees can also provide consistent service and help people share sensitive information. Yuan pointed to a study that found military veterans were more willing to talk about symptoms with a digital human than with real-life medical professionals.
But investing in digital humans costs money and may not be the right choice for certain companies or services, said Yuan. There are also a lot of ethical questions about their use.
“As we’re seeing with ChatGPT, new tech can be a disrupter,” said Yuan. “Companies need to discuss potential impacts and unintended outcomes before jumping into the decision of implementing digital humans. My colleagues and I want to be part of the discussion. We want to provide our insights on the future direction of AI.”
To reach industry leaders, Yuan and her colleagues wrote “AI with a Human Face: The case for – and against – digital employees” for Harvard Business Review. They drew from the latest research, including their own, and interviews with founders and CEOs of companies like Pinscreen and EY that have deployed digital employees. The article highlights four types of digital humans and offers guidance for companies considering an investment in this area of AI.
“Even though digital employees are coming, is this the best time for companies to dive into it? It’s still in the early development stage and very expensive. Some companies that have used it have failed, while others have succeeded,” said Yuan.
Four categories of digital humans
- Virtual agents are for specific, one-time tasks. They provide many of the same benefits as chatbots but have a human-like appearance. Companies could use them as sales agents or for training. The University of Southern California’s Keck School of Medicine is researching how virtual agents could help future health care professionals practice identifying symptoms and medical conditions.
- Virtual assistants also help people with specific tasks, but similar to voice control assistants like Alexa and Siri, the relationship with the user is ongoing. The researchers point to Digital Domain as an early pioneer with this technology. The company is developing digital assistants for Zoom that could take notes during a meeting, provide a summary and arrange schedules. Other potential applications include personal shopping and physical therapy.
- Virtual influencers are similar to human influencers on social media. They promote brands and fashion trends by posting photos and videos. Described on Instagram as a 19-year-old robot living in L.A., Miquela (formerly Lil Miquela) has 2.8 million followers. The virtual influencer was featured in ads for Prada and a campaign with Calvin Klein, and currently has a deal with Pacsun, the teen retailer.
- Virtual companions provide emotional support and form personal relationships with the user. The researchers see this developing technology as having the greatest impact in elder care by reducing loneliness and helping people stay in their homes longer. Along with reminding people when they need to take their medications or go to a doctor’s appointment, virtual companions will be able to carry on conversations and show empathy.
"Currently, virtual agents are the most prevalent of the four types of digital humans, but we believe the digital human’s fullest potential is as a virtual companion,” said Yuan.
The researchers provide a flow chart in the HBR article to help individual companies decide whether digital humans are the right choice. Questions include: Is there an emotional element to the interaction? Are users unsure of what they want? In many scenarios, other technologies offer better options.
Path to studying AI and digital humans
When Yuan started her Ph.D. in 2011, interest in AI was still relatively niche. She decided to focus her research on anthropomorphism (i.e., the attribution of human characteristics or behavior to non-human entities) after seeing a picture of three rocks with googly eyes during a presentation at a social psychology seminar.
“It triggered this idea that I was seeing a rock family, but I couldn’t find much literature on the topic from my field back then,” said Yuan.
She finished her dissertation as AI pushed into the mainstream, and said it was a natural transition to shift her research focus.
“I believe the fundamental key to treating AI as a human equivalent is to evoke the process of anthropomorphism. Realistic human faces can be a strong stimulus for people to treat digital humans as if they were real, even if it’s not the only way. I believe the visual and intelligence both need to be there,” said Yuan, adding that more research is needed.
Many of her research projects have been in collaboration with Mike Seymour at the University of Sydney. Seymour was a special effects manager at Disney Studio for 20 years before switching to academia. Together, they've blended their areas of expertise and pulled in other faculty, including Kai Riemer of the University of Sydney and Alan R. Dennis of Indiana University, to better understand how people perceive and behave with digital humans.
One of their recent studies found participants rated realistic-looking digital humans as more trustworthy than cartoon caricatures, especially in 3D virtual reality.
Several of their current research projects are focusing on virtual agents that look like celebrities.
"We're used to voice control assistants like Alexa and Siri,” said Yuan. “How would people feel about being served by AI customer service that has the face and voice of Hugh Jackman?”
Other projects center on virtual assistants in Zoom and financial settings, and whether the avatar’s appearance affects how people behave or perceive information. Another compares brain activity when people interact with a digital human versus a real person.