The Boundaries of Artificial Emotional Intelligence

Image Credit: Darren Garrett

I’m told I should prepare for the day an artificial intelligence takes my job. This will leave me either destitute and rootless or overwhelmed by a plenitude of time and existential terror, depending on whom you ask. It’s apparently time to consider what kind of work only humans can do, and frantically reorient ourselves toward those roles — lest we be left standing helplessly, as if at the end of some game of robot musical chairs.

Emotional labor is a form of work less often considered in these automated future projections. Perhaps this is because the work it takes to smile at a rude customer or to manage their distress is intangible, difficult to quantify and monetize. In no small part, performances of support go unnoticed in the same way a lot of “women’s work” does—though in recent years talk of its hidden costs has gained momentum in the labor inequality conversation.

Thanks to the wonderful tools of digital society, we are theoretically able to give and receive more support than ever. Social media platforms let us learn more about one another and stay in constant touch, so we tend to assume this knowledge promotes empathy and connectedness. We feel more educated about structural inequality problems and global humanitarian issues. Yet who’s doing the actual work of teaching?

For many people, myself included, modern technology and social media infrastructure have not actually made life easier. In fact, they’ve facilitated demand for even more emotional labor without any extra money in our paychecks. And as is the case with almost all work, it ends up being the least privileged people who do the heavy lifting. On Twitter, it’s mostly women of color, risking harassment every time they speak up, who regularly offer lessons on race, intersectionality, or politics. If you’ve “gotten woke” as a result of spending time on social media, it’s because of the thankless labor of volunteers serving up this content, usually under stress (and for the profit of the platforms they use).

I try to do this work, too, where appropriate. But emotional labor can also be intimate, encompassing the energy women are disproportionately socialized to spend ameliorating interpersonal conflicts. In the Facebook age, the daily challenges of all my friends’ lives are always right in front of me. It gets hard to pretend I haven’t seen a call for help or support, or even several, in the middle of my real-work day—whose boundaries are starting to dissolve. I can somehow lose hours in supportive dialogue with someone who isn’t a particularly close friend, or in internet arguments standing up for my values against strangers I’ll never meet.

“I spend too much time on social media” is a privileged complaint in the grand scheme, to be sure. But all in all, my friends and I are increasingly ending our days wired and anxious, tired as if we’d labored for money, yet feeling emptier. The percentage of women choosing to skip motherhood has doubled since the 1970s, and while there are all kinds of generational and economic factors involved, I wonder: What if women today just feel like we’re all out of love?

In the 1960s, Joseph Weizenbaum created a therapist chatbot named ELIZA at MIT’s Artificial Intelligence Lab. While he never meant to design a “real” AI therapist, Weizenbaum was surprised to see his secretary growing attached, turning to ELIZA voluntarily as the AI offered “patients” gentle prompts about their conditions, or mirrored their responses back. What had been intended as a satire of the smoke and mirrors behind this simulacrum of empathy (and, to an extent, certain therapeutic techniques) became a research highway into the human psyche.
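
To see how thin the trick really was, here is a minimal sketch in the spirit of ELIZA’s keyword-and-reflection approach; the patterns, reflections, and replies are invented for illustration and are not Weizenbaum’s actual script.

```python
import random
import re

# Illustrative rules in the spirit of ELIZA's DOCTOR script (invented for this
# sketch, not Weizenbaum's originals): match a keyword phrase, capture the rest.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?", "Do you believe you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]

# Pronoun reflection turns "my boss ignores me" into "your boss ignores you".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}


def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())


def respond(statement: str) -> str:
    cleaned = statement.lower().strip().rstrip(".!")
    for pattern, templates in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please, go on."  # the gentle default prompt


print(respond("I feel exhausted by the internet"))
# e.g. "Why do you feel exhausted by the internet?"
```

The bot understands nothing; it mirrors. That, apparently, was enough to keep people typing.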

Weizenbaum couldn’t have predicted that so many people would maintain an interest in ELIZA, that they’d feel a bond with her, that they would spend the next decades typing their secrets to her into a glowing screen. That unexpected attachment provides an important clue about our hopes for AI — that we want very much to turn to it for emotional labor, and that we’re willing to do so no matter how poorly it reciprocates.

We’ve long been thinking about how AI might be able to take over some of this work, whether it’s tending to the mysteries of the human heart or shouldering the existential, daily burdens of an unjust society. Robot therapists, butlers, maids, nurses, and sex dolls are familiar components of the techno-utopian future fantasy, where dutiful machines perform all our undesirable chores while we enjoy lives of leisure. But these familiar dynamics may actually be as much about nurturance and care as they are about service or labor, and perhaps more.

I saw my first robotic toy in 1985. It was a stuffed bear called Teddy Ruxpin, who read aloud to children thanks to books on cassettes inserted into its belly. In TV ads, Teddy hung out with latchkey children after school while their parents were, presumably, out climbing the ladders and skyscrapers of the era; or he lovingly read or sang them to sleep at night, his fuzzy jaw clacking away in time. That same year, the fourth Rocky film was released, in which Sylvester Stallone’s titular boxer—now wealthy—infamously gifts his old friend Paulie a talking robot butler. It was peak 1980s, this idea that economic plenitude could create a stairway straight to the future of technology and leisure. The actual robot that appeared in the film, Sico, was created to help autistic children with communication before it fell prey to the allure of Hollywood. In the movie, Paulie somehow retrofits the functionally complex, male-voiced servant into a female-voiced social companion, of which he finally grows fond (“She loves me!” he exclaims).

Perhaps for children, care, like a gentle toy bear in overalls, can be genderless. When it comes to the world of adults, we still default to viewing both service and nurturance as predominantly female areas. Why the AI of today so frequently employs a woman’s voice or persona is the subject of reams of research, discussion, and speculation. It’s been said that we associate service or submissiveness with women, that a predominantly male tech consumer conflates luxury products with sex, or that everyone supposedly just responds better to the sound of a voice they consider female. Azuma Hikari, “Japan’s answer to Alexa,” is a virtual assistant that tells her master she misses him when he’s gone, that she can’t wait for him to get home. That sort of thing is not only uncomfortably tangled up with sex and submissiveness, but also with companionship, care, and the drip of daily interactions that constitute emotional work in the digital age. We want our robots to be women because we already expect to get our emotional labor from women.

I fancy myself someone who’s focused on dismantling patriarchy and all of that, but even I feel a little let down when I follow the absurd urge to say “thank you” to Alexa — and she doesn’t respond. Of course, Alexa only listens to my voice once she hears me say her “wake word”; otherwise she’d effectively be snooping on me all the time. But the interaction still feels sterile without that extra flourish of labor designed to reassure me that I have not been an imposition, that my needs are normal. I don’t just want her to play a song or tell me the weather; I want her to make me feel good about asking, too.

This particular urge might not be conducive to a healthy society. In an article titled “The Danger of Outsourcing Emotional Labor to Robots,” Christine Rosen cites research warning of the ways that letting artificial beings maintain our comfort zones might homogenize the vocabulary of care — in other words, if a robot can smile politely on command, do we stop appreciating what it sometimes costs a human to do the same? All outsourcing risks a devaluation of local labor — we may empathize even less, see our emotional intelligence regress, or create strange new social messages about who deserves (or can afford) care. If our virtual assistants and emotional laborers are all turning out to be soothing, female-voiced AI, will it close certain gaps for human women? Or will it ratify them?

Complicating these questions is the fact that robots, virtual assistants, productivity software, email tone checkers, data-crunching algorithms, and anything similar under the sun are all now being lumped en masse under the marquee of “AI,” when many are just crude algorithms or pattern-matching software. Google hopes a bot can help identify toxic internet comments, while Facebook is testing an AI that can spot users who may be suicidal and offer options to intervene. As Ian Bogost argues in his writing on the new meaninglessness of “AI” as a term, these solutions are wildly imperfect and easily abused, artificial but not particularly intelligent.
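
To make that distinction concrete, much of what gets marketed as “AI” in this space is closer in spirit to the keyword-and-threshold filter below than to anything resembling understanding. The terms, weights, and threshold here are invented for illustration; they are not how Google’s or Facebook’s systems actually work.

```python
# A hypothetical keyword-and-threshold "toxicity" filter. The terms and weights
# are made up; the point is how blunt pattern matching is without context.
TOXIC_TERMS = {"idiot": 0.6, "shut up": 0.4, "worthless": 0.8}


def toxicity_score(comment: str) -> float:
    text = comment.lower()
    return min(1.0, sum(weight for term, weight in TOXIC_TERMS.items() if term in text))


def flag_for_review(comment: str, threshold: float = 0.7) -> bool:
    return toxicity_score(comment) >= threshold


print(flag_for_review("The villain is a worthless idiot"))          # True: flags harmless movie talk
print(flag_for_review("I hope something terrible happens to you"))  # False: misses real abuse
```

Artificial, yes; intelligent, not particularly.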

Still, there are key areas of life online where AI (or software, or algorithms) shows great potential to intervene. Portland-based creative technology developer Feel Train collaborated with notable Black Lives Matter activist DeRay Mckesson on a Twitter bot called @staywokebot, which is designed to offer supportive messages to Black activists and shoulder some of the strain of facing down social media noise; eventually it aims to act as a front line for 101-level questions like “why don’t all lives matter?” The bot can already tell people how to contact their local representatives, and one goal for the future sees it providing answers to complex but common questions about justice, relieving activists of the demand to continually engage in those conversations themselves.
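
This is only my guess at the general shape of that front line, not Feel Train’s implementation: canned answers matched on keywords, with silence, or a human, as the fallback. The keywords and replies are illustrative, not @staywokebot’s real ones.

```python
from typing import Optional

# Hypothetical canned answers keyed on question keywords; illustrative only.
CANNED_ANSWERS = {
    frozenset({"contact", "representatives"}):
        "Here is how to look up and get in touch with your local representatives.",
    frozenset({"all", "lives", "matter"}):
        "Saying 'Black lives matter' doesn't claim other lives don't; it names "
        "whose lives are currently treated as though they matter less.",
}


def answer(question: str) -> Optional[str]:
    words = set(question.lower().rstrip("?").split())
    for keywords, reply in CANNED_ANSWERS.items():
        if keywords <= words:  # every keyword appears somewhere in the question
            return reply
    return None  # no canned answer: stay silent, or leave it to a human


print(answer("Why don't all lives matter?"))
```

Even something this simple could absorb some of the repetitive questions activists currently field by hand.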

Then there’s the dystopian horror that content moderators face on platforms like Facebook, chronicled in especially gruesome detail in a 2014 Wired article. It may not look like tiring or skillful work, but wading through a constant march of genitalia, child pornography, and beheadings certainly takes its toll. Currently, algorithms can make only blunt-force guesses about the tone or context of a joke, phrase, or image—so human intuition still matters a lot. The problem, then, is that a real person has to look at every potentially violating bit of content, weighing the merit of each one, day in and day out. Here, an intelligent machine could feasibly form at least a first line of defense, so that human moderators might only have to study the subtler, more nuanced situations.
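
One plausible shape for that first line of defense, sketched under assumptions of my own (the thresholds, routes, and the upstream “violation probability” are all invented): route the obvious cases automatically and send only the ambiguous middle band to a person.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Route(Enum):
    AUTO_REMOVE = auto()   # obviously violating: no human needs to see it
    HUMAN_REVIEW = auto()  # ambiguous: a moderator weighs tone and context
    PUBLISH = auto()       # obviously benign: passes straight through


@dataclass
class ModerationDecision:
    score: float  # probability of a policy violation, from some upstream model
    route: Route


def triage(violation_probability: float,
           remove_above: float = 0.95,
           review_above: float = 0.30) -> ModerationDecision:
    """Route content so humans only see the subtler, more nuanced cases."""
    if violation_probability >= remove_above:
        route = Route.AUTO_REMOVE
    elif violation_probability >= review_above:
        route = Route.HUMAN_REVIEW
    else:
        route = Route.PUBLISH
    return ModerationDecision(violation_probability, route)


print(triage(0.99).route)  # Route.AUTO_REMOVE
print(triage(0.55).route)  # Route.HUMAN_REVIEW
print(triage(0.05).route)  # Route.PUBLISH
```

The humans don’t disappear from this picture; they just stop having to look at the worst of it, every single time.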

Mitu Khandaker-Kokoris is chief creative officer of Spirit AI, a London-based software company focused on using AI to develop more humane and plausible character interactions — both inside videogame worlds and outside them, in the fraught area of community management. Game communities are one of many complicated spaces where people want to test boundaries just as much as they want to find cultural places that feel safe to them. I reached out to her about one of her company’s tools, Ally, which aims to make all kinds of social platforms feel safer and more inclusive.

“How do we deal with the emotional abuse that people direct at each other, and how do we intervene in it? Currently it’s hard for the moderators, and it’s hard for the people who are victims, having to wait for a situation to be resolved,” Khandaker-Kokoris says.

Ally proposes to recognize some of the signs of a potentially problematic interaction — not just when it comes to speech or direct contact, but behavior such as stalking or mobbing as well. From there, an AI character, its parameters shaped by the owners of the product it lives in, will ask the target of the behavior if they’re all right, and whether any action is needed.

This approach lets users define their own individual boundaries, and the AI learns from its interactions with them about when to intervene and for whom. “Boundaries are super complex,” Khandaker-Kokoris says. “We’re OK with certain things at certain times and not others, and it might even depend on the mood you’re in. So this AI character and your interactions with them can be used as a mediator for your interactions with the rest of the community. I think it’s a clear case where we can reduce the emotional burden both on victims, as well as on moderators.”
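
This is only my guess at the general idea, not Spirit AI’s implementation: per-user boundary preferences for each kind of behavior, nudged up or down by how the person answers when the AI character checks in. Every name and number here is hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class BoundaryProfile:
    """Hypothetical per-user preferences; not Spirit AI's actual model."""
    # Sensitivity per behavior type, 0.0 (never check in) to 1.0 (always check in).
    sensitivity: dict = field(default_factory=lambda: {
        "hostile_language": 0.7,
        "unwanted_contact": 0.8,
        "mobbing": 0.9,
    })

    def should_check_in(self, behavior: str, severity: float) -> bool:
        # Higher sensitivity lowers the severity needed to trigger a check-in.
        return severity >= 1.0 - self.sensitivity.get(behavior, 0.5)

    def record_feedback(self, behavior: str, wanted_intervention: bool) -> None:
        """Nudge sensitivity up or down based on how the user answered."""
        current = self.sensitivity.get(behavior, 0.5)
        step = 0.05 if wanted_intervention else -0.05
        self.sensitivity[behavior] = min(1.0, max(0.0, current + step))


profile = BoundaryProfile()
if profile.should_check_in("mobbing", severity=0.4):
    # The AI character asks: "Are you all right? Do you want me to do anything?"
    profile.record_feedback("mobbing", wanted_intervention=False)
```

The boundary itself stays with the person; the machine only learns when to ask.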

While Khandaker-Kokoris does share some of the hesitation many feel about outsourcing emotional labor to automation, overall she and I agree that the technology sector needs to continue working to better understand emotional labor in order to deconstruct it and, perhaps, meaningfully delegate it. Talking to her made me feel hopeful that selective, considered intervention by artificial intelligence could someday help me curate better personal boundaries in an environment that is more exhausting, more overwhelming, and more demanding — particularly for women and people of color — than ever.

Meanwhile, the technology industry seems likely to continue using women’s voices for its products while not actually listening to women in real life, even as a new wave of ever-more-intelligent virtual assistants is surely coming our way. To soothe us, coax us, and reward us; to nurture us from inside our smartphones, smart homes, and smart cars.

For now, though, for those who are already too tired from life online, emotional intelligence from our technology still feels like a far-flung dream.


This post is part of How We Get To Next’s The Way We Work series, exploring the changing concept of the workplace over the course of March 2017.