When Black Mirror Becomes Reality: Microsoft Patents Chatbot That Allows People to “Talk” to the Dead

By: Caroline Foster

When Black Mirror writers created an episode in which a character talked to her dead husband through a chatbot that resembled him, viewers likely never imagined the device could become a reality.  With Microsoft’s recent patent, however, this far-fetched dystopian idea is now a real possibility.

Microsoft’s new patent describes a chatbot that could let users converse with a deceased loved one.  According to the filing with the U.S. Patent and Trademark Office, Microsoft detailed a method for creating a conversational chatbot modeled after a specific person — a “past or present entity … such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure.”  This tool would collect the specific person’s “social data,” such as “images, social media posts, messages, voice data and written letters from the chosen individual.”  The collected data would then be used to train a chatbot to “converse and interact in the personality of the specific person.”

If the chatbot lacks sufficient information about a person in a particular area, it could draw on outside data sources to develop an answer.  In conversing with a live person, the chatbot would attempt to mimic the deceased person’s conversational attributes, such as “style, diction, tone, voice, intent, sentence/dialogue length and complexity, topic and consistency,” in addition to behavioral characteristics such as “interests and opinions and demographic information such as age, gender, and profession.”  The chatbot could even take on the physical likeness of a person through two- and three-dimensional recreations generated from photos and videos of the individual.

While Microsoft’s general manager of AI programs, Tim O’Brien, claimed in a tweet that “there’s no plan for this,” the filing raises the question of why Microsoft sought the patent in the first place.  In a separate tweet, O’Brien conceded, “yes, it’s disturbing,” noting that the patent was filed in April 2017 and thus predates the AI ethics reviews the company now conducts.  Further, grief counselors warn that a device like this could foster addiction.  According to Elizabeth Tolliver, assistant professor of counseling at the University of Nebraska Omaha, “My fear is that it would become more like an addiction … I’m concerned that people would want more and more of the technology to feel closer to the person that they’ve lost rather than living the life they’re currently alive in.”

Not only are there ethical and psychological questions related to the possible use of this patent, but there are also legal concerns.  In some states, “an individual can be sued for using another person’s likeness, name, or personal attributes for an ‘exploitative purpose’ without permission from the individual.”  To have a claim for unlawful use of name or likeness, a plaintiff must satisfy three elements: (1) use of a protected attribute; (2) for an exploitative purpose; and (3) without consent.

As applied to the chatbot, there is use of a protected attribute – the deceased person’s name and likeness – and there is likely no consent unless the person agreed before death.  Whether this use of the person’s characteristics and personal information serves an exploitative purpose must also be determined.  The “use of someone’s name or likeness for news reporting and other expressive purposes is not exploitative, so long as there is a reasonable relationship between the use of the plaintiff’s identity and a matter of legitimate public interest.”  In the case of the chatbot, the deceased’s identity is likely not being used for commercial gain because each user would craft their own chatbot resembling a particular person; thus, the person’s name and identity are not being used for advertising or promotional purposes.

In addition, someone could have a claim for unlawful use of name or likeness if that person’s identity is used for an individual’s own benefit.  Here, the person’s identity is clearly being used for the individual’s own benefit: the individual wants a sense of connection to a deceased family member or someone else.  However, this benefit is purely emotional, while the benefits the element addresses appear to be monetary or status-based.  A chatbot therefore likely would not give rise to a claim for unlawful use of name or likeness, and some states do not even recognize postmortem rights, making the issue moot there.

Because there is no working prototype, these questions and concerns remain hypothetical for now, but it may not be much longer before a person’s personality “outlives” death through artificial means.

Student Bio: Caroline Foster is a second-year law student at Suffolk University Law School.  She currently serves as a staff member on the Journal of High Technology Law.  Caroline received her Bachelor of Arts from Bucknell University, double majoring in English Literature and Philosophy.

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.
