An Unethical Way to Make ChatGPT More Ethical

By: Karlie Rubin

ChatGPT swept the internet in a frenzy in the latter months of 2022.  Created by OpenAI, a company located in San Francisco, ChatGPT is an artificial intelligence chatbot that generates text that is eerily close to human language.  So far, ChatGPT can write emails, essays, and poetry, answer questions, and generate lines of code based on a prompt.  Amid the hype of the chatbot’s introduction to the internet, many tried to see how far they could push the system, attempting to get it to produce harmful or derogatory remarks.  It wasn’t long before the bot showed its true colors.

ChatGPT was built using a deep learning technique called the transformer architecture.  Essentially, the model is trained on vast amounts of text pulled from the internet and learns patterns in words and sentences that it then uses to generate answers.  The underlying GPT-3 model draws on an enormous catalog of web information – Wikipedia articles, books, news posts, and more – essentially anything on the internet.  This approach allows for speedy responses and produces the most human-like text from artificial intelligence yet.  The downside is that ChatGPT has also drawn from the dark side of the internet.  Without knowing “right from wrong” or “good from bad,” ChatGPT has made some racist and sexist remarks.  Artificial intelligence tools have a reputation for biases based on the data they are trained with, and ChatGPT is no different.
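For readers curious what “generating text from a prompt” looks like in practice, the short sketch below shows how a program might send a prompt to a GPT-3-style model and print the reply.  It is a minimal illustration only, assuming the legacy OpenAI Python SDK (pre-1.0), its Completion endpoint, and the “text-davinci-003” model; the API key shown is a placeholder, and this is not how ChatGPT itself is implemented.

# Minimal sketch: send a prompt to a GPT-3-style model and print the completion.
# Assumes the legacy OpenAI Python SDK (pre-1.0); the API key is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key would be required

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model
    prompt="Write a short, polite email declining a meeting invitation.",
    max_tokens=150,            # cap the length of the generated text
    temperature=0.7,           # higher values produce more varied wording
)

# The model returns one or more completions; print the first one.
print(response.choices[0].text.strip())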

Once ChatGPT went live, users quickly took to pushing the artificial intelligence to its limits to see whether biases were baked in.  ChatGPT does have safeguards built into the system to try to keep the bot from producing derogatory remarks.  This means that if the bot is directly asked to give a problematic response, it won’t.  The bot will reply that it is not capable of generating harmful or offensive content.  It didn’t take long for users to discover various workarounds, though, which could be used to get the bot to say something racist or sexist.  For example, one user asked ChatGPT to write a Python script to determine whether someone would be a good scientist based on race and gender, and the resulting script concluded that only a white male would make a good scientist.  Asked to create a rap song on the same theme, the bot produced a racist rap song.  Users found that as long as they phrased these requests in non-standard ways, the bot had no problem producing horribly biased responses.

To put guardrails on ChatGPT and avoid derogatory outputs, OpenAI contracted with Sama, an outsourcing firm whose workers in Kenya were tasked with classifying, identifying, and labeling disturbing images and content.  The task required these workers to examine child abuse images and stories about rape.  The workers were earning around $1.32 per hour, with a monthly bonus of $70 for the explicit nature of their work.  Oftentimes, the workers were required to sit with graphic videos and content for hours at a time, covering child pornography, bestiality, hate speech, physical violence, sexual violence, and more.  Sama claims to have offered personal welfare services to its employees during this time.  These services included personal spaces for counseling, meditation, prayer, and gaming, as well as full meal services, to help support the mental well-being of the employees who sifted through such traumatizing content.

OpenAI sought to fix the ethics and morals of its ChatGPT program in an astonishingly unethical and immoral manner.  Under its contracts with OpenAI, Sama, which labels itself an “ethical AI company,” claimed to have paid its workers between $1.32 and $2.00 per hour.  Sama itself was being paid $12.50 per hour by OpenAI, and Sama was also working on a contract with Microsoft expected to bring the company around $29 million.  Given those numbers, the real question is whether it is ethical for OpenAI to outsource this traumatizing work to Kenya, especially considering that the work can leave employees with PTSD for years to come.  Facebook has come under similar scrutiny for outsourcing its content moderation for little pay.

OpenAI released a statement addressing the mental health of these workers in Kenya, stating that it is Sama’s responsibility to ensure that its workers receive mental wellness benefits at its facilities.  However, a Time Magazine investigation found that wellness appointments were rarely available to most employees.  When workers did obtain wellness appointments, the sessions were often conducted in a group setting, and requests for one-on-one counseling were denied by Sama management.  Shortly thereafter, Sama cut its contract with OpenAI short after being asked to classify some C-4 (illegal) content.  OpenAI has refused to take accountability both for the C-4 content that arose and for the welfare of the employees to whom the work was outsourced.  While OpenAI does not appear to have broken any labor laws and no suits have been filed against it, how courts will decide artificial intelligence issues like these remains an open question.  The need for content moderation on ChatGPT and other internet tools remains unchanged, and the first step toward achieving that moderation is properly caring for the workers who perform this unenviable task.


Student Bio: Karlie Rubin is a second-year law student at Suffolk University Law School.  She is a staff writer on the Journal of High Technology Law.  Karlie received her bachelor’s degree in Counseling/Psychology from Lesley University.

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.
