By: Justyn Trott
Ongoing consumer class actions have arisen against Google LLC and Alphabet, Inc. over the Google Assistant, virtual assistant software developed by the companies. The Google Assistant is a software program that allows users to ask questions and give instructions using their voice. The software comes preloaded on a range of devices, including the Google Home, Google Pixel smartphones, and third-party smartphones that run the Android operating system. Because the Google Assistant is voice-activated, it constantly listens for “hotwords,” including the commands “Okay Google” and “Hey Google.” When it detects one of these “hotwords,” the Google Assistant switches to an active listening mode, in which it records and analyzes the incoming audio so that the software can carry out the user’s command. Active listening mode can also be activated manually by a button on the device.
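The hotword mechanism described above can be illustrated with a simplified sketch. This is not Google’s actual implementation — real detectors run on acoustic features, and the phrases, threshold, and function names here are hypothetical — but it shows the basic loop: passively scan input for a hotword, then briefly switch to active listening and record what follows. Using a fuzzy match also simulates how similar-sounding speech can trigger the “false accepts” discussed below.

```python
# Illustrative sketch only (assumed logic, not Google's implementation):
# a passive loop that switches to "active listening" on a hotword match.
# String similarity stands in for acoustic matching, so near-misses like
# "hey goggle" can trigger a false accept, much as similar-sounding
# speech can in the real system.
import difflib

HOTWORDS = ["okay google", "hey google"]

def is_hotword(phrase: str, threshold: float = 0.85) -> bool:
    """Return True if the phrase is close enough to a known hotword."""
    phrase = phrase.lower().strip()
    return any(
        difflib.SequenceMatcher(None, phrase, hw).ratio() >= threshold
        for hw in HOTWORDS
    )

def process_stream(phrases):
    """Record only what is spoken after a (possibly false) hotword hit."""
    recordings, active = [], False
    for phrase in phrases:
        if active:
            recordings.append(phrase)  # active mode: this audio is kept
            active = False             # then return to passive listening
        elif is_hotword(phrase):
            active = True              # hotword detected: start listening
    return recordings
```

In this sketch, an exact “okay google” activates recording of the next phrase, while the near-miss “hey goggle” scores high enough on similarity to be a false accept — the kind of misfire at the heart of the plaintiffs’ complaint.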
Google Assistant was unveiled at Google’s developer conference on May 18, 2016, alongside the Google Home smart speaker and the new messaging app Allo. Google CEO Sundar Pichai explained that the Assistant was designed to be conversational and two-way, an “ambient experience that extends across devices.” Users primarily interact with the Google Assistant through their natural voice, though keyboard input is also supported. The Assistant can search the internet, schedule events and alarms, adjust hardware settings on the user’s device, and display information from the user’s Google account. Google has also announced that the Assistant will be able to identify objects and gather visual information through the device’s camera, support purchasing products and sending money, and identify songs.
Consumers claim that Google is keeping and using the audio recordings for purposes beyond carrying out users’ commands, including targeting personalized advertising to users and improving the voice recognition capabilities of the Google Assistant. Google has acknowledged that it analyzes the audio recordings but maintains that only 0.2 percent of all recordings are actually analyzed. Additionally, the Google Assistant may sometimes be triggered into active listening mode when it misperceives a user’s words as “hotwords,” an error known as a “false accept.” An investigation by VRT NWS analyzing these errors found that a majority of the recorded audio resulted from the Google Assistant’s “false accepts.”
The most recent suit against Google LLC responds to its use of audio recordings captured in “false accept” situations. From the plaintiffs’ perspective, this qualifies as an invasion of privacy, particularly because most of the recorded conversations take place inside the home. The plaintiffs also contend that the practice violates the Privacy Policy that Google presents to users before they use the software. A further troubling concern is that these recordings can include those of children, who are not considered able to consent to such policies.
The plaintiffs, all purchasers of Google Assistant Devices, have sued Google LLC and its parent company Alphabet Inc. under multiple state and federal laws, asserting 12 claims based on the “false accept” situations. They allege that their conversations were obtained by Google and other third parties without proper consent or authorization, beginning as early as 2016. Google has moved to dismiss the claims under Federal Rule of Civil Procedure 12(b)(6) for failure to state a claim upon which relief can be granted.
The plaintiffs’ first theory of injury was that they overpaid for their respective devices and would have paid less, or nothing at all, had they known Google was intercepting the recordings from those devices. The court found this claim plausible because the plaintiffs alleged that they actually purchased a Google Enabled Device, either from Google directly or from a third-party agent of Google. The second theory of injury was that Google “wrongfully monetized and profited from plaintiffs’ personal content and information.” This theory failed because the plaintiffs did not show that they themselves had any vested interest in money derived from the “personal content and information” they claim Google used.
Ultimately, the court granted dismissal of all 12 of the plaintiffs’ claims. The plaintiffs failed to successfully plead that their own conversations were intercepted and that those conversations were subject to a reasonable expectation of privacy. Furthermore, the plaintiffs failed to show that the conversations and content taken from them were monetized by Google. The plaintiffs therefore did not show the court adequate injury, or “harm to the alleged victim,” for which they could recover.
Google may have escaped this suit unharmed, but given the way technology is progressing, this may be the first of many privacy issues to arise. Individuals should not have to give up their sense of privacy just to access new software advancements such as the Google Assistant, yet there seems to always be a trade-off when it comes to personal information in the digital world we live in. This should come as a wake-up call for tech companies to change their security and privacy practices, ideally allowing users to minimize the amount of information they share while still enjoying the convenience of hands-free technology. However, given the lack of consumer privacy laws governing voice data, there may be no wake-up call at all.
Student Bio: Justyn Trott is currently a second-year law student at Suffolk University Law School, focusing in Intellectual Property Law. He is a staffer on the Journal of High Technology Law. Prior to law school, Justyn received a Bachelor of Science in Biomedical Engineering from the University of Hartford.
Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.