By: Sofia Martinez-Guasch
It’s been over a year since Instagram declared a “War on Bullying,” and the social media platform is still fighting. As part of its continuing anti-bullying initiative, Instagram recently added two new features: one automatically hides negative comments, and the other discourages users from posting comments with toxic characteristics by sending an additional warning message.
Today, when the pandemic has pushed us to spend most of our time online, Instagram’s new tools can be especially helpful for kids and teens experiencing cyberbullying. However, with school already underway, Instagram needs battle tactics beyond these new tools to win the war on cyberbullying.
In the U.S. alone, 59% of teens have experienced some type of cyberbullying. Just last year, Instagram put some of its eggs in the anti-bullying basket, recognizing that many of its young users are, in fact, bullying one another on the platform. In 2019, Instagram announced one of its first anti-bullying tools, Restrict. In an effort to give victims of bullying more power over bullies, Restrict allows a user to subject a bully’s actions to their approval, without the bully’s knowledge, before those actions become visible to others.
Over the past year, Instagram has also striven to make users more aware of bullying on the platform and of how their own behavior can contribute to cyberbullying. In addition to creating anti-bullying “stickers” for users to add to their stories, Instagram developed a feature that notifies users before they post a comment that may be considered harassing. While Restrict provides users with a more positive experience on the platform, the comment warnings have been less successful: most users decline to rewrite their comments before posting. Nonetheless, given the long road toward eliminating bullying, Instagram recognizes the importance of taking steps, however big or small, in that direction.
Just last month, Instagram took a few more steps forward to prevent bullying on the platform. One new step is automatically hiding negative comments. Using AI, the platform can detect comments similar to those previously reported. Instead of removing these comments from a post outright, as it already does automatically for the comments it deems the worst, Instagram makes them harder for users to see at first glance. Hiding these comments from public view (they can still be revealed with a single click) may discourage other users from piling onto harmful comments. While some flagged comments may just be inside jokes between users, Instagram would rather those using its platform be safe than sorry.
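For readers curious about the mechanics, here is a minimal sketch of the idea in Python. The comment list, threshold, and function names are entirely hypothetical, and the string-matching shortcut stands in for the far more sophisticated models Instagram actually uses; the point is the design choice of hiding, rather than deleting, a comment that resembles ones users have already reported.

```python
# Toy illustration of "hide, don't delete": collapse a new comment when it
# closely resembles comments that users previously reported.
# All names, data, and thresholds here are hypothetical, not Instagram's.
from dataclasses import dataclass
from difflib import SequenceMatcher

REPORTED_COMMENTS = [
    "you are such a loser",
    "nobody likes you, just quit",
]

@dataclass
class Comment:
    text: str
    hidden: bool = False  # hidden comments stay on the post but need a tap to reveal

def looks_like_reported(text: str, threshold: float = 0.8) -> bool:
    """Return True if the text is highly similar to any previously reported comment."""
    return any(
        SequenceMatcher(None, text.lower(), reported).ratio() >= threshold
        for reported in REPORTED_COMMENTS
    )

def post_comment(text: str) -> Comment:
    comment = Comment(text=text)
    if looks_like_reported(text):
        # Hide rather than remove: the comment is collapsed by default,
        # but a reader can still reveal it with a single click.
        comment.hidden = True
    return comment

if __name__ == "__main__":
    print(post_comment("Nobody likes you, just quit already!"))  # hidden=True
    print(post_comment("Great photo, love the colors!"))         # hidden=False
```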
Another new step is confronting users with a message meant to make them reconsider potentially offensive actions by warning them of the consequences of posting such material. Last year, Instagram began notifying users when their comments could be considered harmful. Now, users who repeatedly try to post harmful comments will receive an additional message: after the initial notification, Instagram warns them that if their negative comments continue, they risk having their account removed.
While these anti-bullying tools may be helpful for users, the social media giant should continue its efforts by considering how the pandemic has affected cyberbullying. Since students transferred to online learning, cyberbullying has increased by 70%. If educators now teaching online are encouraged to instruct students about safe and respectful behavior on the internet, Instagram, too, should focus its anti-bullying initiatives on teaching new users how to be good online citizens.
As a platform built around social interaction, which students have increasingly relied on during the pandemic, Instagram should introduce educational messages as soon as users join the platform. Much of the cyberbullying that occurs on Instagram happens between people who actually know each other, but often through an anonymous profile the bully has created. Showing a set of educational messages on cyberbullying when someone creates an account may promote more positive experiences online. More importantly, these messages may help users realize that by creating such accounts, they are not being respectful online citizens. Especially now, when so much of our lives is experienced online, potential users may be inclined to ensure that their identity online, not just offline, is respectful toward others.
Another approach Instagram could take to further educate users about respectful online behavior is to provide suggestions while users write comments. For example, Google’s Gmail can predict what users want to say in their emails based on patterns of common language and can prompt users to accept that language simply by pressing “tab” in the middle of a sentence. Using the patterns of toxic language its AI has already detected, Instagram could prompt users drafting such messages to rewrite their comments so they do not come off as negative. While users may argue this disrupts their First Amendment rights, Instagram has stated that when it comes to cyberbullying, the platform would rather “err on the side of caution.”
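As a rough illustration of how such a nudge might work, the sketch below (again in Python, with a made-up word list and made-up function names, nothing like the machine-learning models a real platform would use) checks a draft comment against known toxic patterns and offers a gentler alternative before the comment is posted.

```python
# Toy sketch of the "suggest a rewrite" idea: when a draft comment contains
# language resembling known toxic patterns, surface a gentler alternative
# before the user hits "Post". Patterns, suggestions, and names are hypothetical.
TOXIC_PATTERNS = {
    "ugly": "try phrasing this differently",
    "loser": "consider leaving this word out",
    "shut up": "maybe say how you feel instead",
}

def suggest_rewrite(draft: str) -> str | None:
    """Return a nudge for the drafter if the comment looks potentially hurtful."""
    lowered = draft.lower()
    for pattern, suggestion in TOXIC_PATTERNS.items():
        if pattern in lowered:
            return (
                f"Your comment contains wording ('{pattern}') that others may find "
                f"hurtful. Suggestion: {suggestion}. Post anyway, or edit?"
            )
    return None  # nothing flagged; the comment posts without a prompt

if __name__ == "__main__":
    print(suggest_rewrite("You're such a loser, delete this."))
    print(suggest_rewrite("This made my day, thanks for sharing!"))
```

The user keeps the final say, much like accepting or ignoring a Gmail suggestion; the prompt only asks them to pause, which is the same “err on the side of caution” posture Instagram has described.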
The bullies who, pre-pandemic, mostly trolled on school property have now expanded their playground online, and Instagram’s anti-bullying initiatives need to address that shift. By considering the rise in online social interaction among kids and teens during the pandemic, Instagram should take more innovative approaches to preventing toxic behavior on its platform. With how much time we’re spending online, Instagram could further help victims of bullies, and bullies themselves, become survivors of today’s war on bullying.
Student Bio: Sofia Martinez-Guasch is a second-year law student at Suffolk University Law School and serves as a Staff Member on the Journal of High Technology Law. Sofia holds a Bachelor of Arts in English and a minor in Law & Public Policy from Northeastern University.
Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.