Transform magazine
May 08, 2024


AI and online moderation: why context is everything


Matthieu Boutard, president and co-founder of Bodyguard.ai, considers how brands can best navigate the tricky waters of online hate speech and content moderation.

Elon Musk’s recent acquisition of Twitter has once again ignited the content moderation debate.

The business magnate and investor was quick to introduce several dramatic changes upon taking over the global social media platform, leading to concerns over the potential relaxing of content moderation efforts on the platform.  

Musk himself stated that Twitter’s commitment to content moderation will remain “absolutely unchanged”. Yet with bans lifted on many controversial accounts, scepticism remains, and several of the platform’s major partners, including United Airlines, Carlsberg and Volkswagen, have paused their advertising while they wait to see how the dust settles.

So, what exactly is the contention over content moderation?

On one side of the debate, there is the view that moderation is a threat to free speech. On the other, meanwhile, many feel that moderation is needed to combat online hate.

Of course, this is not a straightforward yes or no issue. Any brand considering content moderation needs to strike a balance that satisfies as many parties as possible while still driving the best outcomes.

Online hate is a growing problem that can lead to serious consequences. Every hate comment represents an attack that can truly impact the real people operating on the frontline of social media, customer services or technical support.

Burn-out, demoralisation, desensitisation, depression, PTSD and other mental health problems may creep in when individuals are subjected to such abuse. And studies have shown that many simply won’t put up with hateful content: 40% of people state that they would leave a platform on their first encounter with toxic content.

The need for content moderation is therefore clear. Indeed, brands and business leaders need to be proactive in their approach to making the internet a safer, more inclusive place for all.

However, while toxicity cannot be allowed to pollute communications channels, there also needs to be room for differing points of view, criticism and honest feedback of all forms.

Customers are increasingly choosing to transact and interact with brands online. For brands to harness this, encouraging open dialogue and building trust in these interactions, they need to ensure that customers feel heard. That means listening to all feedback, be it positive or negative.

Brands must tread carefully

Striking this balance is an incredibly challenging task.

First, successful moderation is easier said than done, with the lifecycle of online comments presenting several challenges. On Twitter, for example, the average tweet has a lifespan of just 18 minutes.

Unless there is instantaneous moderation in place, a toxic comment can do instant and irreversible damage. However, a trained human moderator takes around 10 seconds to analyse and moderate a single comment, which works out at roughly six comments a minute. When confronted with hundreds or thousands of comments every minute, it’s simply impossible for brands to keep up.

Here, AI is helping the situation, removing negative content at a much faster rate thanks to automation. That said, relying too heavily on AI engines comes with several limitations.

Currently, machine learning-based content moderation algorithms on social networks have an error rate of between 20% and 40%. As a result, typically only 62.5% of hateful content can be successfully removed by AI.

At the same time, firms exploring the use of AI-based solutions need to consider the reputational dangers of over-censorship, and the importance of context to decision-making.

Algorithms typically cannot fully grasp important details such as context, sentiment, punctuation and colloquialisms. When it comes to comments between friends, for example, it can be incredibly hard for engines to detect the specific nuance or context.
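To illustrate, consider a deliberately naive keyword-based filter (the word list and comments below are hypothetical, not any vendor’s actual system). Because it sees words rather than meaning, it flags affectionate banter while letting genuinely hostile phrasing sail through:

```python
# A minimal sketch of a context-blind keyword filter (illustrative word list).
# It flags friendly banter yet misses hostility that uses no "bad" words,
# which is exactly the nuance problem described above.

TOXIC_KEYWORDS = {"idiot", "stupid", "trash"}  # hypothetical, illustrative only

def naive_moderate(comment: str) -> bool:
    """Return True if the comment should be removed."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & TOXIC_KEYWORDS)

# Banter between friends is flagged, despite being harmless in context.
print(naive_moderate("haha you absolute idiot, that was brilliant"))  # True

# Genuine hostility with no keyword match goes straight through.
print(naive_moderate("people like you don't deserve to be online"))   # False
```

Real moderation engines are far more sophisticated than this, but the same failure mode persists in subtler forms whenever context is out of reach.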

For this reason, organisations pursuing content moderation must proceed with caution. Social media or other customer engagement platforms can be highly useful in gaining honest customer feedback and holding brands to account. Therefore, those that lean too far into moderation and remove all negative comments may face a considerable backlash.

Of the potential UK brand customers we spoke to, 95% mentioned maintaining freedom of speech as a concern. So while users may migrate to other platforms if they feel there is a toxic atmosphere, many will do the same if they feel the dial is pointing too far in the other direction.

The importance of a blended approach

For this reason, a blended approach between AI and human moderation will be the most effective way of moderating content, providing a happy medium for brand, moderator and community.

Yes, artificial intelligence is proving useful in helping brands protect online channels from toxicity and hate at speed, while also mitigating many of the issues, such as fatigue and desensitisation, that challenge human moderators. Yet, at the same time, we are nowhere near ready to replace human moderators.
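In practice, a blended pipeline often takes a shape like the sketch below: the AI acts alone only when its toxicity score is clear-cut, and everything ambiguous is queued for a human reviewer. This is an illustrative sketch with assumed thresholds, not a description of any particular vendor’s pipeline:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModerationQueue:
    """Human-in-the-loop routing: the AI acts alone only on clear-cut cases,
    and everything ambiguous goes to a human reviewer.
    Thresholds are illustrative, not tuned production values."""
    auto_remove_at: float = 0.95  # near-certain toxic: remove automatically
    auto_allow_at: float = 0.10   # near-certain safe: publish without review
    human_review: List[str] = field(default_factory=list)

    def route(self, comment: str, toxicity_score: float) -> str:
        if toxicity_score >= self.auto_remove_at:
            return "removed"
        if toxicity_score <= self.auto_allow_at:
            return "published"
        # The ambiguous middle ground (irony, banter, context) goes to people.
        self.human_review.append(comment)
        return "queued for human review"

queue = ModerationQueue()
print(queue.route("you absolute menace, love it", 0.55))  # queued for human review
print(queue.route("lovely update, thanks!", 0.02))        # published
```

The design choice here is that machines handle volume and speed at the extremes, while people retain judgement over everything in between.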

Specialists still need to play a crucial role as arbiters of what is ultimately acceptable, and brands must continue to support the moderators who are responsible for taking key decisions on the appropriateness of comments.

It’s a difficult and fine line to draw. Indeed, the subtleties of language, sentiment and context can be the difference between censoring free speech and protecting communities.

As long as AI engines struggle to determine these fine details, a blended approach will remain vital.