
Moderating User Generated Content

User-generated content is the very definition of a double-edged sword. On the one hand, it helps brands extend their reach and connect authentically with customers. And in some cases, user-generated content IS the product (think Twitter, Reddit, TikTok).

On the other hand, this content poses extraordinary reputational, legal and regulatory risk.

But it’s a risk most brands have to take.

Consumers trust and buy from brands more when they see user-generated content in their marketing, and the fast-growing UGC platform market is already worth over $3 billion.

And that’s before the metaverse, which will offer enormous opportunities and equally enormous risks as fans build content, and something entirely new (experiences), for their peers.

Allowing users to generate content is already inherently risky: the content can be illegal, fraudulent, harassing, abusive or otherwise unfit to share. But deciding exactly what constitutes inappropriate content, and applying those standards consistently, isn’t easy, especially at the enormous scale at which content is produced.

So how can you manage the inherent risks user-generated content poses while reaping the substantial benefits?

This article seeks to answer that question. More specifically, you’ll learn:

  • The benefits of UGC moderation
  • How to develop a well-defined content moderation policy
  • How to get your audience and moderators on the same side
  • The tools and techniques that can help your UGC moderation

The Benefits of UGC Moderation

Without moderation, your platforms are open to the ugliest content the internet has to offer, from hate speech to trolling, from graphic content to fraud. And that can erode trust in your brand and deter visitors.

Users exposed to offensive and illegal content are less likely to purchase, and most are less likely to return. Some offenses attract wider notice, generating significant press and social media coverage and drawing the ire of consumer advocacy groups or legislators.

Content you never posted or saw can taint your brand.

But creating a safe haven from bad actors can lift engagement, interaction and transactions to new heights, boosting brand value in the process.

In other words, moderation is the cornerstone of vibrant user communities.

So while there’s a strong risk-aversion driver behind UGC moderation, there’s also a revenue-building one.

How to Develop a Well-Defined Content Moderation Policy

Some moderation decisions are easy to make: no fraud, no spam, no pornography. But some are hard, and not all users will agree on where the line is.

That’s why brands need a content policy that informs user audiences about what is and is not allowed.

A clear and transparent content policy (also known as an acceptable use policy) acts as an essential benchmark for both users and employees making the calls about content. Without a predetermined set of rules and guidelines, you leave acceptability open to individual interpretation and risk accusations of bias.

If you’re developing a content policy, consider the following recommendations:

Be specific

In addition to outlining what constitutes acceptable content, give specific examples of content that would be deemed appropriate or inappropriate. You can even use videos or images to demonstrate thresholds.

Be aware of context

Content policies need to be contextually appropriate. A dating site will have different standards than a children’s learning app.

Outline actions

Make sure your policy outlines what will happen if a user breaches its terms. Will content simply be deleted or will the user be given a warning or prevented from using the platform?
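To make that concrete, here’s a minimal sketch of how a moderation team might encode an escalation ladder so responses stay consistent. The action names, thresholds and Python structure below are purely illustrative and not part of any specific policy.

```python
from enum import Enum

class Action(Enum):
    REMOVE_CONTENT = "remove_content"   # delete the offending post
    WARN_USER = "warn_user"             # notify the user and cite the policy
    SUSPEND_USER = "suspend_user"       # temporary loss of posting rights
    BAN_USER = "ban_user"               # permanent removal from the platform

# Hypothetical escalation ladder: the response depends on how many
# prior violations the user already has on record.
ESCALATION_LADDER = {
    0: [Action.REMOVE_CONTENT, Action.WARN_USER],
    1: [Action.REMOVE_CONTENT, Action.SUSPEND_USER],
    2: [Action.REMOVE_CONTENT, Action.BAN_USER],
}

def actions_for(prior_violations: int) -> list[Action]:
    """Return the enforcement actions for a user's latest violation."""
    capped = min(prior_violations, max(ESCALATION_LADDER))
    return ESCALATION_LADDER[capped]

print(actions_for(0))  # first offense: remove content and warn
print(actions_for(5))  # repeat offender: remove content and ban
```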

Work with your legal and compliance teams on the language of your content policy to make sure it’s legally binding and fits or can be adapted to meet local laws and regulations. Leave time and budget to get it professionally translated into all the languages your platform serves.

Keep your voice

A content policy is serious business, but that doesn’t mean it has to be tedious to read.

Ensuring the policy is accessible, and even having some fun with it, might encourage more engagement and awareness.

How to Get Your Audience and Moderators on the Same Side

The vast majority of users welcome fair content moderation. A Gallup poll revealed, for example, that 85% of Americans support the removal of false or misleading health information from social media.

The same study showed that 81% also supported the removal of deliberately misleading material on political issues.

But for some, content moderation and censorship are two sides of the same coin. People can feel aggrieved and personally slighted when content is flagged for moderation or outright rejected. Empathizing with your audience and showing understanding are important. The following recommendations might help:

  • Establish a consistent narrative explaining the purpose of content moderation and its importance
  • Make sure your customer support teams are familiar with your content policy and know how to point users to relevant parts
  • When appropriate, customer support should be able to suggest how the user could adapt or edit their content to make it acceptable for the site, for example, showing which specific words or images need to be changed
  • Ensure customer support is clear about what will happen next for the user, particularly if they violate the content policy again

Getting content moderation right is a tricky needle to thread. But if you shape your responses with courtesy, empathy and precision, you maximize your chances of keeping your audience on your side.

The Tools and Techniques That Can Help Your UGC Moderation

There are a wide variety of tools and techniques that can be used to assist and complement UGC moderation. Some of the more common are set out below.

Know Your Customer (KYC)

UGC is frequently used as an amplifier of outright misleading opinions (for example, medical ‘experts’ claiming miracle cures or denying the existence of certain diseases). So it can be important for UGC moderators to determine whether the author of a piece of content is who they claim to be.

KYC procedures should play an important role in validating individual user identity information.

They can be incorporated into the content moderation process at different stages. Depending on your business, you may want to run KYC checks on some or all users registering to submit content, then refer content to moderators if a red flag is raised. Alternatively, moderators might request a KYC check only when they suspect the author isn’t authentic, which also avoids deterring users from signing up with heavy upfront KYC requirements.
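Here’s a minimal sketch of that second, risk-based approach, where a KYC check is requested only when an author looks suspicious. The looks_suspicious and run_kyc_check helpers are hypothetical stand-ins for your own risk signals and identity-verification provider.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    user_id: str
    text: str
    author_claims_credentials: bool  # e.g. "as a doctor..." style claims

def looks_suspicious(sub: Submission) -> bool:
    """Hypothetical risk signal: content claiming professional authority
    gets an identity check before it is published."""
    return sub.author_claims_credentials

def run_kyc_check(user_id: str) -> bool:
    """Stand-in for a call to your KYC / identity-verification provider.
    Replace with the provider's real API."""
    return False  # assume verification failed in this sketch

def route(sub: Submission) -> str:
    if not looks_suspicious(sub):
        return "publish"
    if run_kyc_check(sub.user_id):
        return "publish"               # identity verified
    return "hold_for_human_review"     # red flag: send to a moderator

print(route(Submission("u1", "Great product!", author_claims_credentials=False)))
print(route(Submission("u2", "As a doctor, I can confirm this cures X.", author_claims_credentials=True)))
```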

To find out more, check out our blog post asking whether a proper KYC strategy could have prevented ‘The Tinder Swindler.’

Natural Language Processing (NLP)

Artificial intelligence is increasingly being used to automate content moderation. AI-powered tools that use Natural Language Processing (NLP) can analyze written or spoken human language, and even assess sentiment, which can be useful for identifying hate speech or online abuse.

While NLP is far from foolproof, it can be used to block content that clearly violates content policies and to flag content the AI isn’t sure about for human review.
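To make the block-versus-flag split concrete, here’s a minimal sketch of that thresholding logic. The score_toxicity stub and the threshold values are hypothetical; in practice the score would come from your NLP model of choice.

```python
def score_toxicity(text: str) -> float:
    """Stand-in for an NLP model that returns a 0-1 toxicity score.
    Replace with your real classifier."""
    return 0.97 if "hateful slur" in text.lower() else 0.40

BLOCK_THRESHOLD = 0.95   # clearly violates the content policy
REVIEW_THRESHOLD = 0.60  # the model isn't sure: route to a human

def moderate(text: str) -> str:
    score = score_toxicity(text)
    if score >= BLOCK_THRESHOLD:
        return "block"
    if score >= REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

print(moderate("This contains a hateful slur."))  # block
print(moderate("I didn't love this product."))    # allow
```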

Image recognition

Computer vision (an AI system enabling computers to derive meaningful information from digital images) can classify visual content.

This clever bit of tech can also be harnessed to predict the likely emotions the images/video will evoke. It can even indicate the probability that they contain certain categories of offensive or inappropriate content.
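As a rough sketch of how those per-category probabilities might be consumed, the example below applies a per-category threshold and routes borderline cases to a human. The classify_image stub and the numbers are hypothetical.

```python
# Hypothetical per-category probabilities from a computer vision model.
def classify_image(image_path: str) -> dict[str, float]:
    """Stand-in for a real image-classification call."""
    return {"violence": 0.02, "adult": 0.71, "self_harm": 0.01}

# Different categories can carry different risk tolerances.
THRESHOLDS = {"violence": 0.80, "adult": 0.85, "self_harm": 0.50}
REVIEW_MARGIN = 0.20  # scores within this margin of a threshold go to a human

def moderate_image(image_path: str) -> str:
    scores = classify_image(image_path)
    for category, score in scores.items():
        limit = THRESHOLDS[category]
        if score >= limit:
            return f"block ({category})"
        if score >= limit - REVIEW_MARGIN:
            return f"human_review ({category})"
    return "allow"

print(moderate_image("holiday_photo.jpg"))  # adult score is borderline: human_review
```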

These AI applications are learning and improving all the time. But they’re not perfect. For example, depending on the amount of skin on display, an AI might not be able to distinguish between a proud parent showing off a child swimming on holiday and child pornography.

Unwarranted blocking or penalization of content is frequently the result—much to the very vocal annoyance of users.

Don’t Panic, Help Is Available

Content moderation is a high-stakes game. Properly channeled and controlled, user-generated content can generate huge business upside. Yet poorly moderated content carries just as significant a downside.

There are practical steps you can take to get on the right side of this equation, but avoiding the pitfalls takes significant expertise and resources.

Many businesses opt to outsource their content moderation and management functions altogether, some simply because it’s cheaper to hand it to a BPO that can tap less expensive labor markets. But the humans-in-seats model isn’t always the highest-quality or best-value option.

Concentrix does things differently. We focus on building exactly the right content moderation team to match the nuances of a specific business, and on process engineering that pairs them with ever-smarter AI. Over time, we optimize the AI models using feedback from moderators, seeking to turn ever more complex decisions over to AI.

Most companies want to break the cycle of ever-scaling resources, especially as Web 3.0 is about to dramatically increase the complexity and volume of content.

Discover how we can help you moderate user-generated content about your brand.

