How to Build Your Own Content Moderation Team
Your online platform will pay a high price for hosting problematic user-generated content—even inadvertently.
Customers won’t want to visit it, advertisers won’t want to appear on it, and your brand will suffer just by being associated with it.
To keep it clean, you’ll probably need content moderators working for you.
Many organizations choose to outsource content moderation, but for others it makes sense to build a team in-house.
This page draws on our long experience of providing content moderation services for our clients to give you an overview of everything you need to know and do to build and develop an effective in-house content moderation team of your own.
Choosing Your Approach
The approach you take to moderating your user-generated content has a big influence on the size and structure of the content moderation team you need to build.
There are a variety of options available to you, and you’ll probably wind up using a combination of several of them.
Pre-moderation or post-moderation?
Your human moderators can screen user-submitted content either before or after it’s published to your platform.
Pre-moderation works well for platforms where users don’t mind waiting for their content to be published—on classified advertisement sites like Craigslist and Gumtree, for example.
For platforms where speed of interaction is all-important, such as social media sites, pre-moderation can be a bottleneck. Post-moderation avoids that delay and is generally more cost-effective than screening every item before it goes live.
Manual, automated, or hybrid?
Manual content moderation is carried out by largely unassisted human moderators—and for most platforms, given the volume of content they’re required to screen, that is simply unscalable.
At the same time, automated content moderation tools, while capable of screening vast amounts of content at speed, aren’t always able to accurately identify what is and isn’t acceptable.
That’s why most platforms opt for a hybrid model: automated content moderation tools screen everything, and the more ambiguous content is relayed to human content moderators, triaged into different moderation queues (e.g., sexually inappropriate or fraudulent content). The decisions those humans make also train the AI to adapt to slang and evolving language, among other nuances.
Centralized or decentralized?
Centralized content moderation is performed by a dedicated content moderation team, while decentralized content moderation is generally performed by platform users.
While it’s cheaper, handing over the majority of content moderation to your users is highly risky. Turning your users into your first line of defense means exposing some of them to inappropriate content.
They also don’t have the training to consistently apply your standards. Finally, bad actors can game the system: for instance, shutting down voices they disagree with.
However, you’ll probably want a degree of decentralized moderation for your platform. That’s because some problematic content is likely to slip by both your automated and human moderators at some point—and you’ll want users to be able to bring that errant content to your attention.
The type of platform you run and the variety of content will dictate what approaches work best for you. Generally, we recommend using a centralized, hybrid approach that combines automated pre-moderation with human moderation for content flagged for review by AI and users.
How Content Moderation Teams Typically Work
Under the typical centralized, hybrid model described above, AI/Machine Learning (ML) algorithms handle the lion’s share of moderation without human intervention: often 80-90% of cases.
The user-generated content that these algorithms can’t be certain about is sent into various queues for human content moderators to review.
This content will be filtered by language and location so the moderators can understand it and its cultural context.
It will then be categorized according to the sort of potential risk the AI has detected—some examples being fraud, nudity, hate speech, and misinformation.
The human moderators then decide if the AI has correctly detected a risk or not. If it has, the moderator may delete, censor or block the unacceptable content; if it hasn’t, the moderator will allow the content to be (or remain) published.
Whatever decision the human moderator makes will be fed back into the algorithm to improve its future accuracy.
Platform users should also be able to flag what they deem to be unacceptable content. The tickets they raise should go into a high-priority queue to be dealt with by human moderators.
The difference with this queue is that moderators will be on the lookout for platform users who are purposefully misleading or trolling them. They will usually be authorized to penalize these users with timeouts or IP bans.
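To make that flow concrete, here’s a minimal sketch in Python of what the routing layer might look like. The thresholds, category names, and the classify stub are illustrative assumptions, not a reference implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative confidence thresholds (assumptions): content the model is sure
# about is handled automatically; everything in between goes to a human queue.
AUTO_REMOVE_THRESHOLD = 0.95
AUTO_APPROVE_THRESHOLD = 0.05

@dataclass
class Item:
    item_id: str
    text: str
    language: str
    region: str
    user_reported: bool = False

@dataclass
class Decision:
    item_id: str
    action: str                    # "approve", "remove", or "queue"
    queue: Optional[str] = None

def classify(item: Item) -> dict:
    """Hypothetical stand-in for an ML model scoring content per risk category."""
    return {"fraud": 0.10, "nudity": 0.02, "hate_speech": 0.60, "misinformation": 0.05}

def route(item: Item) -> Decision:
    # User reports bypass the model and go straight to a high-priority human queue.
    if item.user_reported:
        return Decision(item.item_id, "queue", f"user-reports/{item.language}")

    scores = classify(item)
    category, score = max(scores.items(), key=lambda kv: kv[1])

    if score >= AUTO_REMOVE_THRESHOLD:
        return Decision(item.item_id, "remove")   # confident violation
    if score <= AUTO_APPROVE_THRESHOLD:
        return Decision(item.item_id, "approve")  # confident the content is clean
    # Ambiguous: queue by predicted category plus language/region, so a moderator
    # with the right cultural context reviews it.
    return Decision(item.item_id, "queue", f"{category}/{item.language}-{item.region}")

def record_human_decision(decision: Decision, verdict: str) -> None:
    # In practice the moderator's verdict would be written back as a training
    # label so the model improves its future accuracy.
    print(f"label for retraining: {decision.item_id} -> {verdict}")
```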
Building Your Team: Size, Structure, Roles and Responsibilities
It’s impossible to generalize about how big a content moderation team needs to be, as this depends largely on:
- Volume of content
- Content type
- Content length
- How quickly the content needs to be published after being posted
The average handling time of each case will be affected by content format—though even within content formats there can be wide variations in length.
For example, videos on TikTok can be 10 seconds long (and watched by moderators at an accelerated speed). But they can also now be 10 minutes long—or much longer since TikTok now allows live streams. Similarly, comments on Facebook can be one sentence or five paragraphs.
Your moderation approach will also affect the size of your team and the roles which you’ll need to fill.
To pre-moderate user advertisements, you might only need a team of 10-15 human moderators; to post-moderate queues of AI-flagged social media comments, you might need hundreds.
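As a rough illustration of the arithmetic involved, here’s a back-of-envelope headcount estimate. Every input value below is a hypothetical assumption you’d replace with your own volumes and handling times.

```python
# Back-of-envelope moderator headcount estimate. All inputs are hypothetical
# assumptions; substitute your own volumes and handling times.
items_for_human_review_per_day = 50_000   # items left after AI triage
avg_handle_time_sec = 30                  # varies widely by content format
productive_hours_per_shift = 6            # excludes breaks, training, wellbeing time
shrinkage = 0.30                          # sickness, vacation, and attrition buffer

work_seconds_per_day = items_for_human_review_per_day * avg_handle_time_sec
moderator_shifts_per_day = work_seconds_per_day / (productive_hours_per_shift * 3600)
headcount = moderator_shifts_per_day / (1 - shrinkage)   # one shift per moderator per day

print(f"~{moderator_shifts_per_day:.0f} moderator-shifts/day, ~{headcount:.0f} moderators on payroll")
```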
This all said, there are certainly roles that recur across many types of content moderation team, and you’re likely to need to fill some or all of them:
Content moderators
Even if you’re relying largely on AI to screen content, you’re going to need to carefully select, screen, train, and support human content moderators to review the more ambiguous content the AI can’t decide on, as well as appealed decisions.
If you’re operating with a hybrid model, you’ll need human content moderators to work through queues of items flagged by AI and platform users.
Each queue will contain content that the AI has predicted as belonging to a specific category of unacceptable content (e.g. possible nudity, possible fraud, possible hate speech).
In our experience, you could be looking at more than 10 (possibly many more) content category queues for your website or app. Each queue will require its own team, carefully selected to ensure they have the right skills and resilience to work on that particular category, working both day and night shifts to moderate it.
Because local context is hugely important when it comes to assessing how acceptable a piece of content is (with some words being offensive in the UK and not the US, for example), you’ll need moderators covering all the languages and cultures your platform operates in.
Quality assurance analyst
QA analysts provide a sort of double moderation service, helping ensure content moderators are consistently making the right calls.
They pick random cases to ensure quality—you could say they moderate the moderators.
The gold standard is having QA analysts review around 5% of the decisions made by experienced content moderators, and double that for new moderators. This typically means employing one QA analyst per 30-50 moderators.
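To see what those percentages mean in practice, here’s a quick calculation. The team mix and per-moderator throughput figures are purely illustrative assumptions.

```python
# Illustrative QA workload implied by the sampling rates above; the team mix
# and per-moderator throughput are assumptions.
experienced_mods, new_mods = 80, 20
decisions_per_mod_per_day = 600            # assumed average throughput

qa_reviews_per_day = (experienced_mods * decisions_per_mod_per_day * 0.05   # 5% sample
                      + new_mods * decisions_per_mod_per_day * 0.10)        # 10% sample
qa_analysts_needed = (experienced_mods + new_mods) / 40                     # 1 per 30-50 mods

print(f"{qa_reviews_per_day:.0f} QA reviews/day, ~{qa_analysts_needed:.1f} QA analysts")
```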
Our QA analysts work closely with team leaders and the training team to ensure that any issues can be addressed and there are no misunderstandings when moderators are interpreting the client’s moderation policy.
Team leaders
These are needed to oversee, guide, and protect content moderators. It’s vital that every moderator has a close relationship with their team leader, to stop performance issues and psychological stressors from flying under the radar.
For this reason, we recommend that each team leader oversees no more than 15 content moderators.
Additional subject matter experts can also walk the office floor checking on moderators, providing guidance and support when team leaders are unavailable.
Each floor walker should oversee around 15 moderators.
Psychologist
Looking after your moderators’ psychological wellbeing is vital to sustain performance and mitigate attrition. Psychologists are needed both during the initial hiring and onboarding process (to screen for trigger areas) and to provide ongoing mental health support to moderators. This support is so critical that it should be factored into all of your moderators’ workdays.
Operations managers
These people manage the day-to-day operations of the content moderation team. We would recommend you hire one operations manager for every 100 moderators.
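Putting the ratios from this section together, here’s a sketch of the support staffing they imply for an assumed team of 150 moderators (the team size itself is just an example input).

```python
import math

# Support staffing implied by the ratios above, for an assumed 150-moderator team.
moderators = 150

team_leaders  = math.ceil(moderators / 15)    # no more than 15 moderators per team leader
floor_walkers = math.ceil(moderators / 15)    # subject matter experts on the floor
qa_analysts   = math.ceil(moderators / 40)    # roughly 1 per 30-50 moderators
ops_managers  = math.ceil(moderators / 100)   # 1 per 100 moderators

print(f"{team_leaders} team leaders, {floor_walkers} floor walkers, "
      f"{qa_analysts} QA analysts, {ops_managers} operations managers")
```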
Whatever the size and structure of your content moderation team, one thing is clear: you’re going to need content moderators. Probably a lot of content moderators.
So let’s take a closer look at some of the things you need to bear in mind while hiring, onboarding, training and developing those moderators.
Hiring and Onboarding Content Moderators
Not everybody is cut out to be a content moderator. As the frontline workers for platforms, they’ll be regularly exposed to potentially disturbing material.
So when you’re recruiting for your content moderation team, you need to remember that it’s not just about you looking for the right people to do the job—it’s about being the right company to do the job for. And that starts with the way you hire and onboard new moderators.
Your moderators will need to be smart and culturally aware enough to make quick decisions about often highly nuanced content.
The best way to guarantee cultural awareness is to hire locally—if you’re moderating in the US market, for example, it’s best to have a content moderation team in the US. If that’s not possible, look for similar cultures.
They’ll also need to be psychologically screened. This will prevent your organization from hiring people who are unsuitable for the role, full stop. It will also help you assign those who are suitable to the right content queues.
Psychologists should interview new hires during onboarding to measure their level of resilience, and to identify what sort of content they find particularly triggering and should be shielded from overexposure to.
By the same token, those with a high level of resilience to more challenging content can be assigned to the relevant content queues.
Finally, anyone you hire should be aware of the mental health risks of the job.
Training and Development
Your moderators should be trained in as wide a range of processes and content formats as possible, to help cover absences for sickness or vacation.
That means exposing them to different queues and more complex, nuanced cases (within their capabilities and resilience levels, of course). They should also be schooled in what sort of mistakes the algorithms flagging content for their review can be expected to make.
Think about your long-term team strategy, too. Promoting your pool of content moderators into team leader and QA analyst positions gives them a career path and helps you as they’ll have good knowledge of your content moderation processes and guidelines.
Those who ascend to team leader positions will need good soft skills, since developing strong relationships with the moderators they oversee is key to the role.
They’ll also need to work effectively with their team’s subject matter experts and psychologists, other moderation teams, and representatives of the wider business—particularly engineers and data scientists if your algorithms are developed in-house.
Safeguarding the Wellbeing of Your Moderators
Much of what your moderators will have to review on a daily basis will be upsetting, and potentially traumatizing. If they’re not properly cared for, they could quickly burn out, leave, or even develop PTSD.
Letting your moderation team become stressed out and traumatized isn’t humane or good business.
The happier moderators are, the more efficient and productive they’ll be. So how do you make them happy? You’ll need to build a supportive working environment and culture that encourages them to find meaning in their job as the first responders keeping the internet safe for users. Take a compassionate approach to management and consider using technology to monitor your moderators’ mental wellbeing.
Measuring Your Team’s Success
It’s critical to measure your content moderation team’s performance, both to optimize that performance and to secure buy-in from the rest of the business.
Here are some metrics that you can track to understand how well both your human moderators and automated content moderation systems are performing:
- Proportion of your platform users exposed to violations
- Percentage of content flagged as inappropriate
- Your human content moderators’ response times (between activity/content being flagged by automated systems and the ticket being closed)
- Tickets responded to by individual moderators per hour
- The impact of your team’s activities on customer satisfaction, tracked via metrics like CSAT and NPS
- How accurately your automated content moderation systems are categorizing content, and how this improves over time as these systems are fed more data
- Precision and recall of your AI model (precision: the share of flagged content that genuinely violates your policies; recall: the share of genuine violations the model actually catches)
- F1 score (the harmonic mean of precision and recall, a single measure of your model’s predictive performance; see the quick example after this list)
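Precision, recall, and F1 are easy to compute once you audit a sample of model decisions. Here’s a minimal example with made-up counts, purely to illustrate the formulas.

```python
# Precision, recall, and F1 from made-up confusion-matrix counts; the numbers
# are illustrative only.
true_positives  = 900    # violations the model correctly flagged
false_positives = 100    # clean content the model wrongly flagged
false_negatives = 300    # violations the model missed

precision = true_positives / (true_positives + false_positives)   # 0.90
recall    = true_positives / (true_positives + false_negatives)   # 0.75
f1 = 2 * precision * recall / (precision + recall)                # ~0.82

print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```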
From a wellbeing perspective, you should also keep an eye on how satisfied your human moderators are with their work and the support you offer them. Arrange it so they’re giving you regular feedback—and keep a watchful eye on your retention rate.
If you’re effectively supporting wellbeing, your moderators will be happy and proud to work for you—and will perform better work for your community of users.
Build the Content Moderation Team Your Platform’s Users Deserve
There’s no doubt about it: Building your own content moderation team takes a ton of hard thought, hard work, and financial investment.
If you think that you could use some outside help with this stuff—we get it. Concentrix provides outsourced content moderation services to over 200 clients worldwide in over 25 languages.
Our hybrid solution combines expert human content moderators with rigorously tested in-house AI and technology and real-time monitoring—and it can all be tailored to your organization’s specific needs.
If you want to talk more about what we can do for your platform and business by working together, hit the button below to get in touch.