Are you tired of feeling like an alien when tech folks toss around terms like pre-moderation and natural language processing? Fear not, for we’ve got the ultimate guide to content moderation lingo. From deciphering UGC to navigating the murky waters of hate speech – and just what in the world is “time to site” anyway?
So, sit back, relax, and prepare to become a content moderation master with our comprehensive glossary. And no, there won’t be a test at the end of this.
Keep this blog post bookmarked so you can casually glance at it in meetings and nod along – and ask questions that will make you look savvy 😉
Types of content to moderate
User-generated content comes in countless formats and mediums. Let’s explore some of the most popular types of content, their unique features, and what they actually entail.
Text
Text-based user-generated content is any written content created by a user and shared with others. This type of content can include blog posts, social media posts, tweets, comments in discussion forums, and even longer works such as essays or stories.
Examples of user-generated text
- Entries on forums and communities.
- User profile descriptions.
- Reviews of a product or service.
- Item descriptions on marketplaces.
- Comments from users on another piece of content.
Images
Images are like the salt and pepper of content – they add flavor and spice to every platform. They’re the hilarious memes that keep you entertained for hours, the pics of food on Instagram, and the awkward selfies you secretly love taking for dating apps. Without them, apps and websites would be as exciting as a dry piece of toast.
Examples of images we moderate
- User-profile pictures and videos.
- Product pictures.
- Visuals for sharing-economy services, such as photos of rentals or rides.
- Endorsements for products and services.
- Reviews.
Videos
User-generated videos can be both entertaining and terrifying at the same time. They can be anything from a hilarious cat video to a cringe-worthy dance performance. But just like images, user-generated videos also need content moderation. Trust us; you don’t want to stumble upon a video of your grandma doing the Macarena in her underwear. We’re here to protect you from that kind of trauma.
Examples of videos we moderate
- Graphic violence: Videos depicting violence, such as fights, assaults, and shootings. These videos can traumatize viewers and must be moderated to protect users from such content.
- Sexual content: Explicit or sexually suggestive videos, common on platforms for creating and sharing short-form clips. These videos can harm younger viewers and require moderation to ensure age-appropriate content.
- Copyright infringements: Users can easily upload videos, but sometimes they may use copyrighted content without permission.
- Self-harm: Videos in which users threaten to harm themselves (or others).
Types of content moderation
We have covered this topic extensively in about a gazillion blog posts. The five most common content moderation types are:
- Pre-moderation
- Post-moderation
- Reactive moderation
- Proactive moderation
- Distributed moderation
Glossary
- API – An API (application programming interface) is a way for different programs to talk to each other and share information, a bit like two people having a conversation. It acts as a translator, allowing the programs to understand and work with each other. (There’s a tiny code sketch at the end of this glossary if you want to see one in action.)
- Automated & AI-powered moderation – Content moderation that uses algorithms (you know, a lot of code) to identify and remove inappropriate content, typically through image recognition, natural language processing, and other forms of automated content analysis. This is something we at Besedo do successfully for many of our customers.
- Automation rate – The share of moderation decisions that can be handled automatically, without a human reviewer. Besedo has helped companies like Kaidee and Change.org achieve very high automation rates. (A small worked example covering both this and ART appears after the glossary.)
- Average Reviewing Time (ART) – The average time it takes for a piece of content to be reviewed. Latency kills, but faster is not always more accurate.
- Balancing free speech and content restrictions – The tension between allowing free expression and maintaining a safe and respectful environment. Platforms must strike a balance between letting users express themselves freely and enforcing content policies that prevent harmful or inappropriate content from being shared. And no, content moderation is not censorship.
- Code of conduct – A set of ethical guidelines that govern the behavior of users on a platform. The code of conduct usually includes policies on respectful behavior, non-discrimination, and other ethical considerations. If you don’t have one, you should get one; at least then you have something to point to.
- Community guidelines – Guidelines that outline the rules and expectations for platform users – the “house rules,” if you will. These include policies on content, behavior, and conduct.
- Content policies – Not the same as community guidelines. Content policies outline what types of content are allowed or prohibited on a platform: what can users write, and what kinds of images and videos can they post? This can include guidelines on hate speech, harassment, explicit content, and other inappropriate material.
- Copyright infringement – The unauthorized use of copyrighted material in a way that violates one of the copyright owner’s exclusive rights, such as the right to reproduce or perform the copyrighted work or to make derivative works. Examples include copying a song from the internet without permission, downloading pirated movies, or using images on an online marketplace without permission. Copyright infringement is illegal and subject to criminal and civil penalties.
- Decentralized moderation – Moderation distributed across a network of users rather than being controlled by a central authority. This can involve peer-to-peer networks, blockchain technology, or other forms of decentralized moderation.
- False positive – A flag or alert that incorrectly indicates a violation – for example, a perfectly acceptable piece of content that gets flagged as inappropriate.
- Filters – Filters play a crucial role in content moderation: they can automatically identify and remove inappropriate content, such as hate speech or explicit images, before it reaches a platform’s audience. (A simple keyword-filter sketch appears after the glossary.)
- Hate speech and harassment – Offensive, threatening, or discriminatory speech, including targeted attacks on individuals or groups based on race, gender, religion, or other characteristics.
- Human moderation – Moderation that relies solely on human moderators. This can involve a team of moderators reviewing and removing inappropriate content.
- Image recognition – Technology that can identify and classify images. In content moderation, this is used to identify and remove inappropriate or explicit images – nudity, text in images, underage people, and a lot more. But it’s also very useful for approving relevant content, such as photos of people in bathing suits or underwear posted in the right category on an e-commerce website.
- Inappropriate content – Simply put, content that violates a platform’s community guidelines or terms of service. This can include hate speech, harassment, and explicit content. What this entails differs from platform to platform.
- Machine learning – A type of artificial intelligence that allows the software to learn and improve over time without being explicitly programmed. This can be used in automated moderation tools to improve accuracy and efficiency.
- Manual moderation – Content moderation that human moderators perform. This can involve reviewing flagged content, monitoring for inappropriate activity, and enforcing platform policies. Manual content moderation is part of Besedo’s offering.
- Misinformation and fake news – False information that is spread intentionally or unintentionally, including conspiracy theories, hoaxes, and other forms of misinformation.
- Natural language processing (NLP) – Technology that can analyze and understand human language. In content moderation, NLP identifies and removes inappropriate language and hate speech. But it’s so much more than that. Natural language processing is also a way for a machine to learn the difference between online banter and actual threats. It’s a way for the machine to learn about sarcasm and all those things we humans take for granted. On our engineering blog, you can learn more about this.
- Platform-generated content – Content that is generated by the platform or website itself. When you hear about platform-generated content, it is usually automated posts, system-generated messages, and ads.
- Post-moderation – Moderation that takes place after content is published on a platform. Sometimes this can involve users flagging inappropriate content and human moderators reviewing and removing it.
- Pre-moderation – Content moderation that takes place before content is published on a platform. This can involve human moderators reviewing content and flagging inappropriate content before making it public.
- Proactive moderation – Moderation that aims to prevent inappropriate content from being published in the first place. This requires filters, automated tools, AI technology, or human moderators actively seeking out and removing inappropriate content.
- Reactive moderation – Moderation in response to user reports or complaints. Content moderators review and remove reported content. This is a very powerful tool, but for most websites it should only supplement one of the other moderation methods.
- Spam and scams – Unsolicited messages or attempts to deceive users for financial gain. Oftentimes this includes phishing scams, fraudulent messages, and other forms of unwanted communication.
- Take-down – The action of removing a piece of content or a user from a platform.
- Terms of service – The legal agreement users must accept in order to use a platform. It outlines the terms and conditions of using the platform and the consequences for violating them.
- Time to site – The time it takes from when a user submits a piece of content until it goes live on the platform, including the moderation process. (Not to be confused with Time To Live, TTL, which in networking, CDN caching, and DNS caching describes how long data stays valid.)
- Trust & safety – Refers to measures to ensure a safe and trustworthy environment for users, including policies, reporting tools, and risk identification systems, to build user trust and protect against harmful or abusive content or behavior.
- User Experience (UX) – The overall experience and satisfaction a user has when interacting with a product, system, or service.
- User-generated content (UGC) – Content that is created by users of a platform or website. Examples include any text, images, and videos uploaded by users. There’s a whole article that goes into great detail about UGC.
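As promised in a few of the entries above, here are some small code sketches to make things more concrete. First up, the API: a minimal sketch (in Python, using the requests library) of one program asking another for a moderation decision over HTTP. The endpoint, payload fields, and response format are entirely made up for illustration – a real moderation API will look different.

```python
import requests  # third-party HTTP library: pip install requests

# Hypothetical example: the URL, payload fields, and response format are
# invented purely to show how two programs exchange information via an API.
def submit_for_moderation(text: str) -> bool:
    """Send a piece of user-generated content to a (fictional) moderation API
    and return True if it was approved."""
    response = requests.post(
        "https://api.example.com/v1/moderate",   # placeholder endpoint
        json={"content": text, "type": "text"},  # the request payload
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("decision") == "approved"


print(submit_for_moderation("Selling a barely used bike, great condition!"))
```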
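Next, automation rate and Average Reviewing Time (ART). The log format and the numbers below are invented example data – how you actually measure these depends on your moderation setup – but the arithmetic is the point: automation rate is the automated share of all decisions, and ART is total review time divided by the number of reviewed items.

```python
# Made-up moderation log: who made each decision and how long the review took.
moderation_log = [
    {"decision_by": "machine", "review_seconds": 0.2},
    {"decision_by": "machine", "review_seconds": 0.3},
    {"decision_by": "human",   "review_seconds": 45.0},
    {"decision_by": "machine", "review_seconds": 0.1},
    {"decision_by": "human",   "review_seconds": 60.0},
]

automated = [item for item in moderation_log if item["decision_by"] == "machine"]

# Automation rate: the share of all decisions handled without a human reviewer.
automation_rate = len(automated) / len(moderation_log)

# ART: the average time it takes for a piece of content to be reviewed.
art = sum(item["review_seconds"] for item in moderation_log) / len(moderation_log)

print(f"Automation rate: {automation_rate:.0%}")     # Automation rate: 60%
print(f"Average Reviewing Time: {art:.1f} seconds")  # Average Reviewing Time: 21.1 seconds
```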
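And finally, the idea behind a keyword filter, boiled down to a few lines. The blocked terms are placeholders, and real filters are far more sophisticated – this naive version misses misspellings, context, and sarcasm entirely – but it shows how content can be checked before it ever reaches the audience.

```python
# A deliberately simple pre-publication keyword filter (placeholder terms).
BLOCKLIST = {"scamword", "badword"}


def passes_filter(text: str) -> bool:
    """Return True if the text contains no blocked terms and may go live."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return words.isdisjoint(BLOCKLIST)


for post in ["Lovely vintage lamp for sale", "Totally not a scamword offer"]:
    status = "published" if passes_filter(post) else "held for review"
    print(f"{post!r} -> {status}")
```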
Ahem… tap, tap… is this thing on? 🎙️
We’re Besedo and we provide content moderation tools and services to companies all over the world. Often behind the scenes.
Want to learn more? Check out our homepage and use cases.
And above all, don’t hesitate to contact us if you have questions or want a demo.