SUPPORT HUMAN RESOURCES

The Trust & Safety Academy

Website: The Trust & Safety Academy, ActiveFence
Type: Course
Location: Self-paced, Online
Price: Free


Welcome to the Trust & Safety Academy, an innovative online program meticulously crafted by ActiveFence. Tailored as a guiding compass, this immersive course caters to both seasoned Trust & Safety professionals and those stepping into the industry, promising unparalleled expertise.

Introduction to Trust & Safety

How did the industry of Trust & Safety develop? What role does Trust & Safety play within organizations and the larger tech ecosystem? What are the threats facing online platforms and their users across different types of platforms? These are the questions Goldberger will answer in this first introductory class. From the creation of the internet and the first innovations in safety-driven technology to the impact of online platforms on the world, she will share the history, threats, and events that have shaped Trust & Safety and where it stands today.

Disinformation, Misinformation, and Geopolitical Risk

This two-part session will tackle the complexities of disinformation, providing an understanding of how false narratives are created, the threat actors behind them, and how they interplay with current events. We will discuss how foreign and domestic actors use fake accounts, amplify content, and establish complex networks to manipulate public discourse. Using real-world examples such as recent elections and the war in Ukraine, we will look at how platforms are quickly hijacked to spread false narratives. In the second half of the session, we will demonstrate how emerging trends, narratives, and misinformation create risks to online platforms that come with potential offline consequences. We will also explain how narratives promoting social unrest, hate speech, conspiracy theories, and political and health misinformation can arise from significant political and cultural events.

Terrorism and Extremism

Some of Trust & Safety’s most difficult work involves the ugliest content on the web: material that builds support for, promotes, or disseminates terrorism, extremism, hate speech, and violence. The recruitment of terrorists, promotion of white supremacy, and live streams of mass shootings are some real, on-platform examples of these risks. These abuses can come from organized networks or coordinated predators using sophisticated methods to avoid detection. We will dive into how threat actors enter online spaces to foment and spread these harms, and will explain their complexities and evasive techniques.

Child Abuse

In this session, we will focus on the online threats facing children, such as grooming, sextortion, and exploitation. Examining how the nuanced tactics of digital predators evade detection, we will explain how predator communities have learned to use codewords to exploit both the dark web and popular platforms.

The Intelligence-Based Approach to Content Moderation

After gaining a clear understanding of the online threat landscape, we will dive into the inner workings of Trust & Safety teams. We will start with the most essential component - detecting harmful content. Nisman will break down how the tools used by Trust & Safety teams allow for the proactive detection of threat actors and how employing tactics such as abuse analysis, linguistic recording, and network tracking can help teams prevent the spread of these harms.

The AI-Based Approach to Content Moderation

This class will shed light on the other side of harmful content detection: artificial intelligence. Orr will explain how technologies like machine learning models, automation, digital hashing, and risk scores help teams scale by scanning more content more quickly, increasing the recall rate for potentially harmful content, and, by extension, protecting the mental health of human moderators. Orr will also discuss the limitations of AI - from lacking visibility into the nuance and context of content to overlooking regional differences and content sentiment.

Trust & Safety’s Legislative Environment

In the past few years, the industry has witnessed a surge in countries worldwide regulating online platforms. Legislation like the UK Online Safety Bill, the EU’s Digital Services Act, and California’s Age-Appropriate Design Code Act has changed how online platforms are regulated. This session will review these laws in addition to Section 230, current court cases on platform liability, and the future of internet law.

The Trust & Safety Lifecycle

Working in a young industry, Trust & Safety teams are tasked not only with protecting platforms and their users, but also with learning on the fly how best to prepare, create, and maintain these digital spaces. This session will review the lifecycle of Trust & Safety teams, including safety by design, triaging risks, measuring success, and releasing transparency reports. It will provide students with an understanding of the different components necessary to make and keep platforms healthy from the beginning, as well as how to maintain trust in the public sphere.

About the author
Steph Lundberg

Steph is a writer and Support leader/consultant. When she's not screaming into the void for catharsis, you can find her crafting, hanging with her kids, or spending entirely too much time on Tumblr.
