Meta’s Oversight Board is preparing to make some job cuts, according to a report by The Washington Post.
Last week, the body dubbed Meta’s “supreme court” told some employees their jobs were at risk, per an unnamed source quoted by the newspaper.
The cuts are expected to affect staff who support the 22 experts, such as academics and lawyers, who make decisions about content moderation on Facebook, Instagram and Threads, according to the Post.
The Oversight Board, which operates independently from Meta, was first announced by Mark Zuckerberg in late 2018 and began operating in October 2020. It was initially funded by Meta with a $130 million grant, followed by a further $150 million in 2022.
The Oversight Board Trust chair, Stephen Neal, confirmed it was making “targeted cuts” in a statement sent to Business Insider.
He said the reductions would allow the board “to further optimize our operations by prioritizing the most impactful aspects of our work that are delivering results for millions of people who use Meta’s platforms around the world.”
Neal said Meta remained committed to the board’s success, and the board was confident the company would continue to provide additional funding in the future.
“Looking forward, we will continue to take the hardest cases, keep holding Meta to account, while working to improve how people across the world experience Facebook, Instagram and Threads,” he said.
A Meta representative told BI that the company “remains committed to the Oversight Board, which operates independently from the company, and continues to strongly support its work.”
Meta said that it valued the board’s perspective and planned to continue updating policies and practices in response to their feedback.
Although the Oversight Board operates independently from Meta, the layoffs could weaken the company's ability to police misinformation on its platforms just as concerns about its spread mount ahead of the US election.
The Financial Times reported that regulators were already concerned that Meta’s moderation did not go far enough to target political advertising that put electoral processes at risk.
Big Tech companies have been trying to show they are ready to combat new threats posed by the rise of AI-generated content and deepfakes. The proliferation of widely accessible AI tools has led to a surge of fake visual content online, which many platforms are struggling to police.
Meta recently announced that it would apply its "Made with AI" label to a wider range of content, following an Oversight Board recommendation.
The company said it would add the label to audio, video, or images when industry-standard AI image indicators are detected, or when users identify the content they upload as AI-generated.