Why Meta’s fact-checking rollback is a dangerous gamble for trust, safety, and inclusion
What happens when a social media platform becomes uninhabitable for marginalised communities?
In a world increasingly defined by digital spaces, Meta’s decision to eliminate fact-checkers, loosen content moderation, and amplify political content feels less like a commitment to free speech and more like an experiment with colossal consequences.
This announcement raises questions that are deeply uncomfortable but impossible to ignore.
Who benefits from this decision, and who pays the price?
Mark Zuckerberg’s rhetoric frames this change as a victory for free speech. But free speech on platforms with billions of users doesn’t exist in a vacuum. It has ripple effects on public trust, political stability, and personal safety.
For advertisers, this creates an environment where brand safety becomes increasingly difficult to guarantee. Imagine your ad running alongside posts promoting hate speech, conspiracy theories, or political extremism.
What happens when your brand unwittingly becomes associated with dangerous narratives? Edelman's Trust Barometer research found that 81% of consumers say they need to trust a brand before they will buy from it. Trust is fragile, and misinformation shatters it.
For marginalised communities, particularly LGBTQIA+ people, this change feels like a rollback of years of progress. When fact-checkers leave and political content spikes, vulnerable groups become targets.
We have already seen how unchecked online rhetoric can translate into offline harm, whether it’s anti-trans legislation or violence against queer individuals.
And here’s the kicker: this isn’t speculation. It’s data. In 2024, GLAAD’s Social Media Safety Index gave Meta a failing grade for its ability to protect LGBTQIA+ users from hate speech and harassment. Removing fact-checkers and weakening hate speech guidelines will undoubtedly exacerbate these issues.
So, let’s ask the hard question: Is Meta prioritising profits over people?
Meta’s argument: The free speech illusion
Zuckerberg argues that removing fact-checkers will reduce censorship errors. He’s not entirely wrong.
No system is perfect, and moderation mistakes do happen. But the idea that a crowdsourced system, akin to X’s Community Notes, can replace professional fact-checkers is fundamentally flawed.
Community-based moderation often rewards the loudest voices, not the most accurate ones. It creates echo chambers where misinformation spreads unchecked and minority voices are drowned out. A large study from MIT's Media Lab, published in Science in 2018, found that false news on Twitter reached people roughly six times faster than the truth.
If Meta is truly committed to creating spaces for diverse, open discourse, why are the protections for marginalised communities being rolled back? Why is political content being amplified while the systems meant to prevent harm are being dismantled?
Marketers, brand safety is now your problem
For marketers and advertisers, the consequences are clear: your brand is now at the mercy of Meta’s diminished guardrails. Hate speech, political extremism, and conspiracy theories will become harder to contain. The brand safety tools you rely on may no longer be enough.
So, what can you do?
1. Diversify your advertising channels: Relying heavily on Meta platforms like Facebook, Instagram, and Threads now carries a significant reputational risk. Invest in platforms with stricter content moderation, like LinkedIn or YouTube.
2. Invest in social listening tools: Stay vigilant. Monitor your ad placements and social mentions to ensure your brand isn't being associated with harmful content (a rough DIY sketch follows this list).
3. Build your own channels: Strengthen your owned platforms like email newsletters, branded communities, and exclusive content hubs. The less reliant you are on rented social media spaces, the better.
4. Partner with trusted influencers: Collaborate with creators and influencers who align with your values. Their authentic connection with audiences can help counterbalance misinformation.
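If you don't have a listening tool in place yet, even a crude first pass beats flying blind. Here is a minimal Python sketch of the idea, assuming you can export brand mentions to a CSV file; the file name, column names, and risk terms are all illustrative placeholders, not any real tool's export format.

```python
# brand_safety_scan.py: a rough first-pass triage of brand mentions.
# Assumes a CSV export ("mentions.csv") with "url" and "text" columns.
# File name, columns, and risk terms are illustrative placeholders.
import csv

# Hypothetical starter list; tune to your brand's actual risk profile.
RISK_TERMS = {"conspiracy", "hoax", "extremist", "groomer", "hate"}

def risk_score(text: str) -> int:
    """Count how many distinct risk terms appear in a mention."""
    words = set(text.lower().split())
    return len(RISK_TERMS & words)

def flag_mentions(path: str, threshold: int = 1) -> list[dict]:
    """Return mentions at or above the risk threshold, worst first."""
    flagged = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            score = risk_score(row.get("text", ""))
            if score >= threshold:
                flagged.append({"url": row.get("url", ""), "score": score})
    return sorted(flagged, key=lambda r: r["score"], reverse=True)

if __name__ == "__main__":
    for hit in flag_mentions("mentions.csv"):
        print(f"{hit['score']}  {hit['url']}")
```

A keyword score this crude will throw false positives, but it surfaces the worst mentions first for human review, which is all a first-pass triage needs to do. Commercial listening tools apply the same principle at far greater scale.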
The human cost: LGBTQIA+ communities left in the crosshairs
As a parent, I think about my daughter growing up in a world shaped by platforms like Meta. What kind of digital spaces are we building for the next generation? Will she grow up in an online world where hate speech is normalised and truth is optional?
Meta's decision to explicitly allow accusations of mental illness based on gender identity or sexual orientation is chilling. It normalises dehumanisation.
For young LGBTQIA+ users, this kind of language is dangerous. It chips away at self-esteem, fuels mental health crises, and sometimes, tragically, leads to self-harm.
We must ask: What happens when these platforms become uninhabitable for marginalised communities? When the hate becomes too loud and the protection systems too weak?
Global implications: The politics of moderation
Zuckerberg made it clear that Meta will push back against regulations in Europe and other regions with strict digital safety laws. But the question remains: Should global tech giants like Meta be allowed to set their own rules for what’s acceptable speech?
Regulations like the EU’s Digital Services Act and the UK’s Online Safety Act exist because platforms have repeatedly failed to self-regulate. Meta’s pivot away from moderation feels less like a principled stand and more like an evasion of accountability.
Action steps: What comes next?
If you’re a brand leader, marketer, or business owner, these are the steps you must consider:
• Advocate for stronger digital regulations: Governments must ensure that platforms remain accountable for misinformation and hate speech. Support policies that prioritise user safety over corporate profits.
• Double down on DEI commitments: If Meta isn’t protecting marginalised voices, your brand must. Create campaigns that uplift these communities and counter harmful narratives.
• Audit your ad placements: Frequently check where your ads are showing up on Meta platforms. If they're appearing next to harmful content, pull them (see the audit sketch after this list).
• Speak up: Silence is complicity. Use your platform to demand accountability from Meta. The collective voice of advertisers holds significant power.
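For the placement audit, the same DIY logic applies. Below is a minimal sketch, assuming you can export a placement or publisher delivery report as a CSV; again, the file name, columns, and blocklist entries are hypothetical stand-ins, not Meta's actual export schema.

```python
# placement_audit.py: flag ad placements that match a blocklist.
# Assumes a CSV delivery report ("placements.csv") with "placement"
# and "impressions" columns. File name, columns, and blocklist
# entries are hypothetical stand-ins, not Meta's export schema.
import csv

# Maintained by hand from previous audits; entries here are made up.
BLOCKLIST = {"example-extremist-site.com", "known-conspiracy-page"}

def audit(path: str) -> None:
    """Print total exposure and a pull-list of blocklisted placements."""
    to_pull, total = [], 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            imps = int(row.get("impressions") or 0)
            total += imps
            if row.get("placement", "").lower() in BLOCKLIST:
                to_pull.append((row["placement"], imps))
    at_risk = sum(i for _, i in to_pull)
    pct = 100 * at_risk / total if total else 0.0
    print(f"{at_risk} of {total} impressions ({pct:.1f}%) blocklisted")
    for name, imps in sorted(to_pull, key=lambda t: -t[1]):
        print(f"  PULL: {name} ({imps} impressions)")

if __name__ == "__main__":
    audit("placements.csv")
```

Run something like this weekly and feed confirmed bad placements back into the blocklist; that turns a one-off check into a recurring audit.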
What’s at stake?
Meta’s changes are about trust. They are about whether social media platforms can still function as spaces for meaningful dialogue and community-building, or if they will become digital wastelands dominated by rage and misinformation.
If brands, marketers, and advocacy groups don’t step up now, the consequences will ripple far beyond Meta’s platforms. They will seep into society, eroding trust, polarising communities, and putting vulnerable people at risk.
So, let me end with this: What are you, what are we, going to do about it?
Drop your thoughts below, share this article, and let’s start a conversation that goes beyond outrage. Because the stakes couldn’t be higher.
An older article I wrote, arguing that Facebook will go the way of Twitter/X:
https://thistleandmoss.com/p/twitterx-vs-bksy-the-trans-shitshow
This is what happens when politics gets in the way of a societal good. I can't see this approach lasting long in more regulated markets like the UK and EU. Time for advertisers to say something?