An open discussion about Meta’s unethical practices, data collection, and our responsibility as small business owners
How are you feeling about the whirlwind that 2025 has been so far? If we’re being honest, the world has been quite an overwhelming place to exist recently (and I say that as a white woman living in Canada).
One announcement that sparked a lot of conversation – and frustration – particularly for small business owners, was the recent changes to Meta’s terms, conditions, and policies.
Meta Platforms Inc. is the company that most notably owns and operates Facebook, Instagram, Threads, and WhatsApp, among other products and services.
On January 7, 2025, Meta’s CEO Mark Zuckerberg posted a video on Facebook and a series of text posts on Threads (which is Meta’s version of Twitter or “X”, if you aren’t familiar), announcing comprehensive changes to their approach to content moderation and operations.
The changes come on the heels of the U.S. election and seem to be Zuckerberg's way of proactively showing his compliance with and support of Trump's administration and the "return to free speech" platform on which Trump campaigned.
Note: the changes were not presented by Zuckerberg as being directly in response to the election, but this sentiment can quickly be gleaned from the timing, wording, and specifics of the platform policies presented. Zuckerberg also acknowledged the link between the changes and political events in the announcement video, stating, "The recent elections also feel like a cultural tipping point towards once again prioritizing speech."
These changes have led many to reconsider their relationship with social media, as well as their roles and responsibilities as small business owners and consumers.
Meta’s recent changes to content moderation
In case you missed the most recent policy changes which inspired this post, here’s an overview of some of the most notable changes.
1. “Replace fact-checkers with Community Notes, starting in the U.S.”
Fact-checkers were introduced initially as a way of limiting the spread of misinformation, particularly in response to the 2016 U.S. election.
If enough users reported a story as fake, the social network passed it along to a third-party fact-checker to confirm the validity of the post.
If the information failed this check, it was publicly flagged as "disputed by 3rd party fact-checkers," with a link to additional information about why the story was disputed. This flagging also meant that Meta took action(s) to limit the visibility of these posts.
Now (at least starting in the U.S.), this program will be replaced by community notes. Facebook claims this is their way of reducing biases and personal perspectives within the fact-checking program.
"Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how…a program intended to inform too often became a tool to censor."
– Joel Kaplan, Meta's Chief Global Affairs Officer
Not all details have been released for Meta’s community notes program, but Zuckerberg did note that the system will work in a similar way to the program on X.
Essentially, the onus will now be on users to add context to posts which are potentially misleading or inaccurate.
Some initial thoughts on this policy change:
Is it really fair to put this responsibility on users? Certainly not, but this is a common tactic of companies that invoke "safe harbor" as their reason for not moderating. This protection comes from Section 230 of the 1996 Communications Decency Act (CDA) in the United States. Whether you're in the U.S. or not, this provision shapes the way these companies operate.
How are community notes going to be more reliable than third-party fact-checkers? Truthfully, the fact-checking solution wasn't any good either. Misinformation and disinformation have spread like wildfire on Facebook for years.
Zuckerberg notes that X has seen success with the introduction of their community notes program, but what analytics and data is he referencing here? What does this success look like? How is it being measured?
By its very nature, a fact-checking program would prevent bias, would it not? What specific instances of bias are they referencing as the reason for the current program failing? By speaking in generalities, the whole issue is intentionally muddied for users and other concerned parties who want to evaluate these assertions critically.
Will these changes lay the groundwork for more confirmation bias and misinformation? Most definitely.
2. “Simplify our content policies and remove restrictions on topics like immigration and gender that are out of touch with mainstream discourse.”
For additional context on this statement, Kaplan wrote,
“We want to undo the mission creep that has made our rules too restrictive and too prone to over-enforcement. We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate.”
– Joel Kaplan, Chief Global Affairs Officer
What this looks like in practice is loosening the platform's rules around hate speech and abuse, specifically when it comes to sexual orientation, gender identity, and immigration status.
For example, one of the particularly alarming changes to community guidelines states,
“We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like “weird.”
If this isn’t a sign of our society progressing backwards, I don’t know what is. We all need to be concerned by this kind of rhetoric.
3. “Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content.”
This policy change is inherently flawed, implying that teams based in California are biased while assuming those in Texas will not be.
In reality, this more likely reflects an effort to relocate content moderation to a region where individuals are more aligned with Zuckerberg’s personal views.
Zuckerberg framed these changes as a return to Meta’s “roots around free expression and giving people voice on [their] platforms.”
However, it’s clear that these shifts are driven more by political pressures and self-preservation than by any genuine effort to improve user experience.
In reality, prioritizing “free speech” in a way that tolerates hate speech isn’t about protecting open dialogue—it’s about prioritizing some users at the expense of the safety and well-being of others.
“Addressing hate speech does not mean limiting or prohibiting freedom of speech. It means keeping hate speech from escalating into something more dangerous, particularly incitement to discrimination, hostility and violence, which is prohibited under international law.”
— United Nations Secretary-General António Guterres, May 2019
Ask any member of a marginalized group who has used Meta platforms to build community and reach their audience: nearly all of them, and especially the activists, will tell you they've been suppressed for years by Meta's algorithms.
Data is Meta’s currency
Data is Meta’s second-favourite currency – because it brings in the currency the shareholders like most: Money.
With profit as their main motive, Meta's business model is simple: collect user data and sell advertisers access to the audiences that data describes.
These platforms thrive on digital advertising. They use their massive user base to offer highly targeted ads based on age, gender, location, interests, and behaviours.
Advertisers, including everyday users running small businesses, pay for this access and visibility—because the algorithm demands it.
Here's the thing, though: since advertisers are Meta’s customers—and many users double as advertisers—they’re essentially paying to be seen on a platform that controls visibility through its own algorithmic decisions. Talk about a conflict of interest!
This frustrates me, especially when I see small businesses and nonprofits struggling to reach their communities while Meta profits off the system it created.
Without users, these platforms have nothing to sell, giving us as consumers an enormous amount of power—but only if we know and choose to use it.
For the record, Meta isn’t alone. Companies like Google operate under the same model, profiting from data collection and targeted ads. (If this topic interests you, Ed Zitron talks about this a lot in his podcast, Better Offline. It's an excellent listen.)
That’s not all Meta is guilty of.
Beyond privacy concerns and unethical marketing, Meta’s platforms have been linked to some truly heinous atrocities in the real world, including loss of life.
Time and time again, they’ve violated the trust of their users, prioritizing engagement and profit over ethical responsibility.
Here’s a list of the more egregious examples. I can’t stress enough that more people should know about these stories.
Myanmar: The Rohingya Genocide
Facebook Moderators & Manipulation
Algorithmic Suppression of Marginalized People
What is shadowbanning? How do I know if it has happened to me, and what can I do about it?
Study looks at ‘shadowbanning’ of marginalized social media users
The Facebook Papers (from Whistleblower Frances Haugen)
The Trucker Convoys
How Facebook Transformed a Canadian Trucker Convoy Into a Global Movement
How Facebook twisted Canada’s trucker convoy into an international movement
Abusing User Data
The Facebook and Cambridge Analytica scandal, explained with a simple diagram
Everything You Need to Know About Facebook and Cambridge Analytica
Profiting Off Your Data, Exploiting User Lives and Connections
Changes to Community Guidelines and Moderation
These are the January 2025 changes discussed above.
Meta Is Laying the Narrative Groundwork for Trump’s Mass Deportations
LGBTQ+ advocates warn Meta’s policy changes ‘putting us back in the dark ages’
The examples of how Meta prioritizes profit over people are overwhelming, particularly when you realize this is only a collection of a handful of articles covering the worst examples.
The oldest reports above go back as far as 2018. And as I compiled this list, I couldn’t help but note that I remembered reading many of the more in-depth stories when they were initially published.
However, I continued using Facebook and Instagram despite my inner conflict. I wanted to be able to easily connect with friends and family.
Unfortunately, that reasoning hasn’t been sufficient for a long time.
Stepping away from Meta
Whether as voters, as employees, or simply as people going about our lives, the collective pressure we can put on powerful people is significant.
Taking your data away from Meta (or reducing how much they get) is an enormous step – a clear sign that you refuse to stand for their actions and policies.
Personally, I chose to delete the Facebook and Instagram applications from my phone about three years ago. I would check them only when I was on my computer, and I let friends who needed to reach me more urgently know that I was no longer going to be readily available on Messenger.
It only took a few days to get out of the habit of checking in on Facebook, though I had already been intentional about reducing how much I logged on prior to that.
The years I've spent on these applications, even as I learned more and more about the incalculable harm Meta has done, have made me resent giving them my time, attention, and data.
Then, these recent content moderation and policy changes came about. This announcement reintroduced a lot of questions for me around my role and responsibility as a Meta user – especially as someone who actively advocates for principles like diversity, inclusion, and protection of marginalized groups.
And so, I deleted my Facebook and Instagram accounts on January 31, 2025.
I’m done compromising on my personal and business values. Instead, I’ll be focusing on more intentional spaces.
You can still find me on LinkedIn, through my website, and building community through the Magnificent Marketers Club, moving forward.
What’s at stake for small business owners who take a step back from social media?
My reasoning for holding onto these social media platforms for so long was in the name of personal connection and communication.
But I’m very aware that these aren’t the only elements at play.
I know the value (or at least perceived value) of maintaining an active presence on social media, specifically for small business owners.
To me, Instagram and Facebook stopped being meaningful spaces years ago.
Features are copied from competitors, and user experience has taken a back seat to monetizing our data.
We've known for years that we are the product, and the algorithm isn’t working in our favour (especially when it comes to paying for ads).
That said, I genuinely don’t judge anyone who stays on these platforms.
These tools have become so interwoven into business processes for some that they've become essential, and I would never suggest putting your livelihood at risk.
User numbers are massive, and it can seem like your visibility will be limited if you aren’t showing up on Facebook or Instagram.
However, I would encourage you to do your research so you know what’s at stake, to be cautious of how you use each platform, and to start reducing your dependency wherever possible.
No platform that sees its users as a product is built for your success.
If we can build up our presence on other platforms – specifically those that we own, have control over, and which align with our values – chances are we won’t miss out on much by boycotting Meta platforms.
Balancing personal values with business obligations
In the spirit of being more critical of how we support platforms like Meta, here are some questions to ponder as a user and business owner.
How (and where) do you draw the line between utilizing tools like Facebook or Instagram as a way to grow your business and maintain your livelihood, while also standing firm in your values and resisting the platform’s agenda and interests?
Is it possible to be active on a social media platform for your own interests, while simultaneously grappling with ethical concerns about the platform itself?
Can you part ways with Meta platforms as a business and marketing tool? Or do they simply provide too much of your revenue generation or visibility? How can you work toward reducing your dependency? (This last one is a hard one and may take longer than expected.)
Are you hesitant to step away from these platforms because the data shows that your sales and community are there? Or are you basing your feelings on perceived value, or FOMO (fear of missing out), rather than real analytics?
What are the ethical implications of supporting massive corporations that don’t have your best interests in mind?
What would it look like to start to step back from these platforms? How can you begin transitioning your time, energy, and resources to intentional spaces that are more aligned with your values?
There’s no clear answer to any one of these. What you decide is best for yourself and your business might be different from the route I’ve decided to take, and that’s okay.
This blog post is simply meant to be my contribution to a conversation I believe needs to be had more often. Hopefully, you find it thought-provoking, and can use the information within this post to make more informed decisions.
Because we have so much more power than we tend to use.
Consider this an opportunity for you to revisit the mix of marketing channels you use, assess the potential threats that exist, and begin to ponder how you want to address them.
If you’re looking for guidance and support transitioning away from Meta platforms, I’d love to strategize with you. Let’s connect!