The plot twist in Facebook’s war against political advertisers: banning the researchers who investigate them in order to protect them?

For a couple of months now, we’ve been the audience to a spectacle starring Facebook and the people in charge of political campaigns’ advertising. We’ve seen them go back and forth, we’ve seen Facebook ban ads over their geographic location, and we’ve seen political advertisers one-up the platform every time. But just like in every typical rom-com, things have now taken a twist, coincidentally right when it wasn’t just political ads being threatened, but the whole platform itself.

Facebook has been updating its advertising policies a lot these days, all in the name of protecting its users’ privacy and preventing misinformation, obviously. For example, its much-debated ban on launching political ads in a country that doesn’t match your geographic location was justified as a way to keep foreign political groups from meddling in other countries’ elections. But of course, ads about gambling and drinking targeted at teenagers were not too much of a threat, were they?

It’s no longer a surprise to hear about Facebook doing something incomprehensible; it’s something we’re kind of used to by now, and we’re mostly curious about which excuses they’ll give this time (a disappointed-but-not-surprised moment). The reality is that they have long been stopping political advertisers from doing their thing, because those were the big troublemakers of the platform, always trying to jeopardize users’ safety and privacy… But what happens when someone else jumps into the story and puts both of them in the eye of the hurricane (or at least the eye of the press)?

Well, what happens is also not surprising: Facebook very easily switches sides and instantly gets supportive and protective of its publishers, again in the name of user safety. But hey, aren’t we jumping to conclusions a bit too fast? So let’s get into the details: what happened, and what we think really happened.

So let’s begin with the story’s context: there is an organization named Cybersecurity for Democracy, which works to expose online threats to, well, democracy. This organization runs the Ad Observatory, a project created to collect data about political advertising on social media in order to prevent misinformation and understand what the ads contain, who they are being targeted at, and why.

The Ad Observatory’s study consists of installing a plugin in your browser that collects data on the political ads users see and why they are seeing them, i.e., why they were targeted. This plugin doesn’t collect personal data, but it does collect the advertiser’s name, the ad’s content, the information Facebook discloses about how the ad was targeted, and the moment the ad was shown to a user, according to the Daily Dot’s article. That data is collected by volunteers and then made public, to document the mechanisms Facebook is using and whether or not they can be trusted.
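To make the scope of that data collection concrete, here’s a minimal sketch of what a record captured by such an extension might look like. This is purely illustrative: the `AdObservation` type and its field names are my own assumptions, not the project’s actual code. They simply mirror the fields listed above (advertiser name, ad content, Facebook’s targeting disclosure, and a timestamp) while deliberately leaving out any personal data about the person viewing the ad.

```typescript
// Hypothetical sketch of the kind of record a research extension might collect.
// Field names and types are assumptions based on the fields described above,
// not the Ad Observatory's actual implementation.

interface AdObservation {
  advertiserName: string;      // who paid for the ad
  adContent: string;           // the ad's text/creative, as shown in the feed
  targetingDisclosure: string; // Facebook's "Why am I seeing this ad?" explanation
  observedAt: string;          // ISO timestamp of when the ad was shown
  // Note: no user ID, profile data, or friend list -- nothing identifying the viewer.
}

// Example record a volunteer's browser might contribute to the public dataset.
const example: AdObservation = {
  advertiserName: "Example PAC",
  adContent: "Vote for candidate X on election day!",
  targetingDisclosure: "People aged 30-45 interested in politics in Ohio",
  observedAt: new Date().toISOString(),
};

console.log(JSON.stringify(example, null, 2));
```

The point of a shape like this is that every field describes the advertiser and the ad, not the volunteer, which is what would let the dataset be published openly without exposing anyone’s personal information.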

But Facebook was apparently not so excited about the research these users were conducting on its platform and, even though the data collected came from advertisers and not from private users, they banned the researchers in the name of users’ safety. According to Mike Clark, Facebook’s Product Management Director, even though the project may have been well-intentioned, they couldn’t ignore its ongoing violations of their protections against scraping, and it had to be shut down. He also emphasized that the browser extension collected data about Facebook users who had not consented to its collection (even though those weren’t really users, but actually advertisers).

However, Laura Edelson, a member of Cybersecurity for Democracy, didn’t take long to respond to Clark’s accusations, publishing her response in a Twitter thread. Basically, her point was pretty simple: Facebook’s concerns had nothing to do with users’ privacy; the company was silencing the researchers because they were calling out problems that the platform had and did nothing about. She also explained that Facebook had banned not only her account but those of many of her team members, leaving them completely unable to access Facebook’s ad library and carry on with the study, meant to, in her own words, “identify misinformation in political ads, including many sowing distrust in our election system, and to study Facebook’s apparent amplification of partisan misinformation.”

Beyond its effects on the Ad Observatory project, the account banning also meant that many members of the organization who were participating in a different project on vaccine misinformation were now unable to access the platform and continue their research. But well, who cares about vaccine misinformation if Facebook’s advertisers feel safe, right?

It seems to me that Facebook’s ongoing protection updates are not really genuine, especially after their whole metric-inflation scandal, for which they still haven’t taken responsibility. So this organization comes in and tries to run a pretty cool project, one that would help a lot of people understand where the political ads they see come from and how much they should trust them, and Facebook bans it. So let’s be honest: who is Facebook really trying to protect?

An interesting answer would be their political advertisers. If the researchers had found these publishers doing anything illegal or unethical, they could have disclosed it and exposed a whole scandal, which obviously wouldn’t have looked great for politicians or their advertisers. However, even if Facebook did care about losing publishers, would that really make so much of a difference that they had to ban the researchers? I mean, they could totally have contacted both parties and maybe tried to work on a project all together. But they didn’t; they chose to stop them.

So was Facebook protecting users and advertisers, or were they really trying to protect themselves? A scandal involving their publishers would mean that their own platform is flawed and potentially unsafe for users, so why would they take any chances on a project using their platform to bring them down, or at least crack their ads system open a bit? Exactly, it wouldn’t have made any sense. So they banned the researchers and exiled them from the whole platform instead.

So I guess my point here is quite simple. Facebook, like any other tech giant, has been known for always (always!) having an ulterior motive behind the reasons it gives the public. And this case is no different from things we’ve seen before: someone tries to expose the truth about their mechanisms, and they get banned completely. So, for now, I guess all we have left to do is sit back and wait for the next chapter of the Facebook & Political Advertisers soap opera.
