It's impossible to escape the Facebook "scandal" at the moment, and it's important to be fully aware of what is going on. I think this is a defining moment in our digital evolution as a society, so it's worth spending some time reflecting on what is happening.
As you have surely heard, it has been revealed that a rogue researcher by the name of Kogan at Cambridge University built an app that scraped a lot of data from Facebook users and their "friends". (Apologies for the many quotes, but so many of the words used in this story have been hijacked to mean different things.) Nothing that Kogan did at that point was against Facebook's terms. This is, of course, the crux of the story - it may not have violated Facebook's terms, but it may have been highly unethical nonetheless. In any case, Kogan then shared the data with a third party, which did violate those terms, and this is how the company Cambridge Analytica (CA) got hold of the data. CA then used the data for political purposes. Some people say CA was a decisive factor in the Trump and Brexit victories, but there is at the moment no evidence for that.
The reactions of shock that I've heard so far are of four types:
1. Why does Facebook have so much data on us?
2. Why does Facebook allow others to obtain our personal data?
3. How is this data used to manipulate us?
4. Are all tech companies the same? What about Apple, Google, Amazon, Twitter?
Let's address each of these points briefly.
1. Why does Facebook have so much data on us?
The easy answer is: because we give it to them. But there is more to this than meets the eye. Facebook tracks you almost everywhere you go online. Facebook also tricks you into sharing more data than you are probably aware of. As many Android users have found out, Facebook has been scraping their call and text message data for years - either without permission, or using extremely sleazy tricks to get "permission" from its users. Facebook's value proposition is targeted advertising. Advertisers pay lots of money to Facebook to show their ads to a small, specific target group. This is a highly efficient way to advertise because you know you are reaching the right audience. It is this lucrative advertising model that has turned Facebook into one of the most highly valued companies on the planet. Yes, Facebook is a surveillance machine, but it itself has no malicious intent - it just wants to know everything about you so it can match you to advertisers. Facebook is not a data seller; it is a matchmaker. The more it knows about you, the better it can match you with those who are willing to pay.
2. Why does Facebook allow others to obtain our personal data?
If data is Facebook's gold, why would it share it with others - such as Kogan, or anyone else developing an app on its platform? The best answer I can give is that by opening up to app developers, Facebook was hoping to increase engagement on its platform. The more you use the Facebook platform, the more Facebook knows about you, which is good for its matchmaking capabilities. Facebook is, of course, aware of this problem and began limiting data access some time ago. Given the current scandal and bad press, Facebook will almost certainly continue to restrict third-party access to data.
3. How is this data used to manipulate us?
As mentioned above, Facebook is in the matchmaking business, and it sells that matchmaking access to anyone willing to pay for it. This is no secret - you can go to Facebook and read in great detail how it works. Facebook writes: "With our powerful audience selection tools, you can target the people who are right for your business." It should come as no surprise that by "business" they mean anyone willing to pay, including politicians and organizations with political intent. Advertising is manipulation.
4. Are all tech companies the same? What about Apple, Google, Amazon, Twitter?
It's easy to engage in the blame game and accuse all tech companies of being "data hungry". Isn't it always good to know more about your users? Yes, but as we're learning, that knowledge is also a huge liability (and this doesn't even factor in direct legal liabilities - hello GDPR). The central question is whether that knowledge is core to your business. This is clearly not the case for Apple. The vast majority of Apple's business is selling hardware at a very high margin. Apple is now actively advertising the fact that it can take privacy very seriously because its business doesn't depend on user data, which is both true and smart. The majority of Amazon's income is also not advertising, but retail and web services. For Google and Twitter, the story is different, because their business does indeed depend on knowing their users in order to target ads at them. Close to 90% of Google's and Twitter's income comes from advertising. Twitter may be in a better position because it is a micro-blogging platform: it would be difficult to be outraged that Twitter data can be used by anyone, given that it is de facto public data. In addition, Twitter is still very small compared to Facebook. Google may be the closest to Facebook in terms of business model. But importantly, Google does not run a social communication network - it tried with Google Plus, and failed - and that sets it a bit apart. It is difficult to insert manipulative political content into the discussion unless you are the discussion platform. Still, the concern with Google is that its business currently depends most strongly on knowing users intimately.
Now what?
These answers can provide us with some insights. The first is that Facebook is never going to change substantially. The more it knows about you, the better it can do its matchmaking, which is of existential importance to its multi-billion-dollar business. That is why Mark Zuckerberg has been on a 14-year apology tour - he embodies the idea of asking for forgiveness, not for permission. The second is that Facebook will continue to be used for political manipulation. As historian Niall Ferguson put it so aptly, there are two kinds of politicians: those who understand Facebook advertising, and those who will lose. We have just seen the tip of the iceberg. The third is that regulation will be essential to tame the beast, which is not Facebook itself, but the extreme effectiveness of micro-targeting. I believe you can manipulate absolutely anyone if you know all the details about their life, their friends, their fears, and their dreams. And it is generally not necessary to manipulate everyone very strongly; by just nudging a fraction of people who are undecided on an issue, systems can change rather dramatically. Nudging 10% of swing voters can decide the victor; nudging 10% of undecided parents to opt out of vaccination can lead to large disease outbreaks, and so on (the toy calculation below makes this concrete). The fourth is that Mark Zuckerberg may have to step down from Facebook, which could spell its end in the long run. He built Facebook, and he stands for everything that happened, for better or worse. I fully believe Facebook did not have any malicious intentions - it simply discovered an extremely lucrative business model and ran with it. But this is not just another "oops - we're sorry" story that's going to go away soon. People are waking up to the core of the Facebook business model - and to some extent to the micro-targeting model - and they don't like it. Someone will have to face the consequences.
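To make the nudging point concrete, here is a minimal back-of-the-envelope sketch in Python. All numbers are made up for illustration: an electorate split roughly 47.5% to 46.5% with 6% undecided, where the undecided would otherwise split evenly. The only point is that swaying a modest fraction of a small undecided group can flip the outcome.

```python
# Toy model of how micro-targeted nudging can flip a close vote.
# All numbers are invented for illustration; this is not a model of any real election.

def vote_shares(base_a=0.475, base_b=0.465, undecided=0.06, nudged_to_b=0.0):
    """Final shares when a fraction of the undecided is nudged toward B.

    The undecided voters who are not nudged are assumed to split evenly.
    """
    remaining = undecided * (1 - nudged_to_b)
    share_a = base_a + remaining / 2
    share_b = base_b + undecided * nudged_to_b + remaining / 2
    return share_a, share_b

for nudge in (0.0, 0.1, 0.2, 0.3):
    a, b = vote_shares(nudged_to_b=nudge)
    winner = "A" if a > b else "B"
    print(f"nudge {nudge:.0%} of the undecided -> A {a:.1%}, B {b:.1%} (winner: {winner})")
```

In this toy setup, persuading about 20% of the undecided - little more than 1% of the whole electorate - is enough to change the winner, which is why targeting the right small group is so much more effective than broadcasting to everyone.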
CODA
As a final note, I found it incredibly liberating, a bit more than a year ago, to leave Facebook. I did it because it took more from me than it gave me, and the truly valuable interactions continued through other communication channels. I was also getting concerned about its surveillance power, but at the time that was the lesser problem for me. Fundamentally, though, I do believe that the only way to solve the extreme micro-targeting problem is to abandon the platforms whose business is entirely built on it, and for many of us this should be easy. I am extremely disturbed to hear some people argue that their ability to communicate with friends depends on Facebook. In the end, unless we realize that Facebook's business depends on being our communication platform, and on knowing everything we communicate through it for efficient micro-targeting, we won't be able to argue that we are part of the solution.