Surveillance Capitalism - Or - Why Bradism Will Never Track You
Shoshana Zuboff must be an incredibly intelligent person. I know that starting a review of a book with a sentence like that could sound sarcastic when read on the internet, but I mean it. Throughout this book and its broad range of topics it is evident that the author has a comprehensive understanding of - among many things - economics, business, psychology, history, philosophy, and of course technology. I'm potentially as qualified as her to talk about just one of those subjects.
Does that mean I shouldn't review this book? Probably. But because I aim to reflect on the non-fiction I read in order to absorb it better, I'll review it anyway, with the caveat that the author is a lot smarter than me and that my opinions are not authoritative.
Why does that matter? Well, a good chunk of the opening section of this book is dedicated to explaining that just because you think you're smart enough that tech companies and their disrespect for your privacy won't influence your spending habits or your life, you can't know that for sure. No one can, because the pervasiveness of big tech companies is doing something to human civilization that no one has ever seen through to the end before. Not even Napoleon.
That was exactly my attitude. I've never really bought anything on impulse because of an aptly timed buy button appearing. I block ads and trackers. I do all my web searches in private browsing mode. Everything on my Facebook is locked down.
If you're like me in those regards, you're not going to learn anything shocking while reading this book. Instead, what you'll get is a thorough summary of how Google, then Facebook, and then the rest adapted their business and operating models to use a huge (gigantic) amount of computing resources to track and classify every person on the planet, primarily for the sake of competitive advantage and revenue from ad sales. And if you think you're exempt from tracking and predictions based on your personality, you'd better hope you've never appeared in the background of a photograph, had a Street View car drive past your Wi-Fi network, or had your Wi-Fi or Bluetooth on in your phone while walking around in a public place.
So if Google, Facebook, Amazon and the rest of the internet are now an orgy of cyber surveillance and pushy marketing that affects most people but definitely not me, what's the big deal? Just because something is unregulated, does that mean it's bad?
Probably. The key takeaways for me were:
Governments and society cannot keep up with technological evolutions, or hope to regulate them. This is exacerbated by the fact that both governments and corporations are essentially just people, and quite often they are people motivated by making money. And sometimes (often) there is overlap between the people in big tech and the people regulating big tech.
This is bad for humanity in general because of the opportunity costs it imposes on building a better world.
The book describes a learning and teaching divide, where public advancement of machine learning, AI and data mining is held back because established companies hire the best people and patent their ideas for the sake of competitive advantage, i.e. to charge more for ads for things. If some of those resources were turned towards other endeavours, like combating climate change and poverty or exploring space, humanity might be able to advance further in my lifetime.
Instead, this type of capitalism is fuelling overconsumption. If society could buy a little less impulsively, and in lower quantities, there would be a lot of material benefits. Less consumerism means less pollution and carbon, and reduced spending means reduced earning requirements. Ramping down consumerism might be what gives us that four day work week and a healthier planet to enjoy it on.
Finally, the long-term impact of this kind of technological immersion, combined with poker machine logic JavaScript functions, has never been measured in young people, though the book refers to many peer-reviewed studies that describe the negative implications for their psychology. And it doesn't sound like a good idea to sacrifice the minds of Gen Z and future generations to the "machine zone" for the sake of increased profits.
These threats appear to be material, but I didn't like how Zuboff uses strawman arguments to paint the evils of future technology against dumb policies that can't stop it. After bestowing so much credit on artificial intelligence, I don't see evidence that "humanity" can't be programmed into the governing processes and software policies.
The stanzas of sonnets that open each chapter give some contrasting artistic imagery to the scientific subjects of economics and computer science, but in Part 3 in particular I feel the argument gets too poetic. After all, do Zuckerberg or the Google board really want to be the heads of a totalitarian government? Or just make a lot of money? Or are their motives irrelevant? There's no doubt these companies possess incredible, possibly unregulatable power over markets and people. As the book points out, the power to make power means that even if they're not evil now, they may already be on an unstoppable path.
Does this mean we will never be able to "live free in a human future"? I don't know. I'm not that intelligent.