Low-angle shot of a security camera with an office-building background, monitoring the whole city from a height, Shanghai, China.

As commentators focus more on China’s burgeoning surveillance system, an excellent opportunity for critical reflection arises. In the past two years, scores of articles in reputable publications have described China’s surveillance capability in Orwellian terms – a dystopian cacophony of cameras, artificial intelligence (AI), facial-recognition software, and constant social monitoring.

Labeled the “social credit system,” China’s practice of using its domestic surveillance capacity to evaluate the “trustworthiness” of its citizens has received notable scholarly attention in many countries. Consumer behavior, social-media involvement, and signs of political discontent are just some of the many indicators the Chinese state will use in making this determination. The project also underscores the widespread cooperation between China’s leading technology giants (Alibaba, Tencent, Baidu) and the Communist Party. Indeed, these companies provide much of the data for creating an individual’s score.

Incidentally, the “social credit system” has no official name in Chinese … perhaps echoing an old Mao Zedong insight that to name something is to validate its existence. Regardless, while China’s burgeoning practice is indeed alarming, the scary reality is that the current surveillance nexus in the United States bears a striking similarity to it. This is not to suggest that the two systems are identical, nor to play down the severity of the situation in China. Yet to ignore the evolving logic of surveillance capitalism in the West is to severely limit our collective ability to change it and prevent disaster.

Having lived in China as a foreigner, I am familiar with the cameras, the propaganda posters, and the omnipresence of political networks that guide the economy. Yet despite the signs of surveillance, I did not feel the overwhelming presence of the state that many dystopian accounts suggest. The point, however, is that a similar normalization of surveillance – an internalized acceptance – can be sensed in the United States as well.

Day-to-day life involves a mobile connectivity that even a decade ago was unprecedented. And this connectivity is generating data at a lightning pace. Since 2018, human beings have created more than 2.5 quintillion bytes of data each day – the result of our online searches, social-media posts, consumer habits, daily routines, and much, much more. Indeed, one widely cited study concluded that in 2018 alone, the world generated more data than in all of prior human history.

Regardless of whether this is accurate, it is undeniable that data will occupy a central role in the future. This world, however, did not come from nowhere. It was the result of a confluence of factors (historical, economic, political, etc) that, taken as a whole, is neither good nor bad but simply reality. Surveillance in the United States, needless to say, involves the state.

From COINTELPRO and the PATRIOT Act to Edward Snowden’s leaks and the countless whistleblower accounts spread throughout the corners of the Internet, “Big Data” is undeniably interwoven with the intelligence establishment.

Yet if US surveillance was born in Washington, it was certainly conceived in Silicon Valley.

Without the proliferation – and in some instances, direct support – of private Internet companies, US surveillance capacity would lack a massive amount of data on individuals. The point is that Google, Facebook and Amazon have accomplished what the US Defense Advanced Research Projects Agency (DARPA) never could: that is, create a service so attractive, inexpensive and useful that people don’t think twice about what happens on the back end when they upload a picture or purchase a new book on a website.

The collection, processing, sale and use of our personal information in massive data sets is exactly the type of ecosystem the Chinese government uses to regulate its people. And with fifth-generation telecommunications (5G) and the Internet of Things (IoT), the capacity for all this will greatly expand. In addition, AI and self-adaptive algorithms will further enable companies to process data more efficiently and accurately, which means more value will be extracted from a range of services and Web-based applications, reinforcing already dubious practices.

In the US, the state does not determine its citizens’ “social credit” score. We Americans are told that the traditions of free speech and freedom of the press ensure a baseline of liberty that, when compared with China at least, looks more benign and less dystopian. We are told that people ultimately determine the market and that the big digital players will respond to the wishes of the consumer. We are told …

And yet, Americans do have a social credit score.

Except that the “score” that determines one’s access to the social hierarchy is primarily the outcome of the market, not the state. It is the outcome of a world where recruitment, employment, law enforcement and education are processes of sorting, categorizing and, ultimately, acting upon data. The market is just as vicious as the state in determining who’s in and who’s out.

People are shocked when they learn about the extent to which data analytics sorts résumés, processes e-mails, and determines what is “acceptable” in the social-media world. As the sociologist Manuel Castells puts it, exclusion becomes the main fulcrum of power in the digital age. And both in the US and China, exclusion is the main game in digital space.

Take, for example, the multiple instances of Google’s ad platform discriminating against certain groups by using protected characteristics (race, gender, etc) as lead data indicators. Or the stories of how job-application algorithms have misinterpreted e-mails, misconstrued interview signals, or even misrepresented an applicant as a different, less qualified person.

These are just the mistakes. When the system really works, one’s place is the “natural reflection” of one’s aptitude and relative qualifications. And what really “qualifies” someone is hard to “quantify” – at least without the risk of our social indicators falling victim to personal bias or, in worse circumstances, to the whims of something people cannot control.

The point here is not to scream with naïveté at how the social hierarchy sorts the “winners” from the “losers.” Surveillance capitalism did not create exclusion, nor the way power networks determine “qualification” across social roles, institutions, and complex hierarchical structures. These have arguably existed since the beginning of complex social organization, have evolved over the course of history, and have adapted to new combinations of sources of power and control.

American life was all about who’s in and who’s out long before the Internet and data mining. Digitization has merely taken these qualia and processed them into discrete, quantifiable units – through the collection of behavioral data and the use of predictive indicators. And this transformation has occurred right under our eyes, through the very market that has offered free, unlimited access to the most powerful communication tool in the history of mankind.

The advent of AI and complex algorithmic processing presents many benefits, from smart cities to engaged civic activity, better decision-making and streamlined functional processes. But the possible downside effects of these technologies create problems that, if not addressed now, may become permanently ingrained in the state/economy nexus. And this should be avoided.

Many have rightly raised criticism and concern at the emerging reality of China’s futuristic surveillance state. Yet looking at the US, we see a daily barrage of disinformation, people and bots making implausible statements on digital platforms, and constant addiction to screens. All of this has produced a situation of constant surveillance – surveillance of our psychologies, our moods, our creative potential. And this has resulted in a practice not dissimilar to that of the Middle Kingdom.

Hunter Dorwart

Hunter Dorwart is an independent researcher living in Washington, DC. He explores issues on a range of topics including startup financing, international trade policy, artificial intelligence, and geopolitics. He is currently researching changes to international data privacy with the International Bar Association.
