The murder of a woman in Noida was captured by a CCTV camera. Photo: iStock

Would you act differently if you knew somebody was constantly watching you, analyzing your behavior and emotions? People will have to answer this question soon. As facial recognition and other biometric technologies become more prominent in a range of services, from password authentication to Internet of Things (IoT) devices, their pervasiveness in our daily lives is becoming disturbing.

And this says nothing about how fast they are spreading.

Surveillance in retail stores that analyzes shoppers' behavior, facial-recognition technology that expedites check-in for flights and hotels, targeted-marketing algorithms that deliver personalized ads by scanning a customer's face – these are just a few examples, the tip of the iceberg, of how industries are using artificial-intelligence (AI) technology.

With little or no regulation and a fast-growing market, the list is growing at exponential speed. The technology is cheap and requires little fixed capital outlay. Moreover, ever-stronger cloud-computing capacity supports the petabytes of data that biometric identity systems need.

Much criticism has been leveled against facial-recognition technology, and for good reason. These technologies raise serious privacy concerns in the emerging Big Data ecosystem.

Surveillance is not new, but combined with digital technologies and constant online monitoring, it makes dystopian visions seem more palpable and credible. The point that is often missed, however, is that we may already be too late to change anything.

While academics and civil society sit and discuss the implications of disruptive technologies, industry is busy supplying facial-recognition technologies on a mass scale.

And more governments are beginning to deploy questionable AI surveillance systems. Recent research found that 75 of 176 countries surveyed are using AI technologies for surveillance purposes, including facial recognition (64 countries), safe-city platforms (56) and predictive policing (52).

To be sure, facial-recognition technologies can bring positive results. For example, in the health-care industry, these technologies aid in the detection of genetic disorders. Yet on the whole, they pose more risk than benefit in the absence of public policy.

And most disturbing is the deployment of AI surveillance systems by governments that pitch them under the guise of greater public safety. Absent scrutiny, this justification can become a slippery slope, especially in autocratic regimes.

Yet the real fear is that AI surveillance is beginning to create (or reinforce) a type of digital authoritarianism even in so-called liberal democracies.

Take, for example, the UK, where more than 6 million surveillance cameras give the country the second-highest number of cameras per citizen in the world. Not surprisingly, the leading country is China, whose population is roughly 20 times that of the UK.

China not only has the highest number of cameras per person, it has also been a major supplier and driver of surveillance technologies globally. Chinese companies provide AI surveillance technology to at least 63 countries, and Huawei alone has government contracts with more than 50 of them.

These technologies are funded with Chinese state loans, or in some cases even donations. Indeed, all of this raises concerns about how a digital dystopia may spread through global development programs and capital markets.

Our data-driven future of biometric technologies and artificial intelligence forces us to re-examine core concepts such as the social contract and people's ability to govern themselves and participate in public dialogue.

Technological pessimism should not prevent us from realizing that we can, and in some circumstances must, redefine the emerging power relationships at the heart of digital technology. Yet optimism should not blind us to the reality and the uphill struggle we face.

Olena Mykhalchenko is a Fulbright and Edmund S Muskie Scholar, focusing on the intersection of artificial intelligence and human rights, the future of work in the gig economy, and the social impact of Industry 4.0. Currently she is a consultant at Datamize, advising on data privacy and algorithmic ethics.
