A recent series of articles has shown how research linking Facebook’s demographic data to personality traits was shared and may have led to unauthorized access to the data of up to 50 million Facebook users. While Facebook has denied that this is a data breach, it has suspended the accounts of the parties involved for not abiding by its rules.
The episode, however, raises multi-layered issues that require urgent attention in framing data-protection law in India.
Data breach and Cambridge Analytica
Aleksandr Kogan, a psychology professor at the University of Cambridge (no connection to Cambridge Analytica), gained access to the personal data of users on Facebook through an app called “thisisyourdigitallife,” which was portrayed as a research tool for psychologists for academic purposes.
The app was downloaded by about 270,000 people. Through it, Kogan collected information on users who took the tests. He obtained not only information such as the city they lived in or the content they liked, but also information on their friends who did not have strict privacy settings.
This data was shared with Cambridge Analytica in breach of Facebook’s platform policies. Facebook admitted in its statement that it became aware of this unauthorized use in 2015, and subsequently asked Cambridge Analytica to delete the data. Facebook did not see fit to alert users to this use of their data, and took very limited steps to secure it, merely seeking certifications that the information had been destroyed.
Issues for data protection law
This entire affair raises two important questions about data-protection laws globally, and particularly for countries like India that are in the process of framing their laws on privacy regarding data protection.
First, the delayed and limited actions taken by Facebook upon becoming aware of the unauthorized sharing of data raise questions about how such breaches may be regulated. Facebook’s claim that this was not a data breach is premised on the claim that the data was harvested legitimately, after obtaining consent from users. This is reminiscent of several data-security incidents in India, where public collectors of data have claimed that by securing only one key point in a data ecosystem, while ignoring others, they have adequately discharged their data-security obligations.
Such a response draws from the tendency in data-protection regulations to focus solely on data-collection practices (by providing notice and obtaining consent), and not pay enough heed to subsequent processing, sharing and use of the data.
The failure of a company with the resources of Facebook to enforce its platform standards, and the very limited steps it took upon becoming aware of the breach, are of extreme concern in this case. It reflects a growing tendency among data controllers to build data-driven business models and analytics that depend on sharing data with various actors in the data ecosystem, while taking responsibility for only one end of the data flow (the point at which they collect the data).
Second, this poses questions about the scope of data-protection laws. Definitions of personal data that are too prescriptive, such as the catalogue approach used in the Massachusetts breach-notification statute or the Information Technology Act in India, are unduly restrictive and likely to become outdated every few years.
A better model is to look at three kinds of data being captured – volunteered data (data actively provided by individuals such as details in a form when they sign up for a service); observed data (behavioral data generated through an individual’s use of the service); and inferred data (data neither actively nor passively provided by the individual, but arrived at through analysis of collected data).
Rethinking privacy principles
Data-protection laws emerged in a world that saw a preponderance of volunteered data. However, as the bulk of the data now collected and traded is either observed or inferred, serious questions arise about whether these traditional frameworks remain meaningful. The idea of privacy as control is what finds articulation in data-protection policies across jurisdictions, beginning with the Fair Information Practice Principles (FIPPs) in the United States.
The redressal mechanisms available, such as the right to access, notification and opt-out, would also be of limited value in these contexts. The traditional choice users have had against the collection of personal data, at least in theory, is to “opt out” of certain services. This draws from the free-market theory that individuals exercise their free will when they use services and always retain the choice of opting out, an argument against regulation that relies instead on the collective wisdom of the market to weed out harms.
The proliferation of Internet-enabled devices, their integration into the built environment and the real-time nature of data collection and analysis, however, are beginning to undermine this concept. The ubiquity of data-collection points, as well as the compulsory provision of data as a prerequisite for the access and use of many key online services, is making opting out of data collection not only impractical but in some cases impossible.
In this context, it is necessary to rethink the regulation of data processing and to ensure that the principles of consent and purpose limitation are not mere formalities but are implemented so as to make them meaningful. At the other end, the focus of data-protection regulations must move toward the use and processing of data, by employing the “legitimate interest” principle and adopting impact assessments and harms-based approaches.
The Facebook breach highlights once again that data is a toxic asset, and that continuing to hoard it even after its purpose has been met is always fraught with risk. The fact that, in this case, users could put not only their own privacy at risk but also that of their friends reminds us that privacy needs to be seen as a social good, not simply as a tradeoff in a private transaction.