Daniel J. Solove, Professor of Law, George Washington University Law School, and Paul Schwartz, Professor of Law, University of California, Berkeley School of Law.
Personally identifiable information (PII) is one of the most central concepts in information privacy regulation. The scope of privacy laws typically turns on whether PII is involved. The basic assumption behind the applicable laws is that if PII is not involved, then there can be no privacy harm. At the same time, there is no uniform definition of PII in information privacy law. Moreover, computer science has shown that the very concept of PII can be highly malleable. Because PII defines the scope of so much privacy regulation, the concept of PII must be rethought. Professors Paul Schwartz (Berkeley Law School) and Daniel Solove (George Washington University Law School) will argue that PII cannot be abandoned; the concept is essential as a way to define regulatory boundaries. Instead, they will propose a new conception of PII, one that will be far more effective than current approaches.
Daniel is the founder of the organisation TeachPrivacy.
They introduced themselves as the Bert & Ernie of the privacy world.
Technology changes the meaning of PII; it is a moving target. PII is a central concept in privacy law and is often the trigger for when privacy law applies. Unfortunately, there is no consistent definition of, or approach to, PII in the law.
The three approaches to PII in the US
Tautological approach
PII is information that identifies a person. This is not particularly useful, as it is circular logic. The distinction between "identified" and "identifiable" also means the burden of proof is on the claimant to show that the information clearly identified them, not merely that it risks identifying them.
Non-public approach
The problem here is that there is no clear definition of what "non-public" actually means. There is a huge grey area, and this becomes an ineffective trigger.
Specific types approach
This is a rule, as opposed to the prior two standards. It attempts to enumerate and list the specific PII types; the US children's privacy legislation (COPPA) does this. The problem is that this is a static and inflexible approach applied to a moving target. Many of these statutes become under-inclusive when it comes to information that actually could identify a person.
PIPEDA uses the term "identifiable" data and is fairly broad in its application to PII. The problem is less the definition than that the approach becomes all-or-nothing under Canadian legislation. This mirrors EU legislation.
Problems of de-identification.
A case in point is the Netflix Prize dataset, where supposedly anonymous rating data was readily re-identified by a third-party research group by cross-correlating it against data publicly available on IMDb.
We are seeing more and more data about people out there, and the ability to link it up to create correlations is becoming easier. The more information about you there is on the Internet, the harder it is to remain anonymous. The claim is that the combination of a ZIP code, birthdate and gender can identify 80% of the US population; my seatmate, a seasoned privacy expert, quietly calls BS on that claim at our table.
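To make the linkage idea concrete, here is a minimal sketch (mine, not the speakers') of how a "de-identified" dataset might be joined to a public one on quasi-identifiers such as ZIP code, birthdate and gender. All records and field names are hypothetical.

```python
# Hypothetical illustration of a linkage (re-identification) attack:
# join a "de-identified" dataset to a public dataset on shared quasi-identifiers.
# All records and field names are invented for the example.

deidentified_records = [
    # names removed, but quasi-identifiers retained
    {"zip": "90210", "birthdate": "1975-03-14", "gender": "F", "diagnosis": "asthma"},
    {"zip": "10001", "birthdate": "1982-11-02", "gender": "M", "diagnosis": "diabetes"},
]

public_records = [
    # e.g. a voter roll or public profile with names attached
    {"name": "Alice Example", "zip": "90210", "birthdate": "1975-03-14", "gender": "F"},
    {"name": "Bob Example", "zip": "10001", "birthdate": "1982-11-02", "gender": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birthdate", "gender")

def linkage_key(record):
    """Build the join key from the quasi-identifier fields."""
    return tuple(record[field] for field in QUASI_IDENTIFIERS)

# Index the public data by quasi-identifiers, then look up each "anonymous"
# record: a match re-attaches a name to supposedly de-identified data.
public_index = {linkage_key(r): r for r in public_records}

for record in deidentified_records:
    match = public_index.get(linkage_key(record))
    if match:
        print(f"{match['name']} appears to be the person with {record['diagnosis']}")
```

The point of the toy example is that the "anonymous" dataset never needed names to be revealing; the quasi-identifiers did the work.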
The scholars provide us with a spectrum of risk of identification based on their theory.
A University of Colorado professor is quoted as comparing PII to a game of whack-a-mole, arguing that we should instead regulate the flow of information. However, without some concept of PII, privacy law would have to regulate all data, not just the sensitive data.
Google Flu Trends was cited as an example of the use of de-identified data in the medical field as a public service.
PII 2.0 is the proposed solution to these dilemmas, based on three tenets:
Identifiability is a continuum of risk.
The approach should be a standard, not a rule.
Privacy should not be a hard on/off switch, but a tailored solution.
The theory moves from the current two categories of PII to three; a rough classification sketch follows the list.
Identified - the specific person has been ascertained and the information must be protected. Identifiable data is treated the same way when there is a significant probability of linkage to a specific person.
Identifiable - specific identification is possible but has not yet occurred; this data must also be protected and audited.
Non-identifiable - only a remote risk of identification; the need for protection is minor.
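One way to picture the continuum-of-risk idea (my sketch, not the speakers'): treat the category as a function of an estimated linkage probability. The threshold values below are illustrative assumptions, not figures from the talk.

```python
from enum import Enum

class PIICategory(Enum):
    IDENTIFIED = "identified"              # person ascertained, or linkage highly probable
    IDENTIFIABLE = "identifiable"          # linkage possible but not yet made
    NON_IDENTIFIABLE = "non-identifiable"  # only a remote risk of linkage

# Threshold values are illustrative assumptions, not from the talk.
SIGNIFICANT_LINKAGE_RISK = 0.5
REMOTE_LINKAGE_RISK = 0.01

def classify(already_identified: bool, linkage_probability: float) -> PIICategory:
    """Map a dataset onto the PII 2.0 categories from an estimated re-identification risk."""
    if already_identified or linkage_probability >= SIGNIFICANT_LINKAGE_RISK:
        return PIICategory.IDENTIFIED
    if linkage_probability > REMOTE_LINKAGE_RISK:
        return PIICategory.IDENTIFIABLE
    return PIICategory.NON_IDENTIFIABLE

# Example: a dataset whose quasi-identifiers give an estimated 30% chance of
# linking records back to individuals still needs protection and auditing.
print(classify(already_identified=False, linkage_probability=0.30))  # PIICategory.IDENTIFIABLE
```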
The speakers cite the dangers of the "release and forget" approach, and agree that there is a need for a track and audit approach coupled with risk assessments for identified and identifiable data.
This approach is compatible with the methodology of privacy by design, embedding privacy constraints and models into technological design and business practices.
Summing up, the presenters state that there is still great legal uncertainty about the concept of PII worldwide, which makes the impact of privacy law on business hard to predict and therefore a source of business risk.
In the end, the PII 2.0 concept is about the taxonomy of PII, intended to help organisations understand whether they are subject to privacy laws within given geo-political boundaries and constraints.
One delegate challenged that creating these categories in a vacuum, divorced from practical application, is of limited value. The response was that the first two categories put the onus on regulatory regimes and businesses to be responsible about how they classify data.
Questions were raised about the practicality of data moving from one category to another over time, and how this could be managed for track-and-audit purposes. The response was that de-identification should be the rule, not an option, for organisations holding data that is to be published. I'm not certain this really answered the question.
The discussion was fairly esoteric, and likely offers something of use within legal circles, but moderate to low value in practical application in the technology world until legislation applies clearer boundaries to the PII containers, which is what these gents are trying to encourage.
Location: 13th Privacy & Security Conference