
The Future of Privacy: More Data and More Choices

As I wrapped up my time at the Future of Privacy Forum, I prepared the following essay in advance of participating in a plenary discussion on the “future of privacy” at the Privacy & Access 20/20 conference in Vancouver on November 13, 2015 — my final outing in think tankery.

Alan Westin famously described privacy as the ability of individuals “to determine for themselves when, how, and to what extent information about them is communicated to others.” Today, the challenge of controlling, let alone managing, our information has strained this definition of privacy to the breaking point. As one former European consumer protection commissioner put it, personal information is not just “the new oil of the Internet” but is also “the new currency of the digital world.” Information, much of it personal and much of it sensitive, is now everywhere, and any individual’s ability to control it is limited.

Early debates over consumer privacy focused on the role of cookies and other identifiers on web browsers. Technologies that feature unique identifiers have since expanded to include wearable devices, home thermostats, smart lighting, and every type of device in the Internet of Things. As a result, digital data trails will flow from a broad range of sensors and will paint a more detailed portrait of users than previously imagined. If privacy was once about controlling who knew your home address and what you might be doing inside, our understanding of the word requires revision in a world where every device has a digital address and ceaselessly broadcasts information.

The complexity of our digital world makes explaining all of this data collection and sharing a huge challenge. Privacy policies must be either high-level and generic or technical and detailed; either option proves of limited value to the average consumer. Many connected devices have little capacity to communicate anything to consumers or passersby. And without meaningful insight, it makes sense to argue that our activities are now subject to the determinations of a giant digital black box. We see privacy conversations increasingly shift to discussions of fairness, equity, power imbalances, and discrimination.

No one can put the data genie back in the bottle. No one would want to. At a recent convening of privacy advocates, folks discussed the social impact of being surrounded by an endless array of “always on” devices, yet no one was willing to silence their smartphones for even an hour. It has become difficult, if not impossible, to opt out of our digital world, so the challenge moving forward is how to reconcile this reality with Westin’s understanding of privacy.

Yes, consumers may grow more comfortable with our increasingly transparent society over time, but survey after survey suggests that the vast majority of consumers feel powerless when it comes to controlling their personal information. Moreover, they want to do more to protect their privacy. This dynamic must be viewed as an opportunity. Rather than dour information management, we need better ways to express our desire for privacy. It is true that “privacy management” and “user empowerment” have been at the heart of efforts to improve privacy for years. Many companies already offer consumers an array of helpful controls, but one would be hard-pressed to convince the average consumer of this. The proliferation of opt-outs and plug-ins has done little to actually give consumers any feeling of control.

The problem is that few of these tools actually help individuals engage with their information in a practical, transparent, or easy way. The notion of privacy as clinging to control of our information against faceless entities leaves consumers feeling powerless and frustrated. Privacy needs some rebranding. Privacy must be “appified” and made more engaging. There is a business model to be found in marrying privacy and control in an experience that is simple and functional. Start-ups are working to answer that challenge, and the rise of ephemeral messaging apps, while not a perfect implementation, is a sure sign that consumers want privacy if they can get it easily. For Westin’s view of privacy to have a future, we need to do a better job of embracing creative, outside-the-box ways to get consumers thinking about and engaging with how their data is being used, secured, and ultimately kept private.

Social Listening and Monitoring of Students

The line between monitoring consumer sentiment in general and tracking individual customers is ill-defined. Companies need to understand public perceptions of different types of online tracking and the different concerns consumers raise. Monitoring by schools appears to be even more complex. In an opinion piece in Education Week, Jules Polonetsky and I discuss the recent revelation that Pearson—the educational testing and publishing company—was monitoring social media for any discussion by students of a national standardized test it was charged with administering. // Read more on Education Week.

Plunging Into the Black Box Society

Frank Pasquale’s The Black Box Society has been steadily moving up my reading list since it came out, but after Monday’s morning-long workshop on the topic of impenetrable algorithms, the book looks to be this weekend’s reading project. Professor Pasquale has been making the rounds for a while now, but when his presentation was paired with devastating real-world examples of how opaque credit scores harm consumers, and of regulators ill-equipped to address these challenges, U.S. PIRG Education Fund and the Center for Digital Democracy were largely successful in putting the algorithmic fear of God into me.

A few takeaways: first, my professional interest in privacy only occasionally intersects with credit reporting and the proliferation of credit scores, so it was alarming to learn that 25% of consumers have serious errors in their credit reports, errors large enough to impact their credit ratings. (PIRG famously concluded in 2004 that 79% of credit reports have significant errors.)

That’s appalling, particularly as credit scores are increasingly essential, as economic justice advocate Alexis Goldstein put it, “to avail yourself of basic opportunity.” Pasquale described the situation as a data collection architecture that is “defective by design.” Comparing the situation to malfunctioning toasters, he noted that basic consumer protection laws (and tort liability) would functionally prohibit toasters with a 20% chance of blowing up on toast-delivery, but we’ve become far more cavalier when it comes to data-based products. More problematic are the byzantine procedures for contesting credit scores and resolving errors.

Or even realizing your report has errors. I have taken to using up one of my free annual credit reports every three months with a different major credit reporting bureau, and while this routine makes me feel like a responsible credit risk, I’m not sure what good I’m doing. It also strikes me as disheartening that the credit bureaus have turned around and made “free” credit reports into both a business segment and something of a joke — who can forget the FreeCreditReport.com “band”?

Second, the Fair Credit Reporting Act, the first “big data” law, came out of the event looking utterly broken. At one point, advocates were describing how individuals in New York City had lost out on job opportunities due to bad or missing credit reports — and had frequently never received adverse action notices as required by FCRA. Peggy Twohig from the Consumer Financial Protection Bureau then discussed how her agency expected most consumer reporting agencies to have compliance programs, with basic training and monitoring, and quickly found many lacked adequate oversight or capacity to track consumer complaints.

And this is the law regulators frequently point to as strongly protective of consumers? Maybe some combination of spotty enforcement, lack of understanding, or data run amok is to blame for the problems discussed, but the FCRA is a forty-five-year-old law. I’m not sure ignorance and unfamiliarity are adequate explanations.

Jessica Rich, the Director of the FTC’s Bureau of Consumer Protection, conceded that there were “significant gaps” in existing law, and moreover, that in some respects consumers have limited ability to control information about them. This wasn’t news to me, but no one seemed to have any realistic notion of how to resolve the problem. There were a few ideas bandied back and forth, including an interesting exchange about competitive self-regulation, but Pasquale’s larger argument seemed to be that many of these proposals were band-aids on a much larger problem.

The opacity of big data, he argued, allows firms to “magically arbitrage…or rather mathematically arbitrage around all the laws.” He lamented “big data boosters” who believe data will be able to predict everything. If that’s the case, he argued, it is no longer possible to sincerely support sectoral data privacy regulation where financial data is somehow separate from health data, from educational data, from consumer data. “If big data works the way they claim it works, that calls for a rethink of regulation.” Or a black box over our heads?

Hate the Consumer Privacy Bill of Rights, but Love the Privacy Review Boards

Considering the criticism on all sides, it’s not a bold prediction to suggest the White House’s Consumer Privacy Bill of Rights is unlikely to go far in the current Congress. Yet while actual legislation may not be in the cards, the ideas raised by the proposed bill will impact the privacy debate. One of the bill’s biggest ideas is the creation of a new governance institution, the Privacy Review Board.

The bill envisions that Privacy Review Boards will provide a safety valve for innovative uses of information that strain existing privacy protections but could provide big benefits. In particular, when notice and choice are impractical and data analysis would be “not reasonable in light of context,” Privacy Review Boards could still permit data uses when “the likely benefits of the analysis outweigh the likely privacy risks.” This approach provides a middle-ground between calls for permissionless innovation, on one hand, and blanket prohibitions on innovative uses of information on the other.

Instead, Privacy Review Boards embrace the idea that ongoing review processes, whether external or internal, are important and are a better way to address amorphous benefits and privacy risks. Whatever they ultimately look like, these boards can begin the challenging task of specifically confronting the ethical qualms being raised by the benefits of “big data” and the Internet of Things.

This isn’t a novel idea. After all, the creation of formal review panels was one of the primary responses to ethical concerns with biomedical research. Institutional review boards, or IRBs, have now existed as a fundamental part of the human research approval process for decades. IRBs are not without their flaws. They can become overburdened and bureaucratic, and the larger ethical questions can be replaced by a rigid process of checking-off boxes and filling out paperwork. Yet IRBs have become an important mechanism by which society has come to trust researchers.

At their foundation, IRBs reflect an effort to infuse research with several overarching ethical principles identified in the Belmont Report, which serves as a foundational document in ethical research. The report’s principles of respect for persons, beneficence, and justice embody the ideas that researchers (1) should respect individual autonomy, (2) maximize benefits to the research project while minimizing risks to research subjects, and (3) ensure that costs and benefits of research are distributed fairly and equitably.

Formalizing a process of considering these principles, warts and all, went a long way toward alleviating fears that medical researchers lacked rules. Privacy Review Boards could do the same today for consumer data in the digital space. Consumers feel like they lack control over their own information, and they want reassurances that their personal data is only being used in ways that ultimately benefit them. Moreover, calls to develop these sorts of mechanisms in the consumer space are also not new. In response to privacy headaches, companies like Facebook and Google have already instituted review panels that are designed to reflect different viewpoints and encourage careful consideration.

Establishing the exact requirements for Privacy Review Boards will demand flexibility. The White House’s proposal offers a litany of different factors to consider. Specifically, Privacy Review Boards will need to have a degree of independence and also possess subject-matter expertise. They will need to take the sizes, experiences, and resources of a given company into account. Perhaps most challenging, Privacy Review Boards will need to balance transparency and confidentiality. Controversially, the proposed bill places the Federal Trade Commission in the role of arbiter of a board’s validity. While it would be interesting to imagine how the FTC could approach such a task, the larger project of having more ethical conversations about innovative data use is worth pursuing, and perhaps the principles put forward in the Belmont Report can provide a good foundation once more.

The principles in the Belmont Report already reflect ideas that exist in debates surrounding privacy. For example, the notion of respect for persons echoes privacy law’s emphasis on fair notice and informed choice. Beneficence stresses the need to maximize benefits and minimize harms, much like existing documentation on the FTC’s test for unfair business practices, and justice raises questions about the equity of data use and considerations about unfair or illegal disparate impacts. If the Consumer Privacy Bill of Rights accomplishes nothing else, it will have reaffirmed the importance of a considered review process. Privacy Review Boards might not have all the answers – but they are in a position to monitor data uses for problems, promote trust, and ultimately, better protect privacy.

Big Data: Catalyst for a Privacy Conversation

This week, the Indiana Law Review released my short article on privacy and big data that I prepared after the journal’s spring symposium. Law and policy appear on the verge of redefining how they understand privacy, and data collectors and privacy advocates are trying to present a path forward. The article discusses the rise of big data and the role of privacy in both the Fourth Amendment and consumer contexts. It explores how the dominant conceptions of privacy as secrecy and as control are increasingly untenable, leading to calls to focus on data use or respect the context of collection. I quickly argue that the future of privacy will have to be built upon a foundation of trust—between individuals and the technologies that will be watching and listening. I was especially thrilled to see the article highlighted by The New York Times’ Technology Section Scuttlebot.

No Privacy/No Control

This week, the Pew Research Center released a new report detailing Americans’ attitudes about their privacy. I wrote up a few thoughts, but my big takeaway is that Americans both want and need more control over their personal information. Of course, the challenge is helping users engage with their privacy, i.e., making privacy “fun,” which anyone will tell you is easier said than done. Then again, considering we’ve found ways to make everything from budgeting to health tracking “fun,” I’m unsure what’s stopping industry from finding some way to do it. // More on the Future of Privacy Forum blog.

Developing Consensus on the Ethics of Data Use

Information is power, as the saying goes, and big data promises the power to make better decisions across industry, government, and everyday life. Data analytics offers an assortment of new tools to harness data in exciting ways, but society has been slow to engage in a meaningful analysis of the social value of all this data. The result has been something of a policy paralysis when it comes to building consensus around certain uses of information.

Advocates noted this dilemma several years ago during the early stages of the effort to develop a Do Not Track (DNT) protocol at the World Wide Web Consortium. DNT was first proposed seven years ago as a technical mechanism to give users control over whether they were being tracked online, but the protocol remains a work in progress. The real issue lurking behind the DNT fracas was not any sort of technical challenge, however, but rather the fact that the ultimate value of online behavioral advertising remains an open question. Industry touts the economic and practical benefits of an ad-supported Internet, while privacy advocates maintain that targeted advertising is somehow unfair. Without any efforts to bridge that gap, consensus has been difficult to reach.

As we are now witnessing in conversations ranging from student data to consumer financial protection, the DNT debate was but a microcosm of larger questions surrounding the ethics of data use. Many of these challenges are not new, but the advent of big data has made the need for consensus ever more pressing.

For example, differential pricing schemes – or price discrimination – have increasingly become a hot-button issue. But charging one consumer a different price than another for the same good is not a new concept; in fact, it happens every day. The Wall Street Journal recently explored how airlines are the “world’s best price discriminators,” noting that what an airline passenger pays is tied to the type of people they’re flying with. As a result, it currently costs more for U.S. travelers to fly to Europe than vice versa because the U.S. has a stronger economy and quite literally can afford higher prices. Businesses are in business, after all, to make money, and at some level, differential pricing makes economic sense.

However, there remains a basic concern about the unfairness of these practices. This has been amplified by perceived changes in the nature of how price discrimination works. The recent White House “Big Data Report” recognized that while there are perfectly legitimate reasons to offer different prices for the same products, the capacity for big data “to segment the population and to stratify consumer experiences so seamlessly as to be almost undetectable demands greater review.” Customers have long been sorted into different categories and groupings. Think urban or rural, young or old. But big data has made it markedly easier to identify the characteristics that can be used to ensure every individual customer is charged based on their exact willingness to pay, as the sketch below illustrates.
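To make the mechanics concrete, here is a minimal, purely hypothetical sketch; the shoppers, dollar figures, and `personalized_price` function are invented for illustration and are not drawn from any real retailer’s system. It simply contrasts a single posted price with prices pegged to each customer’s predicted willingness to pay.

```python
# Hypothetical illustration of segment-of-one pricing vs. a uniform price.
# All names and numbers are invented for this sketch.

customers = {
    "shopper_a": 8.00,   # predicted willingness to pay, in dollars
    "shopper_b": 12.50,
    "shopper_c": 19.00,
}

UNIFORM_PRICE = 10.00

def personalized_price(willingness_to_pay: float, margin: float = 0.95) -> float:
    """Charge just under the model's estimate of what this shopper will bear."""
    return round(willingness_to_pay * margin, 2)

# Uniform pricing only captures shoppers whose willingness to pay exceeds the posted price.
uniform_revenue = sum(UNIFORM_PRICE for wtp in customers.values() if wtp >= UNIFORM_PRICE)

# Personalized pricing extracts (nearly) every shopper's full surplus.
personalized_revenue = sum(personalized_price(wtp) for wtp in customers.values())

print(f"Uniform pricing revenue:      ${uniform_revenue:.2f}")
print(f"Personalized pricing revenue: ${personalized_revenue:.2f}")
```

In this toy example the uniform price leaves one shopper out and leaves money on the table with the others, while the personalized prices quietly capture the difference, which is precisely the “almost undetectable” stratification the White House report flags.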

The Federal Trade Commission has taken notice of this shift and has begun a much-needed conversation about the ultimate value of these practices. At a recent discussion on consumer scoring, Rachel Thomas from the Direct Marketing Association suggested that companies have always tried to predict customer wants and desires. What’s truly new about data analytics, she argued, is that it offers the tools to actually get predictions right and to provide “an offer that is of interest to you, as opposed to the person next to you.” While some would argue this is a good example of market efficiency, others worry that data analytics can be used to exploit or manipulate certain classes of consumers. Without a good deal more public education and transparency on the part of decision-makers, we face a future where algorithms will drive not just predictions but decisions that will exacerbate socio-economic disparities.

The challenge moving forward is twofold. Many of the more abstract harms allegedly produced by big data are fuzzy at best – filter bubbles, price discrimination, and amorphous threats to democracy are hardly traditional privacy harms. Moreover, few entities are engaging in the sort of rigorous analysis necessary to determine whether a given data use will actually produce these harms.

According to the White House, technological developments necessitate a shift in privacy thinking and practice toward responsible uses of data rather than its mere collection and analysis. While privacy advocates have expressed skepticism of use-based approaches to privacy, increased transparency and accountability mechanisms have been proposed as ways to further augment privacy protections. Developing broad-based consensus around data use may be more important still.

Consensus does not mean unanimity, but it does require a conversation that considers the interests of all stakeholders. One proposal that could help drive consensus is the development of internal review boards or other multi-stakeholder oversight mechanisms. Looking to the long-standing work of institutional review boards, or IRBs, in the field of human subject testing, Ryan Calo suggested that a similar structure could be used as a tool to infuse ethical considerations into consumer data analytics. IRBs, of course, engage in a holistic analysis of the risks and benefits that could result from any human testing project. They are also made up of different stakeholders, encompassing a wide variety of backgrounds and professional expertise. These boards also come to a decision before a project can be pursued.

Increasingly, technology is leaving policy behind. While that can both promote innovation and ultimately benefit society, it makes the need for consensus about the ethics at stake all the more important.

Big Data Privacy Bingo

With the White House’s Big Data and Privacy Review anticipated any day now, I figured it was long past time to put together a quick #bigdataprivacy bingo card. If you go to enough privacy (or big data) events and workshops, you’ll quickly realize how many of the same buzzwords and anecdotes get cited over and over . . . and over again. In the battle between privacy and innovation, bingo may be the only thing that wins.

White House/MIT Big Data Privacy Workshop Recap

Speaking for everyone snowed in in DC, White House Counselor John Podesta remarked that “big snow trumped big data” while on the phone to open the first of the Obama Administration’s three big data and privacy workshops.  This first workshop, which I was eager to attend (if only to continue my streak of annual appearances in Beantown), focused on advancing the “state of the art” in technology and practice.  For a mere lawyer such as myself, I anticipated a lot of highly technical jargon, and in that regard I was not disappointed. // Full recap on the Future of Privacy Forum Blog.

Common Sense Media Student Privacy Summit All About Self-Regulation

The biggest takeaway from Common Sense Media’s School Privacy Zone Summit was, in the words of U.S. Secretary of Education Arne Duncan, that “privacy needs to be a higher priority” in our schools.  According to Duncan, “privacy rules may be the seatbelts of this generation,” but getting these rules right in sensitive school environments will prove challenging.  As the Family Educational Rights and Privacy Act (FERPA), one of the nation’s oldest privacy laws, turns forty this year, what seems apparent is that schools lack both the resources and the training necessary to even understand today’s digital privacy challenges surrounding student data.

Dr. Terry Grier, Superintendent of the Houston Independent School District, explained that his district of 225,000 students is getting training from a 5,000-student district in North Carolina.  The myriad of school districts, varying sharply in wealth and size, has made it impossible for educators to define rules and expectations for how student data can be collected and used.

Moreover, while privacy advocates charge that schools have effectively relinquished control over their students’ information, several panelists noted that we haven’t yet decided who the ultimate custodian of student data even is.  One initial impulse might be to analogize education records to HIPAA health records, which belong to a patient, but Cameron Evans, CTO of education at Microsoft, suggested that it might be counterproductive to think of personalized education data as strictly comparable to individual health records.  On top of this dilemma, questions about how to communicate and inform parents have proven difficult to answer as educational technology shifts rapidly, resulting in a landscape that one state educational technology director described as the “wild wild west.”

There was wide recognition by both industry participants at the summit and policymakers that educational technology vendors need to establish best practices – and soon.  Secretary Duncan noted there was a lot of energy to address these issues, and that it was “in the best interest of commercial players to be self-policing.”  The implication was clear: begin establishing guidelines and helping schools now or face government regulation soon.

Average Folks and Retailer Tracking

Yesterday evening, I found myself at the Mansion on O Street, whose eccentric interior, filled with hidden doors, secret passages, and bizarrely themed rooms, seemed as good a place as any to hold a privacy-related reception. The event marked the beta launch of my organization’s mobile location tracking opt-out.  Mobile location tracking, which is being implemented across the country by major retailers, fast food companies, malls, and the odd airport, first came to the public’s attention last year when Nordstrom informed its customers that it was tracking their phones in order to learn more about their shopping habits.

Today, the Federal Trade Commission hosted a morning workshop to discuss the issue, featuring representatives from analytics companies, consumer education firms, and privacy advocates. The workshop presented some of the same predictable arguments about lack of consumer awareness and ever-present worries about stifling innovation, but I think a contemporaneous conversation I had with a friend better highlights some of the privacy challenges mobile analytics presents.  Names removed to protect privacy, of course!

Technology Policy Institute Tackles Big Data

A recent paper by the Technology Policy Institute takes a pro-business look at the Big Data phenomenon, finding “no evidence” that Big Data is creating any sort of privacy harms.  As I hope to lay out, I didn’t agree with several of the report’s findings, but I found the paper especially interesting as it critiques my essay from September’s “Big Data and Privacy” conference.  According to TPI, my “inflammatory” suggestion that ubiquitous data collection may harm the poor was presented “without evidence.” Let me first say that I’m deeply honored to have my writing critiqued; for better or worse, I am happy to have my thoughts somehow contribute to a policy conversation.  That said, while some free market voices applauded the report as a thoughtful first step at a Big Data cost-benefit analysis, I found the report to be one-sided to its detriment.

As ever in the world of technology and law, definitions matter, and neither I nor TPI can adequately define what “Big Data” even is.  Instead, TPI suggests that the Big Data phenomenon describes the fact that data is “now available in real time, at larger scale, with less structure, and on different types of variables than previously.”  If I wanted to be inflammatory, I would suggest this means that personal data is being collected and iterated upon pervasively and continuously.  The paper then does a good job of exploring some of the unexpected benefits of this situation.  It points to the commonly lauded Google Flu Trends as the poster child for Big Data’s benefits, but neglects to mention the infamous example in which Target uncovered that a teenage customer was pregnant before her family knew.

At that point, the paper looks at several common privacy concerns surrounding Big Data and attempts to debunk them. Read More…

Recapping EPIC’s Failing the Grade Educational Privacy Event

The arrival of new technologies in the field of education, from connected devices and student longitudinal data systems to massive open online courses (MOOCs), presents both opportunities and potential privacy risks for students and educators.  As part of my work at the Future of Privacy Forum, I have started surveying the issue of privacy in education, and early, anecdotal conversations suggest a pressing need for more education and awareness among all stakeholders.  With that in mind, I was pleased to see the Electronic Privacy Information Center (EPIC) host an informative discussion on education records and student privacy.

The focus of the discussion was the growing “datafication” of students’ personal information.  Sen. Edward Markey (D-Mass.), who has been active in the field of children’s privacy, opened the event with an introduction to the topic area.  In addition to discussing his Do Not Track Kids legislation, which would extend COPPA-type protections to 13-, 14-, and 15-year-olds, the Senator highlighted his new student privacy legislation.  The goals of the legislation were explained as follows:

  1. Student data should never be available for commercial purposes (focus on advertising);
  2. Parents should have access and rectification rights to data held by private companies, similar to what is afforded for records held by schools;
  3. Safeguards should be put in place to ensure that there are real protections for student records held by third parties; and
  4. Private companies must delete information that they no longer need. Student records should not be held permanently by companies, only by parents.

The panel itself featured Marc Rotenberg and Khaliah Barnes of EPIC; Kathleen Styles, Chief Privacy Officer at the Department of Education (DOE); Joel Reidenberg of Fordham Law School; Deborah Peel of Patient Privacy Rights; and Pablo Molina, Chief Information Officer at Southern Connecticut State University.

Read More…
