Opinion

Safe and Secure VR: Policy Issues Impacting Kids’ Use of Immersive Tech

After Oculus Quest commercials blanketed the airwaves before the holidays, a number of folks at Common Sense Media raised concerns about Facebook’s take on virtual reality. I decided to seize on this interest to offer up some thoughts on how to improve virtual reality for kids, putting out a short paper: Safe and Secure VR: Policy Issues Impacting Kids’ Use of Immersive Tech.

To guide tech companies’ decisions as they create immersive content aimed at kids, I suggest several ways to ensure kids experience these technologies in a safe, secure, and responsible environment, including:

  1. Parental controls should be effective and account for the unique features of VR games, such as their immersive nature, for example by providing clear time-limit mechanisms to prevent overuse (a rough sketch of what such a mechanism could look like follows this list).
  2. VR platforms must create safer virtual environments. We need a strong set of standards for rating and moderating VR experiences so families can choose what is appropriate for their children.
  3. Companies must step up their protection of kids’ data, especially because immersive tech like VR requires the collection of so much sensitive behavioral information.
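
To make the time-limit recommendation concrete, here is a minimal sketch, in Python, of what a daily play-time limit inside a VR parental-controls service could look like. The class names, thresholds, and return values are hypothetical illustrations for this post, not any platform’s actual API.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TimeLimitPolicy:
    """Hypothetical per-child limit configured by a parent."""
    daily_limit_minutes: int = 60        # parent-set daily cap
    warning_threshold_minutes: int = 10  # warn this long before the cap

@dataclass
class SessionTracker:
    """Tallies minutes played per day and decides when to warn or stop."""
    policy: TimeLimitPolicy
    minutes_played: dict = field(default_factory=dict)  # date -> minutes

    def record_play(self, minutes, today=None):
        today = today or date.today()
        total = self.minutes_played.get(today, 0) + minutes
        self.minutes_played[today] = total
        remaining = self.policy.daily_limit_minutes - total
        if remaining <= 0:
            return "stop"   # the platform should pause or end the session
        if remaining <= self.policy.warning_threshold_minutes:
            return "warn"   # surface an in-headset break reminder
        return "ok"

# Example: a 60-minute daily limit with a 10-minute warning window.
tracker = SessionTracker(TimeLimitPolicy())
print(tracker.record_play(45))  # "ok"
print(tracker.record_play(10))  # "warn" (55 of 60 minutes used)
print(tracker.record_play(10))  # "stop" (limit exceeded)
```

The point is less the code than the design choice: the limit, the warning, and the enforcement all live with the platform, where a parent can see and adjust them.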

I remain thankful to Lindsey Barrett, Mary Berk, Jon Brescia, Jeff Haynes, Girard Kelly, Joe Newman, and Jenny Radesky, colleagues and VR enthusiasts all, for their thoughtful feedback and their willingness to read the paper.

// Download the full paper here

Project Aria and Mapping Augmented Reality

On the heels of Facebook’s announcement that Reality Labs would be deploying smart glasses both to assist in mapping and to create a “digital twin” of the real world, I wrote for Slate:

Maps hold tremendous power. They not only help people navigate the world, but they also establish boundaries and shape our perceptions. Mapping technology is equally important. Global navigation systems are military assets, and Apple publicly apologized for the shaky launch of its mapping app in 2012. We have gotten used to mapping roads, but AR changes the game by encouraging us to map every square foot of space on the planet.

// Read the piece at Slate here

Some Initial Ideas on Improving Privacy in AR, VR, and XR

The time to begin developing XR privacy guidelines and controls is now. Growing numbers of consumers are worried about how data collected via VR headsets and AR apps is used, and privacy compliance has emerged as the top legal risk impacting XR companies. XR industry surveys have found that companies are more concerned with consumer privacy and data security than with product liability, health and safety, or intellectual property.

In this post for IAPP’s Privacy Perspectives, I offer some initial areas that should be top of mind. As a privacy advocate and XR enthusiast, I suggest there’s a real need for AR/VR platforms and developers to (1) improve transparency and begin making XR-specific data disclosures, (2) embrace transparency reporting and technical solutions to restrain data sharing, and (3) commit to diversity and inclusion.

Privacy and Private Rights of Action

As Congress continues to slog through the process of crafting a comprehensive federal privacy framework, two intractable issues have emerged: federal preemption and private rights of action. These two issues are intertwined because they get at the core of how privacy rights and obligations should be enforced. While preemption has received most of the attention, a carefully constructed private right of action could also play an important role in advancing privacy rights at the national level. To date, however, any inclusion of a private right of action has been treated as an all-or-nothing proposition.

Privacy advocates recommend individuals be permitted to privately enforce federal privacy protections through a statutory private right of action without any showing of harm. Meanwhile, industry-friendly proposals treat private rights of action as a non-starter. Both sides are locked into absolutist positions, and lawmakers’ efforts to craft an impactful privacy law have been hurt in the process.

In this post for IAPP’s Privacy Perspectives, I get into the nuance of private enforcement and offer up several ideas for how lawmakers could incorporate private rights of action into a national privacy law.

Congress, What About the Data Brokers?

In light of Congress’ decidedly tech-focused privacy hearings, I thought it important to call attention to the surprising absence of data brokers from the conversation:

[D]ata brokers speak of “ethically sourced” data and “enhanced transparency” through self-regulation. The reality is that while many companies now collect a whole lot of our information, there’s really only one industry that doesn’t want us to know much about it in exchange. Data brokers may know everything about you—but they still don’t want you to know about them.

// Read the piece at Slate here

EU Privacy Rules Will Not “Kill People”

With the GDPR recently coming into effect, U.S. policymakers and industry players have found opportunities to critique the GDPR on the grounds that it will somehow harm health care. The U.S. secretary of commerce recently insisted without evidence that European law will stop lifesaving drugs from coming to market. Others with a stake in the industry have suggested that the GDPR’s limits on data sharing will actually hurt people seeking medical care. The attack on privacy laws for health information motivated me to draft an opinion piece for STAT.

// Read the full essay at STAT First Opinion.

The Future of Privacy: More Data and More Choices

As I wrapped up my time at the Future of Privacy Forum, I prepared the following essay in advance of participating in a plenary discussion on the “future of privacy” at the Privacy & Access 20/20 conference in Vancouver on November 13, 2015 — my final outing in think tankery.

Alan Westin famously described privacy as the ability of individuals “to determine for themselves when, how, and to what extent information about them is communicated to others.” Today, the challenge of controlling, let alone managing, our information has strained this definition of privacy to the breaking point. As one former European consumer protection commissioner put it, personal information is not just “the new oil of the Internet” but is also “the new currency of the digital world.” Information, much of it personal and much of it sensitive, is now everywhere, and anyone’s individual ability to control it is limited.

Early debates over consumer privacy focused on the role of cookies and other identifiers on web browsers. Technologies that feature unique identifiers have since expanded to include wearable devices, home thermostats, smart lighting, and every type of device in the Internet of Things. As a result, digital data trails will be fed by a broad range of sensors and will paint a more detailed portrait of users than previously imagined. If privacy was once about controlling who knew your home address and what you might be doing inside, our understanding of the word requires revision in a world where every device has a digital address and ceaselessly broadcasts information.

The complexity of our digital world makes explaining all of this data collection and sharing a huge challenge. Privacy policies must either be high level and generic or technical and detailed, and either option proves of limited value to the average consumer. Many connected devices have little capacity to communicate anything to consumers or passersby. And without meaningful insight, it makes sense to argue that our activities are now subject to the determinations of a giant digital black box. We see privacy conversations increasingly shift to discussions about fairness, equity, power imbalances, and discrimination.

No one can put the data genie back in a bottle. No one would want to. At a recent convening of privacy advocates, folks discussed the social impact of being surrounded by an endless array of “always on” devices, yet no one was willing to silence their smartphones for even an hour. It has become difficult, if not impossible, to opt out of our digital world, so the challenge moving forward is how to reconcile this reality with Westin’s understanding of privacy.

Yes, consumers may grow more comfortable with our increasingly transparent society over time, but survey after survey suggests that the vast majority of consumers feel powerless when it comes to controlling their personal information. Moreover, they want to do more to protect their privacy. This dynamic must be viewed as an opportunity. Rather than dour information management, we need better ways to express our desire for privacy. It is true that “privacy management” and “user empowerment” have been at the heart of efforts to improve privacy for years. Many companies already offer consumers an array of helpful controls, but one would be hard-pressed to convince the average consumer of this. The proliferation of opt-outs and plug-ins has done little to actually provide consumers with any feeling of control.

The problem is that few of these tools actually help individuals engage with their information in a practical, transparent, or easy way. The notion of privacy as clinging to control of our information against faceless entities leaves consumers feeling powerless and frustrated. Privacy needs some rebranding. Privacy must be “appified” and made more engaging. There is a business to be made in finding a way to marry privacy and control in an experience that is simple and functional. Start-ups are working to answer that challenge, and ephemeral messaging apps, if not perfect implementations, are a sure sign that consumers want privacy, if they can get it easily. For Westin’s view of privacy to have a future, we need to do a better job of embracing creative, outside-the-box ways to get consumers thinking about and engaging with how their data is being used, secured, and ultimately kept private.

Voter Privacy and the Future of Democracy

As the election season gets into full swing, I teamed up with Evan Selinger to discuss some of the privacy challenges facing the campaigns. A recent study by the Online Trust Alliance found major failings in the campaigns’ privacy policies, and beyond the nuts and bolts of having an online privacy notice, the political hunger for data presents very real challenges for voters and, perhaps more provocatively, for democracy. // More at the Christian Science Monitor’s Passcode.

Ethics and Privacy in the Data-Driven World

As part of the U.S. Chamber of Commerce’s “Internet of Everything” project, my boss and I co-authored a short essay on the growing need for companies to have a “data ethics” policy:

Formalizing an ethical review process will give companies an outlet to weigh the benefits of data use against a larger array of risks. It provides a mechanism to formalize data stewardship and move away from a world where companies are largely forced to rely on the “gut instinct” of marketers or the C-Suite. By establishing an ethics policy, one can also capture issues that go beyond privacy issues and data protection, and ensure that the benefits of a future of smart devices outweigh any risks.

// Read more at the U.S. Chamber Foundation.

Social Listening and Monitoring of Students

The line between monitoring consumer sentiment in general and tracking individual customers remains ill-defined. Companies need to understand public perceptions about both different types of online tracking and different sorts of consumer concerns. Monitoring by schools appears to be even more complex. In an opinion piece in Education Week, Jules Polonetsky and I discuss the recent revelation that Pearson—the educational testing and publishing company—was monitoring social media for any discussion by students of a national standardized test it was charged with administering. // Read more on Education Week.

Plunging Into the Black Box Society

Frank Pasquale’s The Black Box Society has been steadily moving up my reading list since it came out, but after Monday’s morning-long workshop on the topic of impenetrable algorithms, the book looks to be this weekend’s reading project. Professor Pasquale has been making the rounds for a while now, but when his presentation was combined with devastating real-world examples of opaque credit scores harming consumers and of regulators ill-equipped to address these challenges, U.S. PIRG Education Fund and the Center for Digital Democracy were largely successful in putting the algorithmic fear of God into me.

A few takeaways: first, my professional interest in privacy only occasionally intersects with credit reporting and the proliferation of credit scores, so it was alarming to learn that 25% of consumers have serious errors in their credit reports, errors large enough to impact their credit ratings. (PIRG famously concluded in 2004 that 79% of credit reports have significant errors.)

That’s appalling, particularly as credit scores are increasingly essential, as economic justice advocate Alexis Goldstein put it, “to avail yourself of basic opportunity.” Pasquale described the situation as a data collection architecture that is “defective by design.” Comparing the situation to malfunctioning toasters, he noted that basic consumer protection laws (and tort liability) would functionally prohibit toasters with a 20% chance of blowing up on toast-delivery, but we’ve become far more cavalier when it comes to data-based products. More problematic are the byzantine procedures for contesting credit scores and resolving errors.

Or even realizing your report has errors. I have taken to using up one of my free, annual credit reports every three months with a different major credit reporting bureau, and while this procedure makes me feel like a responsible credit risk, I’m not sure what good I’m doing. It also strikes me as disheartening that the credit bureaus have turned around and made “free” credit reports into both a business segment and something of a joke — who can forget the FreeCreditReport.com “band”?

Second, the Fair Credit Reporting Act, the first “big data” law, came out of the event looking utterly broken. At one point, advocates were describing how individuals in New York City had lost out on job opportunities due to bad or missing credit reports — and had frequently never received adverse action notices as required by FCRA. Peggy Twohig from the Consumer Financial Protection Bureau then discussed how her agency had expected most consumer reporting agencies to have compliance programs with basic training and monitoring, and instead quickly found that many lacked adequate oversight or the capacity to track consumer complaints.

And this is the law regulators frequently point to as strongly protective of consumers? Maybe there’s some combination of spotty enforcement, lack of understanding, or data run amok that is to blame for the problems discussed, but the FCRA is a forty-five-year-old law. I’m not sure ignorance and unfamiliarity are adequate explanations.

Jessica Rich, the Director of the FTC’s Bureau of Consumer Protection, conceded that there were “significant gaps” in existing law, and moreover, that in some respects consumers have limited ability to control information about them. This wasn’t news to me, but no one seemed to have any realistic notion of how to resolve this problem. There were a few ideas bandied back and forth, including an interesting exchange about competitive self-regulation, but Pasquale’s larger argument seemed to be that many of these proposals were band-aids on a much larger problem.

The opacity of big data, he argued, allows firms to “magically arbitrage…or rather mathematically arbitrage around all the laws.” He lamented “big data boosters” who believe data will be able to predict everything. If that’s the case, he argued, it is no longer possible to sincerely support sectoral data privacy regulation where financial data is somehow separate from health data, from educational data, from consumer data. “If big data works the way they claim it works, that calls for a rethink of regulation.” Or a black box over our heads?

Hate the Consumer Privacy Bill of Rights, but Love the Privacy Review Boards

Considering the criticism on all sides, it’s not a bold prediction to suggest the White House’s Consumer Privacy Bill of Rights is unlikely to go far in the current Congress. Yet while actual legislation may not be in the cards, the ideas raised by the proposed bill will impact the privacy debate. One of the bill’s biggest ideas is the creation of a new governance institution, the Privacy Review Board.

The bill envisions that Privacy Review Boards will provide a safety valve for innovative uses of information that strain existing privacy protections but could provide big benefits. In particular, when notice and choice are impractical and data analysis would be “not reasonable in light of context,” Privacy Review Boards could still permit data uses when “the likely benefits of the analysis outweigh the likely privacy risks.” This approach provides a middle ground between calls for permissionless innovation, on the one hand, and blanket prohibitions on innovative uses of information, on the other.

Instead, Privacy Review Boards embrace the idea that ongoing review processes, whether external or internal, are important and are a better way to address amorphous benefits and privacy risks. Whatever they ultimately look like, these boards can begin the challenging task of specifically confronting the ethical qualms being raised by the benefits of “big data” and the Internet of Things.

This isn’t a novel idea. After all, the creation of formal review panels was one of the primary responses to ethical concerns with biomedical research. Institutional review boards, or IRBs, have now existed as a fundamental part of the human research approval process for decades. IRBs are not without their flaws. They can become overburdened and bureaucratic, and the larger ethical questions can be replaced by a rigid process of checking-off boxes and filling out paperwork. Yet IRBs have become an important mechanism by which society has come to trust researchers.

At their foundation, IRBs reflect an effort to infuse research with several overarching ethical principles identified in the Belmont Report, which serves as a foundational document in ethical research. The report’s principles of respect for persons, beneficence, and justice embody the ideas that researchers (1) should respect individual autonomy, (2) maximize benefits to the research project while minimizing risks to research subjects, and (3) ensure that costs and benefits of research are distributed fairly and equitably.

Formalizing a process of considering these principles, warts and all, went a long way toward alleviating fears that medical researchers lacked rules. Privacy Review Boards could do the same today for consumer data in the digital space. Consumers feel like they lack control over their own information, and they want reassurances that their personal data is only being used in ways that ultimately benefit them. Moreover, calls to develop these sorts of mechanisms in the consumer space are not new. In response to privacy headaches, companies like Facebook and Google have already instituted review panels that are designed to reflect different viewpoints and encourage careful consideration.

Establishing the exact requirements for Privacy Review Boards will demand flexibility. The White House’s proposal offers a litany of different factors to consider. Specifically, Privacy Review Boards will need to have a degree of independence and also possess subject-matter expertise. They will need to take the sizes, experiences, and resources of a given company into account. Perhaps most challenging, Privacy Review Boards will need to balance transparency and confidentiality. Controversially, the proposed bill places the Federal Trade Commission in the role of arbiter of a board’s validity. While it would be interesting to imagine how the FTC could approach such a task, the larger project of having more ethical conversations about innovative data use is worth pursuing, and perhaps the principles put forward in the Belmont Report can provide a good foundation once more.

The principles in the Belmont Report already reflect ideas that exist in debates surrounding privacy. For example, the notion of respect for persons echoes privacy law’s emphasis on fair notice and informed choice. Beneficence stresses the need to maximize benefits and minimize harms, much like existing documentation on the FTC’s test for unfair business practices, and justice raises questions about the equity of data use and considerations about unfair or illegal disparate impacts. If the Consumer Privacy Bill of Rights accomplishes nothing else, it will have reaffirmed the importance of a considered review process. Privacy Review Boards might not have all the answers – but they are in a position to monitor data uses for problems, promote trust, and ultimately, better protect privacy.

Big Data Conversations Need a Big Data Definition

As part of my day job, I recently recapped the Federal Trade Commission’s workshop on “Big Data” and discrimination. My two key takeaways were, first, that regulators and the advocacy community wanted more “transparency” into how industry is using big data, particularly in positive ways, and, second, that there was a pressing need for industry to take affirmative steps to implement governance systems and stronger “institutional review board”-type mechanisms to overcome the transparency hurdle that the opacity of big data presents.

But if I’m being candid, I think we really need to start narrowing our definitions of big data. Big data has become a term that gets attached to a wide array of different technologies and tools that really ought to be addressed separately. We just don’t have a standard definition. The Berkeley School of Information recently asked forty different thought leaders what they thought of big data, and basically got forty different definitions. While there’s a common understanding of big data as more volume, more variety, and greater velocity, I’m not sure how any of these terms provides a foundation for talking about practices or rules, let alone ethics.

At the FTC’s workshop, big data was discussed in the context of machine learning and data mining, the activities of data brokers and scoring profiles, and wearable technologies and the greater Internet of Things. No one ever set ground rules as to what “Big Data” meant as a tool for inclusion or exclusion. At one point, a member of the civil rights community was focused on big data largely as the volume of communications being produced by social media, while another panelist was discussing consumer loyalty cards. Maybe there’s some overlap, but the risks and rewards can be very different.

Playing Cupid: All’s Fair in Love in the Age of Big Data?

After a three-year dry spell, OkCupid’s fascinating OkTrends blog roared to life on Monday with a post by Christian Rudder, cofounder of the dating site. Rudder boldly declared that his matchmaking website “experiment[s] on human beings.” His comments are likely to reignite the controversy surrounding A/B testing on users in the wake of Facebook’s “emotional manipulation” study. This seems to be Rudder’s intention; he writes that “if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”

Rudder’s announcement detailed a number of the fascinating ways that OkCupid “plays” with its users’ information. From removing text and photos from people’s profiles to duping mismatches into thinking they’re excellent matches for one another, OkCupid has tried a lot of different methods to help users find love. Curiously, my gut reaction to this news was that it was much less problematic than the similar sorts of tests being run by Facebook – and basically everyone involved in the Internet ecosystem.

After all, OkCupid is quite literally playing Cupid. Playing God. There’s an expectation that there’s some magic to romance, even if it’s been reduced to numbers. Plus, there’s the hope that these experiments are designed to better connect users with eligible dates, while most website experiments aim to improve user engagement with the service itself. Perhaps all is fair in love, even if it requires users to divulge some of the most sensitive personal information imaginable.

Whatever the ultimate value of OkCupid’s, or Facebook’s, or really any organization’s user experiments, critics are quick to suggest these studies reveal how much control users have ceded over their personal information. But I think the real issue is broader than any concern over “individual control.” Instead, these studies raise the question of how much technology – fueled by our own data – can shape and mediate interpersonal interactions.

OkCupid’s news immediately brought to mind a talk by Patrick Tucker just last week at the Center for Democracy & Technology’s first “Always On” forum. Tucker, editor at The Futurist magazine and author of The Naked Future, provided a firestarter talk that detailed some of the potential of big data to reshape how we live and interact with each other. At a similar TEDx talk last year, he posited that all of this technology and all of this data can be used to give individuals an unprecedented amount of power. He began by discussing prevailing concerns about targeted marketing: “We’re all going to be faced with much more aggressive and effective mobile advertising,” he conceded, “… but what if you answered a push notification on your phone that you have a 60% probability of regretting a purchase you’re about to make – this is the antidote to advertising!”

But he quickly moved beyond this debate. He proposed a hypothetical where individuals could be notified (by push notification, of course) that they were about to alienate their spouse. Data can be used not just to set up dates, but to manage marriages! Improve friendships! For an introvert such as myself, there’s a lot of appeal to these sorts of applications, but I also wonder when all of this information becomes a crutch. As OkCupid explains, when its service tells people they’re a good match, they act as if they are “[e]ven when they should be wrong for each other.”

Occasionally our reliance on technology doesn’t just cross some illusory creepy line; it fundamentally changes our behavior. Last year, at IAPP’s Navigate conference, I met Lauren McCarthy, an artist researcher in residence at NYU, who discussed how she used technology to augment her ability to communicate. For example, she demoed a “happy hat” that would monitor the muscles in the wearer’s face and provide a jolt of physical pain if the wearer stopped smiling. She also explained using technology and crowd-sourcing to make her way through dates. She would secretly videotape her interactions with men in order to provide a livestream for viewers to give her real-time feedback on the situation. “He likes you.” “Lean in.” “Act more aloof,” she’d be told. As part of the experiment, she’d follow whatever directions were being beamed to her.

I asked her later whether she’d ever faced the situation of feeling one thing, e.g., actually liking a guy, and being directed to “go home” by her string-pullers, and she conceded she had. “I wanted to stay true to the experiment,” she said. On the surface, that struck me as ridiculous, but as I think on her presentation now, I wonder if she was forecasting our social future.

Echoing OkCupid’s results, McCarthy also discussed a Magic 8 ball device that a dating pair could figuratively shake to direct their conversation. Smile. Compliment. Laugh, etc. According to McCarthy, people had reported that the device had actually “freed” their conversation, and helped liberate them from the pro forma routines of dating.

Obviously, we are free to ignore the advice of Magic 8 balls, just as we can ignore push notifications on our phones. But if those push notifications work? If the algorithmic special sauce works? If data provides “better dates” and less alienated wives, why wouldn’t we use it? Why wouldn’t we harness it all the time? From one perspective, this is the ultimate form of individual control, where our devices can help us to tailor our behavior to better accommodate the rest of the world. Where then does the data end and the humanity begin? Privacy, as a value system, pushes up against this question, not because it’s about user control but because part of the value of privacy is in the right to fail, to be able to make mistakes, and to have secret spaces where push notifications cannot intrude. What that space looks like when OkCupid is pulling our heartstrings, however, is an open question.
