
The Future of Privacy: More Data and More Choices

As I wrapped up my time at the Future of Privacy Forum, I prepared the following essay in advance of participating in a plenary discussion on the “future of privacy” at the Privacy & Access 20/20 conference in Vancouver on November 13, 2015 — my final outing in think tankery.

Alan Westin famously described privacy as the ability of individuals “to determine for themselves when, how, and to what extent information about them is communicated to others.” Today, the challenge of controlling, let alone managing, our information has strained this definition of privacy to the breaking point. As one former European consumer protection commissioner put it, personal information is not just “the new oil of the Internet” but is also “the new currency of the digital world.” Information, much of it personal and much of it sensitive, is now everywhere, and any individual’s ability to control it is limited.

Early debates over consumer privacy focused on the role of cookies and other identifiers on web browsers. Technologies that feature unique identifiers have since expanded to include wearable devices, home thermostats, smart lighting, and every type of device in the Internet of Things. As a result, digital data trails will flow from a broad range of sensors and will paint a more detailed portrait of users than previously imagined. If privacy was once about controlling who knew your home address and what you might be doing inside, our understanding of the word requires revision in a world where every device has a digital address and ceaselessly broadcasts information.

The complexity of our digital world makes explaining all of this data collection and sharing a huge challenge. Privacy policies must either be high level and generic or technical and detailed; either option proves of limited value to the average consumer. Many connected devices have little capacity to communicate anything to consumers or passersby. And without meaningful insight, it makes sense to argue that our activities are now subject to the determinations of a giant digital black box. We see privacy conversations increasingly shift to discussions about fairness, equity, power imbalances, and discrimination.

No one can put the data genie back in the bottle. No one would want to. At a recent convening of privacy advocates, folks discussed the social impact of being surrounded by an endless array of “always on” devices, yet no one was willing to silence their smartphones for even an hour. It has become difficult, if not impossible, to opt out of our digital world, so the challenge moving forward is how to reconcile reality with Westin’s understanding of privacy.

Yes, consumers may grow more comfortable with our increasingly transparent society over time, but survey after survey suggests that the vast majority of consumers feel powerless when it comes to controlling their personal information. Moreover, they want to do more to protect their privacy. This dynamic should be viewed as an opportunity. Rather than treating privacy as dour information management, we need better ways to express our desire for it. It is true that “privacy management” and “user empowerment” have been at the heart of efforts to improve privacy for years. Many companies already offer consumers an array of helpful controls, but one would be hard-pressed to convince the average consumer of this. The proliferation of opt-outs and plug-ins has done little to actually provide consumers with any feeling of control.

The problem is that few of these tools actually help individuals engage with their information in a practical, transparent, or easy way. The notion of privacy as clinging to control of our information against faceless entities leaves consumers feeling powerless and frustrated. Privacy needs some rebranding. Privacy must be “appified” and made more engaging. There is a business opportunity in marrying privacy and control in an experience that is simple and functional. Start-ups are working to answer that challenge, and the rise of ephemeral messaging apps, while hardly a perfect implementation, is a sure sign that consumers want privacy if they can get it easily. For Westin’s view of privacy to have a future, we need to do a better job of embracing creative, outside-the-box ways to get consumers thinking about and engaging with how their data is being used, secured, and ultimately kept private.

Ethics and Privacy in the Data-Driven World

As part of the U.S. Chamber of Commerce’s “Internet of Everything” project, my boss and I co-authored a short essay on the growing need for companies to have a “data ethics” policy:

Formalizing an ethical review process will give companies an outlet to weigh the benefits of data use against a larger array of risks. It provides a mechanism to formalize data stewardship and move away from a world where companies are largely forced to rely on the “gut instinct” of marketers or the C-Suite. By establishing an ethics policy, a company can also capture issues that go beyond privacy and data protection, and ensure that the benefits of a future of smart devices outweigh any risks.

// Read more at the U.S. Chamber Foundation.

A Few Thoughts on De-Identification and Lightning Strikes

Been spending more and more time at work trying to get a handle on the politics (and definition) of de-identification. De-identification, in short, refers to processes designed to make it more difficult to connect information with one’s identity. While industry and academics will argue over what exactly that means, my takeaway is that de-identification battles have become proxies for a profound lack of trust and transparency on both sides. I tried to flesh out this idea a bit, and in the process, made the mistake of wading into the world of statistics. // Read more on the Future of Privacy Forum Blog.
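For a concrete, if greatly simplified, picture of what “making it more difficult to connect information with identity” can involve, here is a minimal sketch in Python. The record layout, field names, and salt are invented for illustration; real de-identification also has to reckon with re-identification risk across an entire dataset, not just single records.

# Minimal, illustrative sketch of common de-identification steps:
# dropping direct identifiers, salted pseudonymization, and coarsening
# quasi-identifiers. Field names and the salt are hypothetical.
import hashlib

SECRET_SALT = "replace-with-a-secret-value"  # hypothetical; keep out of source control

def deidentify(record: dict) -> dict:
    out = dict(record)

    # 1. Remove direct identifiers entirely.
    for field in ("name", "email", "phone"):
        out.pop(field, None)

    # 2. Replace the stable ID with a salted one-way pseudonym.
    if "user_id" in out:
        out["user_id"] = hashlib.sha256(
            (SECRET_SALT + str(out["user_id"])).encode()
        ).hexdigest()[:16]

    # 3. Coarsen quasi-identifiers that could single someone out in combination.
    if "age" in out:
        out["age"] = f"{(out['age'] // 10) * 10}s"   # 37 -> "30s"
    if "zip" in out:
        out["zip"] = str(out["zip"])[:3] + "**"      # "90210" -> "902**"

    return out

if __name__ == "__main__":
    print(deidentify({
        "user_id": 12345, "name": "Jane Doe", "email": "jane@example.com",
        "age": 37, "zip": "90210", "diagnosis": "flu",
    }))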

Developing Consensus on the Ethics of Data Use

Information is power, as the saying goes, and big data promises the power to make better decisions across industry, government, and everyday life. Data analytics offers an assortment of new tools to harness data in exciting ways, but society has been slow to engage in a meaningful analysis of the social value of all this data. The result has been something of a policy paralysis when it comes to building consensus around certain uses of information.

Advocates noted this dilemma several years ago during the early stages of the effort to develop a Do Not Track (DNT) protocol at the World Wide Web Consortium. DNT was first proposed seven years ago as a technical mechanism to give users control over whether they were being tracked online, but the protocol remains a work in progress. The real issue lurking behind the DNT fracas was not any sort of technical challenge, however, but rather the fact that the ultimate value of online behavioral advertising remains an open question. Industry touts the economic and practical benefits of an ad-supported Internet, while privacy advocates maintain that targeted advertising is somehow unfair. Without any efforts to bridge that gap, consensus has been difficult to reach.
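Mechanically, DNT was never complicated: the browser sends a one-character HTTP header, and everything turns on whether the server chooses to honor it. A minimal sketch using Python’s standard library (the handler name and response strings are my own illustration) shows how little of the dispute was ever technical.

# Minimal sketch of the Do Not Track mechanism: the browser sends a
# "DNT: 1" header, and the burden of honoring it falls entirely on the
# server. Handler name and response strings are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DNTAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        dnt = self.headers.get("DNT")  # "1" means the user asks not to be tracked
        if dnt == "1":
            body = b"DNT received: no tracking identifiers will be set."
        else:
            body = b"No DNT preference expressed."
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DNTAwareHandler).serve_forever()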

As we are now witnessing in conversations ranging from student data to consumer financial protection, the DNT debate was but a microcosm of larger questions surrounding the ethics of data use. Many of these challenges are not new, but the advent of big data has made the need for consensus ever more pressing.

For example, differential pricing schemes – or price discrimination – have increasingly become a hot-button issue. But charging one consumer a different price than another for the same good is not a new concept; in fact, it happens every day. The Wall Street Journal recently explored how airlines are the “world’s best price discriminators,” noting that what an airline passenger pays is tied to the type of people they’re flying with. As a result, it currently costs more for U.S. travelers to fly to Europe than vice versa because the U.S. has a stronger economy and quite literally can afford higher prices. Businesses are in business, after all, to make money, and at some level, differential pricing makes economic sense.

However, there remains a basic concern about the unfairness of these practices. This has been amplified by perceived changes in the nature of how price discrimination works. The recent White House “Big Data Report” recognized that while there are perfectly legitimate reasons to offer different prices for the same products, the capacity for big data “to segment the population and to stratify consumer experiences so seamlessly as to be almost undetectable demands greater review.” Customers have long been sorted into different categories and groupings. Think urban or rural, young or old. But big data has made it markedly easier to identify the characteristics that can be used to charge every individual customer based on his or her exact willingness to pay.
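To make the mechanism concrete, here is a toy sketch of segment-based pricing. The attributes, weights, and base price are entirely invented; the point is only that a handful of observed characteristics can be translated into a personalized price that approximates an individual’s willingness to pay.

# Toy illustration of differential pricing. Every attribute, weight, and
# threshold below is invented; no actual retailer's model is implied.
BASE_PRICE = 100.0

def personalized_price(customer: dict) -> float:
    price = BASE_PRICE
    if customer.get("device") == "high-end":            # crude proxy for income
        price *= 1.15
    if customer.get("urban", False):                     # proxy for nearby alternatives
        price *= 0.95
    if customer.get("searches_before_buying", 0) < 2:    # proxy for price sensitivity
        price *= 1.10
    return round(price, 2)

if __name__ == "__main__":
    print(personalized_price({"device": "high-end", "urban": False,
                              "searches_before_buying": 1}))  # 126.5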

The Federal Trade Commission has taken notice of this shift and begun a much-needed conversation about the ultimate value of these practices. At a recent discussion on consumer scoring, Rachel Thomas from the Direct Marketing Association suggested that companies have always tried to predict customer wants and desires. What’s truly new about data analytics, she argued, is that it offers the tools to actually get predictions right and to provide “an offer that is of interest to you, as opposed to the person next to you.” While some would argue this is a good example of market efficiency, others worry that data analytics can be used to exploit or manipulate certain classes of consumers. Without a good deal more public education and transparency on the part of decision-makers, we face a future where algorithms will drive not just predictions but decisions that will exacerbate socio-economic disparities.

The challenge moving forward is two-fold. First, many of the more abstract harms allegedly produced by big data are fuzzy at best – filter bubbles, price discrimination, and amorphous threats to democracy are hardly traditional privacy harms. Second, few entities are engaging in the sort of rigorous analysis necessary to determine whether or not a given data use will bring these harms to pass.

According to the White House, technological developments necessitate a shift in privacy thinking and practice toward responsible uses of data rather than its mere collection and analysis. While privacy advocates have expressed skepticism of use-based approaches to privacy, increased transparency and accountability mechanisms have been proposed as ways to further augment privacy protections. Developing broad-based consensus around data use may prove more important still.

Consensus does not mean unanimity, but it does require a conversation that considers the interests of all stakeholders. One proposal that could help drive consensus is the development of internal review boards or other multi-stakeholder oversight mechanisms. Looking to the long-standing work of institutional review boards, or IRBs, in the field of human subject testing, Ryan Calo suggested that a similar structure could be used as a tool to infuse ethical considerations into consumer data analytics. IRBs, of course, engage in a holistic analysis of the risks and benefits that could result from any human testing project. They are also made up of different stakeholders, encompassing a wide variety of backgrounds and professional expertise. These boards also come to a decision before a project can be pursued.

Increasingly, technology is leaving policy behind. While that can both promote innovation and ultimately benefit society, it makes the need for consensus about the ethics at stake all the more important.

White House/MIT Big Data Privacy Workshop Recap

Speaking for everyone snowed in in DC, White House Counselor John Podesta remarked that “big snow trumped big data” while on the phone to open the first of the Obama Administration’s three big data and privacy workshops.  This first workshop, which I was eager to attend (if only to continue my streak of annual appearances in Beantown), focused on advancing the “state of the art” in technology and practice.  For a mere lawyer such as myself, I anticipated a lot of highly technical jargon, and in that regard I was not disappointed. // Full recap on the Future of Privacy Forum Blog.

Common Sense Media Student Privacy Summit All About Self-Regulation

The biggest takeaway from Common Sense Media’s School Privacy Zone Summit was, in the words of U.S. Secretary of Education Arne Duncan, that “privacy needs to be a higher priority” in our schools.  According to Duncan, “privacy rules may be the seatbelts of this generation,” but getting these rules right in sensitive school environments will prove challenging.  As the Family Educational Rights and Privacy Act (FERPA), one of the nation’s oldest privacy laws, turns forty this year, what seems apparent is that schools lack both the resources and training necessary to even understand today’s digital privacy challenges surrounding student data.

Dr. Terry Grier, Superintendent of the Houston Independent School District, explained that his district of 225,000 students is getting training from a 5,000-student district in North Carolina.  The myriad of school districts, varying sharply in wealth and size, has made it all but impossible for educators to define rules and expectations when it comes to how student data can be collected and used.

Moreover, while privacy advocates charge that schools have effectively relinquished control over their students’ information, several panelists noted that we haven’t yet decided who the ultimate custodian of student data even is.  One initial impulse might be to analogize education records to HIPAA health records, which belong to a patient, but Cameron Evans, CTO of education at Microsoft, suggested that it might be counterproductive to think of personalized education data as strictly comparable to individual health records.  On top of this dilemma, questions about how to communicate and inform parents have proven difficult to answer as educational technology shifts rapidly, resulting in a landscape that one state educational technology director described as the “wild wild west.”

There was wide recognition by both industry participants at the summit and policymakers that educational technology vendors need to establish best practices – and soon.  Secretary Duncan noted there was a lot of energy to address these issues, and that it was “in the best interest of commercial players to be self-policing.”  The implication was clear: begin establishing guidelines and helping schools now or face government regulation soon.

Average Folks and Retailer Tracking

Yesterday evening, I found myself at the Mansion on O Street, whose eccentric interior, filled with hidden doors, secret passages, and bizarrely themed rooms, seemed as good a place as any to hold a privacy-related reception. The event marked the beta launch of my organization’s mobile location tracking opt-out.  Mobile location tracking, which is being implemented across the country by major retailers, fast food companies, malls, and the odd airport, first came to the public’s attention last year when Nordstrom informed its customers that it was tracking their phones in order to learn more about their shopping habits.

Today, the Federal Trade Commission hosted a morning workshop to discuss the issue, featuring representatives from analytics companies, consumer education firms, and privacy advocates. The workshop presented some of the same predictable arguments about lack of consumer awareness and ever-present worries about stifling innovation, but I think a contemporaneous conversation I had with a friend better highlights some of the privacy challenges mobile analytics presents.  Names removed to protect privacy, of course!

Ephemeral Communication and the Frankly App Podcast

My former coworker was utterly enamored with Snapchat, on the grounds that she liked being able to express herself in ways that were not permanent.  In terms of our interpersonal relationships, it used to be that only diamonds were forever — now most of our text messages are, too.

Should a simple text last forever?  Last week, I reached out to Frankly, a new text-messaging app that provides for self-destructing texts, to talk about the development of the app and the future of ephemeral communication.

Click on the media player above to listen, or download the complete podcast MP3 here.

Because Everyone Needs Facebook

Facebook has rolled out several proposed updates to its privacy policy that ultimately give Facebook even more control over its users’ information.  Coming on the heels of a $20 million settlement by Facebook for using users’ information in advertisements and “sponsored stories,” Facebook has responded by requiring users to give it permission to do just that:

You give us permission to use your name, profile picture, content, and information in connection with commercial, sponsored, or related content (such as a brand you like) served or enhanced by us.

A prior clause that suggested any permission was “subject to the limits you place” has been removed.

This is why people don’t trust Facebook. The comments sections to these proposed changes are full of thousands of people demanding that Facebook leave their personal information alone, without any awareness that that ship has sailed.  I don’t begrudge Facebook’s efforts to find unique and data-centric methods to make money, but as someone who is already reluctant to share too much about myself on Facebook, I can’t be certain that these policy changes aren’t going to lead to Facebook having me “recommend” things to my friends that I have no association with.

But no one is going to “quit” Facebook over these changes.  No one ever quits Facebook.  As a communications and connectivity platform, it is simply invaluable to users.  These changes will likely only augment Facebook’s ability to deliver users content, but as someone who’s been with Facebook since early on, I can say it sure has transformed from a safe lil’ club into a walled Wild West where everyone’s got their eye on everyone.

 

Enter the Nexus?

In 2032, a group of genetically engineered neo-Nazis create a super virus that threatens to wipe away the rest of humanity. Coming on the heels of a series of outbreaks involving psychotropic drugs that effectively enslave their users, this leads to the Chandler Act, which places sharp restrictions on “research into genetics, cloning, nanotechnology, artificial intelligence, and any approach to creating ‘superhuman’ beings.” The Emerging Risks Directorate is launched within the Department of Homeland Security, and America’s war on science begins.

This is the world in which technologist Ramez Naam sets his first novel, the techno-thriller Nexus. Nexus is a powerful drug, oily and bitter, that allows human minds to be linked together into a mass consciousness. A hodgepodge of American graduate students develops a way to layer software into Nexus, allowing enterprising coders to upload programs into the human brain. It’s shades of The Matrix, but it’s hardly an impossible idea.

Read More…

MOOCs and My Future Employment Prospects?

Massive open online courses are a new, rapidly evolving platform for delivering educational instruction. Since MOOCs first appeared just a half-decade ago, multiple platforms have emerged offering dozens of free courses from leading universities across the country. However, as MOOCs work to transform education, they also must find ways to turn innovative educational experiences into viable business models. In many respects, this is the same challenge facing many Internet services today. Yet while many “free” Internet services rely upon their users giving up control of their personal data in exchange, this bargain becomes strained when we enter the field of education.

Education aims to encourage free thought and expression.  At a basic level, a successful learning experience requires individuals to push their mental abilities, often expressing their innermost thoughts and reasoning. A sphere of educational privacy is thus necessary to ensure students feel free to try out new ideas, to take risks, and to fail without fear of embarrassment or social derision. As data platforms, MOOCs by their very nature collect vast stores of educational data, and as these entities search for ways to turn a profit, they will be tempted to take advantage of the huge quantities of information that they are currently sitting upon.

It will be essential to consider the privacy harms that could result if this personal educational data is treated carelessly.

There is already some evidence that MOOC organizers recognize this challenge.  In January, a dozen educators worked to draft a “Bill of Rights” for learning in the digital age.  The group, which included Sebastian Thrun, founder of the MOOC Udacity, declared that educational privacy was “an inalienable right.” The framework called for MOOCs to explain how student data was being collected, used by the MOOC, and, more importantly, made available to others.  “[MOOCs] should offer clear explanations of the privacy implications of students’ choices,” the document declared.

In addition to Udacity, the leading MOOCs–Coursera and edX–can improve how they approach student privacy.  Most MOOCs have incredibly easy sign-up processes, but they are much less clear about what data they are collecting and using.  At the moment, the major MOOCs rely on the usual long, cumbersome privacy policies to get this information across to users.

These policies are both broad and unclear.  For example, Coursera states in its Privacy Policy that it “will not disclose any Personally Identifiable Information we gather from you.”  However, it follows this very clear statement by giving itself broad permission to use student data: “In addition to the other uses set forth in this Privacy Policy, we may disclose and otherwise use Personally Identifiable Information as described below. We may also use it for research and business purposes.”  More can be done to offer clear privacy guidelines.

Beyond providing clearer privacy guidelines, however, MOOCs also should consider how their use of user-generated content can impair privacy.  A potential privacy challenge exists where a MOOC’s terms of service grant it such a broad license to re-use students’ content that they effectively have the right to do whatever they wish. EdX, a project started by educational heavyweights Harvard and MIT, states in its Terms of Service that students grant edX “a worldwide, non-exclusive, transferable, assignable, sublicensable, fully paid-up, royalty-free, perpetual, irrevocable right and license to host, transfer, display, perform, reproduce, modify, distribute, re-distribute, relicense and otherwise use, make available and exploit your User Postings, in whole or in part, in any form and in any media formats and through any media channels (now known or hereafter developed).” Coursera and Udacity have similar policies.

Under such broad licenses, students “own” their exam records, forum posts, and classroom submissions in name only. The implications of a MOOC “otherwise using” my poor grasp of a history-of-the-Internet course I sampled for fun are unclear. This information could be harnessed to help me learn better, but as MOOCs become talent pools for corporate human resource departments, it could bode ill for my future employment prospects.

At the moment, these are unresolved issues.  Still, as MOOCs move to spearhead a revolution in how students are taught, providing students of all ages with a safe space to try out new ideas and learn beyond their comfort zone will require both educators and technology providers to think carefully about educational privacy.

What Could Facial Recognition Privacy Protections Look Like?

Concerns about facial recognition technology have so far appeared in the context of “tagging” images on Facebook or of how the technology transforms marketing, but these interactions are largely between users and service providers. Facial recognition on the scale offered by wearable technology such as Google Glass changes how we navigate the outside world.  Traditional notice and consent mechanisms can protect Glass users themselves, but they do little to address how the user turns the device on everyone else.  // More on the Future of Privacy Forum Blog.

Keeping Secrets from Society

While the first round of oral arguments surrounding gay marriage was the big event before the Supreme Court today, the Court also issued a 5-4 opinion in Florida v. Jardines, which advances the dialog both on the state of the Fourth Amendment and on privacy issues generally.  In Jardines, the issue was whether police use of a drug-sniffing dog to sniff for contraband on the defendant’s front porch was a “search” within the meaning of the Fourth Amendment.  By a slim majority, the Court held that it was.

This is what our protection against “unreasonable searches” has become: a single vote away from letting police walk up to our front doors with dogs to see if they alert to anything suspicious.  What I think is even more alarming about the decision is how little privacy was discussed, let alone acknowledged. Only three justices–curiously, all three women–recognized that the police’s behavior clearly invaded the defendant’s privacy.  The ultimate holding was that bringing a dog onto one’s property was a trespass, and the Fourth Amendment specifically protects against that.  But while defaulting to a property-protective conception of the Fourth Amendment has the virtue of “keep[ing] easy cases easy,” as Justice Scalia put it, it ignores the nuanced reality that the Fourth Amendment was designed as a tool to obstruct surveillance and to weaken government.

The dissent, meanwhile, was ready to weaken the Fourth Amendment even more.  While this case was in many ways directly analogous to a prior decision, Kyllo v. United States, where the Court restricted the use of thermal imaging to inspect a house, the dissenters made the alarming assertion that “Kyllo is best understood as a decision about the use of new technology.”  What makes that rationale scary is that Kyllo included the unfortunate invocation that whether or not government surveillance constitutes a search is contingent upon whether or not the technology used is “a device that is not in general public use.”  This creates not only the possibility but also the incentive to use technological advances to diminish the Fourth Amendment’s protection.  It creates a one-way ratchet against privacy.

I am not the first person to suggest that the Supreme Court’s Fourth Amendment jurisprudence is utterly incoherent.  I particularly enjoy the description that our Fourth Amendment is “in a state of theoretical chaos.” Last year, facing a case where the government attached a GPS unit to a car, tracked a suspect for a month, and never got a warrant, the Court unanimously concluded this violated the Fourth Amendment.  That was great.  More problematic, the case produced three very different opinions that did not even divide cleanly along ideological lines.  What it boils down to is this: we have a serious privacy problem in this country.

And while it’s easy to point a finger at a power-hungry government, the blame rests with us all.  We have been quick–eager even–to give up our privacy, particularly as we have embraced a binary conception of privacy.  We either possess it, or our secrets are open to the world.  We have been conditioned to think our privacy ends when we walk out the front door, and now we live in a world where nothing stops anyone from looking down on everything we do from an airplane, a bit lower from a helicopter, and, yes, soon even lower from a drone.  We have no expectation of privacy in our trash anymore.

Just look at Facebook!  Facebook isn’t even a product–its users are the product.  Vast treasure troves of personal data flow into the business’s coffers, and it wants more.  As The New York Times reported today, Facebook’s data-collection efforts extend far beyond its own website.  Facebook doesn’t even stop when you leave the Internet.  But worry not, says Facebook: “there’s no information on users that’s being shared that they haven’t shared already.”

That’s certainly true, but today it’s being aggregated. The data freely available about each and every one of us could, as The Times put it, “leave George Orwell in the dust.” Private companies collect “real-time social data from over 400 million sources,” and Twitter’s entire business model depends upon selling access to its 400 million daily tweets. Our cars can track us, and just today I saw the future of education, which basically involves knowing everything possible about a student.

I’m hesitant to quote Ayn Rand, but since an acquaintance shared this sentiment with me, it has dwelt in my mind:

Civilization is the progress toward a society of privacy. The savage’s whole existence is public, ruled by the laws of his tribe. Civilization is the process of setting man free from men.

Perhaps our collective future is, as Mark Zuckerberg posits, destined to be an open book.  Perhaps Google CEO Eric Schmidt is right when he cautions that “[i]f you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”  I am certainly not immune to oversharing on the Internet, and for whatever my privacy is worth, I don’t really have anything to hide.  But that’s not the point. Before anyone embraces a world where the only privacy that exists is in our heads, I would suggest reading technologist Bruce Schneier’s rebuttal:

For if we are observed in all matters, we are constantly under threat of correction, judgment, criticism, even plagiarism of our own uniqueness. We become children, fettered under watchful eyes, constantly fearful that — either now or in the uncertain future — patterns we leave behind will be brought back to implicate us, by whatever authority has now become focused upon our once-private and innocent acts. We lose our individuality, because everything we do is observable and recordable.

Of course, as my boss describes it, Adam and Eve’s flight from the Garden of Eden had less to do with shame and more to do with attempting to escape the ever-present eye of God. Some would suggest we might have been better off in that idyllic paradise, but I much prefer to keep a secret or two.

Is Big Brother Getting Into Our Cars?

In the public relations battle between The New York Times and Tesla over the paper’s poor review of Tesla’s Model S electric car, the real story may be the serious privacy issues the whole imbroglio demonstrated.  After The Times’ John Broder wrote a less-than-flattering portrayal of his time with the Model S, Tesla Motors CEO Elon Musk challenged the review using information provided from the vehicle’s data recorders.  In the process, Mr. Musk revealed that “our cars can know a lot about us,” writes Forbes’ Kashmir Hill. For example, Mr. Musk was able to access detailed information about the car’s exact speed and location throughout Broder’s trip, his driving habits, and even whether cruise control had been set as claimed.

“My biggest takeaway was ‘the frickin’ car company knows when I’m running the heater?’ That’s a bigger story than the bad review,” gasped one PR specialist. Indeed, our cars are rapidly becoming another rich source of personal information about us, and this presents a new consideration for drivers who may be unaware of how “smart” their cars are becoming. Connected cars present a bountiful set of bullet points for marketers, but whether consumers are being provided with the necessary information needed to understand the capabilities of these vehicles remains an open question.

And it is not just car companies that will possess this wealth of information. Progressive Insurance currently offers Snapshot, a tracking device that reports on drivers’ braking habits, how far they drive, and whether they are driving at night. Progressive insists the Snapshot program is neither designed to track how fast a car is driven nor where it is being driven, and the Snapshot device contains no GPS technology, but the technological writing is on the wall. A host of marketers, telcos, insurers, and content providers will soon have access to this data.

In the very near future, parents will easily be able to track their teenagers’ driving in connected cars. Even assuming cars still permit their drivers to violate traffic rules, it may become impossible to actually get away with risky driving habits. Telcos increasingly see cars as a lucrative growth opportunity. “[Cars are] basically smartphones on wheels,” AT&T’s Glenn Lurie explains, and indeed, many automakers see smartphones as an integral part of creating connected cars.

While we continue to grapple with the privacy challenges and data opportunities presented by smartphones, we have only just begun to address the similar sorts of concerns posed by connected cars.  In fact, privacy concerns have largely taken a backseat to practical hurdles like keeping drivers’ eyes on the road and more pressing legal concerns such as liability or data ownership. Indeed, at the last DC Mobile Monday event, the general consensus among technologists and industry was that consumers would willingly trade privacy if they could have a “safer,” more controlled driving experience. Content providers were even quicker (perhaps too quick) to suggest that privacy concerns were merely a generational problem, and that younger drivers simply do “not think deeply about privacy.”

That may be true, but while industry may wish to treat our vehicles as analogous to our phones, it also remains true that the average consumer sees her car as an extension of her home.  While the law may not recognize this conception, industry would be wise to tread carefully. OnStar’s attempt to change its privacy policy in 2011 proves illustrative. OnStar gave itself permission to continue to track subscribers after they had cancelled the service, and to sell anonymized customer data to anyone at any time for any purpose. The customer backlash was brutal: “My vehicle’s location is my life, it’s where I go on a daily basis. It’s private. It’s mine,” went one common sentiment.

A recent article in The L.A. Times wondered whether car black boxes were the beginning of a “privacy nightmare” or just a simple safety measure.  The answer likely falls somewhere in between, and if the Tesla episode reveals anything, it is that striking the proper balance may be more difficult than either privacy advocates or industry expect. While Mr. Musk had a wealth of data at his disposal and Mr. Broder had only a book of observations to counter that data, neither party has been able to provide a clear account of Mr. Broder’s behavior behind the wheel.  For example, what Mr. Musk termed “driving in circles for over half a mile,” Mr. Broder claimed was looking for a charging station that was poorly marked.  Technologist Bruce Schneier cautions that the inability of intense electronic surveillance to provide “an unambiguous record of what happened . . . will increasingly be a problem as we are judged by our data.”

Most everyday scenarios presented by connected cars will not produce a weeks-long dispute between a CEO and a major newspaper. Instead, Schneier notes, neither side will be able to spend the sort of time and effort needed to figure out what really happened. Certainly, consumers may find themselves at an informational disadvantage. In the long term, drivers may be willing to trade their privacy for the benefits of an always-connected car, but these benefits need to be clearly communicated. That is a discussion that has yet to be had in full.
