Archive for December, 2009

Highlights From the FTC’s Privacy Roundtable Part 3

Note: This article originally appeared on the J.C. Neu & Associates Blog

This is part 3 of highlights from the FTC’s December 7th Privacy Roundtable. Part 1 covered the panel on "Exploring Existing Regulatory Frameworks," and Part 2 covered the panel on "Benefits and Risks of Collecting, Using, and Retaining Consumer Data." This post highlights comments from "Consumer Expectations and Disclosures" and "Information Brokers."

Disclaimer: I took notes using my Twitter account. About halfway through the "Benefits and Risks" panel, Twitter decided that I was a spammer, and shut down my account. I was mad, and it meant that I did not cover the whole session.

Consumer Expectations and Disclosures

  • Lorrie Faith Cranor, Associate Professor of Computer Science at Carnegie Mellon University, commented on consumers’ ignorance of how their information flows, much like an unseen underground river: "Most people do not understand how information flows," or "what a third-party cookie is."
  • Alan Westin, Professor Emeritus of Public Law and Government, Columbia University, referenced several of his studies indicating that "…people are not prepared to equate [the need for] behavioral marketing with [funding] free services," and that "most people believe that they’re being abused." There was also general consensus that most people surveyed believed they were protected by laws and regulations that do not actually exist. Mr. Westin’s research further indicates that most people are no longer willing to trade privacy for freebies on the internet, because of the disconnect between "free" services and the fact that personal information pays for most of them.
  • Alan Davidson, Director of U.S. Public Policy and Government Affairs for Google, emphasized that the industry is trying to educate consumers and give them the tools they need to control their privacy, as evidenced by Google’s dashboard, for instance. He suggested that the audience Bing "Google Dashboard" for more information.
  • Jules Polonetsky, Co-Chair and Director of the Future of Privacy Forum, referenced the results of several large surveys conducted by his organization. One, for instance, indicated a substantial public misconception about what "Behavioral Advertising" is: among the handful of survey respondents who had heard the term, all of them mistook "Behavioral Advertising" for subliminal advertising. His organization is also attempting to generate symbols explaining how personal information is used, an approach endorsed by Privacy Commons and other groups.
  • My apologies to Joel Kelsey, Policy Analyst for the Consumers Union, and Adam Thierer of the University of Pennsylvania’s Annenberg School for Communication. Both participated actively, but I was unable to capture their thoughts because I was under a temporary Twitter ban at the time.

Information Brokers

Short editorial: This session was by far the least enlightening.

  • Jennifer Barrett, Global Privacy and Public Policy Officer for Acxiom, started off the panel by discussing what constitutes "sensitive personal information." She explained that Acxiom classifies "sensitive information" as any information that could contribute to identity theft, whereas "restricted information" covers things like an unlisted phone number.
  • Rick Erwin, President of Experian Marketing Services, explained that Experian considers information on children, older Americans, and self-reported ailment data to be "sensitive," adding that Experian has "three decades of experience using sensitive information for marketing," and is able to adequately balance the interests of marketers and consumers. Mr. Erwin also discounted the harms of marketing, saying "we can’t point to deep consumer harm based on bad advertising."
  • Pam Dixon, Executive Director of the World Privacy Forum, disagreed. She contended that defining "sensitive information" is difficult at best, because otherwise benign information can be aggregated to create sensitive information. With regard to health information, she argued, consent from consumers is almost illusory because consumers have no way of knowing how the information will be used in the future; informed consent is impossible without telling consumers what "boxes" they will be put in. Consumers need the right to know on what lists they will appear and for how long, and they must have the right to revoke their consent. She contended that "we need to make Opt Out work for consumers," and that opting out should always be free.
  • In response, Jennifer Barrett insisted that the Information Broker industry needs no further regulation: "We’re already very regulated," she said.
  • Jim Adler, Chief Privacy Officer and General Manager of Systems for Intelius, explained that they offer special opt-out services to government officials.
  • Chris Jay Hoofnagle, Lecturer in Residence at the University of California, Berkeley School of Law, was scheduled to participate but was unable to do so due to technical difficulties.

The FTC has posted the webcast if you missed it.  The next Roundtable is scheduled for January 28, 2010 in Berkeley, CA and will also be broadcast online.


Highlights From the FTC’s Privacy Roundtable: Part 2

Note: This article originally appeared on the J.C. Neu & Associates Blog

This is part 2 of highlights from the FTC’s December 7th Privacy Roundtable. Part 1 covered the panel on "Exploring Existing Regulatory Frameworks." This post highlights comments from "Benefits and Risks of Collecting, Using, and Retaining Consumer Data." This session was moderated by Jeffrey Rosen of The George Washington University Law School and Chris Olsen, of the FTC’s Division of Privacy and Identity Protection.

  • Leslie Harris, President and CEO of the Center for Democracy & Technology, emphasized that information taken out of context can be used to unfairly judge a person, such as a search for "marijuana" or a medical condition. The danger increases when a rich profile of search terms and surfing data is constructed over time.
  • Susan Grant, Director of Consumer Protection for the Consumer Federation of America, said that "privacy is a fundamental human right."
  • Alessandro Acquisti of Carnegie Mellon University, Heinz College, explained that the definition of "sensitive information" continues to change with technology, new uses for information, and new ways to correlate and aggregate personal information. Technology cannot stop re-identification or de-anonymization, but should be used to increase the transaction costs for re-identifying personal information. He also spoke about how companies are bypassing consumer efforts to maintain privacy and anonymity through technologies such as flash cookies.
  • Richard Purcell, CEO of the Corporate Privacy Group, emphasized that citizens’ health depends on anonymized health data used for research, and that privacy must be weighed using a cost-benefit analysis.
  • Michael Hintze, Associate General Counsel for Microsoft, explained that companies log and use information for a number of legitimate reasons, such as security analysis and search result optimization. However, he admitted that search terms can reveal individuals’ "innermost thoughts," and that anonymization is not a silver bullet for protecting users. Instead, retention and deletion policies, such as Microsoft’s policy of deleting IP addresses and cross-cookie session information, are designed to truly anonymize search data.
  • David Hoffman, Director of Security Policy and Global Privacy Officer for Intel, stressed that we should focus on data minimization. "We have wasted time arguing about what constitutes PII," when the question should be, "what information will have an impact on an individual?"
  • Jim Harper, Director of Information Policy Studies for The Cato Institute, argued that regulating too early can stifle innovation and prevent consumers from determining what they want themselves. Instead, we should attempt to define the problem set first. Mr. Harper explained that "there is a role for trial and error in determining what the real problems are," and that intellectualizing what consumers really want can lead to problems. Instead, we should "let a thousand flowers bloom" and let the social systems and advocacy draw out and solve the real issues.
  • David Hoffman generally agreed that we don’t want to frustrate innovation, and that we are not currently in a position to understand all of the problems ourselves. He explained that it took a room full of experts the better part of a day to map out data flows. "We can’t expect consumers to understand how the data flows if experts can’t understand it now."
  • Leslie Harris and Alessandro Acquisti said that notification and transparency are necessary but not sufficient. Mr. Acquisti noted that consumers make decisions that harm their long-term privacy because humans are bad at making decisions when the benefits are short-term but the harm is long-term. He compared privacy-erosive behavior to smoking: each smoker knows that smoking causes cancer, but any individual cigarette doesn’t hurt much.
  • Susan Grant explained that consumers don’t realize that their information can be used for other purposes, and that the benefits of marketing do not outweigh privacy concerns, fraud, and abuse. Jim Harper countered that advertisers can introduce a new medication to vulnerable populations, and that denying them that opportunity can create silent harms. Michael Hintze added that niche ads aren’t good or bad; they’re responsible or irresponsible.
  • Richard Purcell also argued that companies should spend the time and money to train their customers, and create "privacy by design" rather than "privacy by default." Finally, the FTC should "regulate the hell out of" lazy companies and bad actors.
  • Richard Purcell further emphasized that we lack a cohesive taxonomy for discussing privacy, and that we need to better define concepts such as "anonymity," "deidentification," and "sensitive data."
  • The panel was asked to consider widespread customer blacklisting. Susan Grant said that consumers need tools to discover and amend secret "bad customer" lists, since they have none now; distinctions based upon invisible information are bad for consumers. Leslie Harris agreed, saying that we need a law that provides access and correction rights for data brokers as well. She also criticized the FTC for failing to investigate privacy violations, noting that all of our bad examples are "accidental" incidents rather than intentional, long-term decisions to violate privacy that the FTC has outed.
  • In the larger context, Jim Harper said that Government access to personal information is the elephant in the room that nobody has yet addressed. Governments are beginning to discover "the cloud" for their own purposes, and when data is available to government on the current terms, it constitutes surveillance on a massive scale.

I’ll do a few more installments in the coming days.

The FTC has posted the webcast if you missed it.  The next Roundtable is scheduled for January 28, 2010 in Berkeley, CA and will also be broadcast online.


Highlights From the FTC’s Privacy Roundtable: Part 1

Note: This article originally appeared on the J.C. Neu & Associates Blog

The FTC’s December 7th Privacy Roundtable assembled a Who’s Who of privacy luminaries, academics, advocates, and industry players. This post highlights some of the more interesting comments from the meeting. I also tweeted the event (@aarontitus, #FTC #Privacy or #ftcpriv) and the FTC has posted the webcast if you missed it.  The next Roundtable is scheduled for January 28, 2010 in Berkeley, CA and will also be broadcast online.

The meeting consisted of five panels. This post highlights "Panel 5: Exploring Existing Regulatory Frameworks."

  • During Session 5, Intuit’s Chief Privacy Officer Barbara Lawler posited that existing regulatory frameworks unfairly place the entire burden on consumers to protect themselves. "Consumers should expect a safe marketplace. They shouldn’t be the ones to police the marketplace," she said.
  • Barbara Lawler also noted that "Data is never really at rest," because it’s moving between data centers and backups in multiple locations throughout the globe. It is therefore incorrect to think of data, especially Cloud data, as being in one place. Instead, "data is in one place and many places at the same time," potentially in multiple jurisdictions.
  • Evan Hendricks of Privacy Times and Marc Rotenberg of EPIC suggested that the current model of "Notice and Consent" has failed to protect consumers, and that the FTC (and legislation in general) should return to well-established Fair Information Practices (FIPs), including a prohibition on "secret databases." Mr. Rotenberg went so far as to conclude that Notice and Choice principles are not a subset of FIPs, but instead "stand in opposition to fair information practices." He also joked that "the best part of Gramm-Leach-Bliley Act is that you get paper notices you can tape on your window and get more privacy."
  • Ira Rubinstein of New York University School of Law proposed that self-regulation is not binary or "monolithic," and that a self-regulatory scheme would be preferable, especially if viewed as a "continuum, based on government intervention." He argued that self-regulation would be especially appropriate in the United States, which has traditionally been very friendly to e-commerce.
  • Michael Donohue of the OECD gave an overview of international legal concepts of privacy, generally agreeing with Marc Rotenberg’s observation that "most countries have come to surprisingly similar conclusions about privacy."
  • J. Howard Beales of the GWU School of Business argued in favor of a "harm-based model," because it is impossible to reach the best solution without first defining the harm. Marc Rotenberg responded that privacy harms are almost never financial.
  • Several panelists emphasized that privacy can be highly (and appropriately) subjective. One cited an example from a balding friend of his, "I don’t care if anyone knows that I use Rogaine, but my 70-year-old grandmother would."
  • Fred Cate of the Center for Applied Cybersecurity Research emphasized that the Notice and Consent model is flawed because some activities should not be consentable. For example, one may not "consent" to be served fraudulent or misleading advertising. Likewise, some uses of personal information should be prohibited and non-consentable. Most importantly, Notice and Choice are only tools, not the goal of privacy.
  • After Panel 5 was done, Bureau of Consumer Protection Director David C. Vladeck said the FTC would investigate whether it is better to give consumers notice of how their personal information may be used (1) at the time of collection, or (2) at the time of use.
  • David C. Vladeck also said that the data broker industry warranted FTC attention because it is "largely invisible to the consumer."

More highlights on the other sessions to come.


My Thoughts About Privacy Commons

I spend most of my free time working on Privacy Commons, and so I was excited to see Christopher’s post and critique on the subject. Thanks as usual, Christopher, for your thought-provoking questions and observations. Likewise, Aza, CUPS, and Ralf Bendrath. Great work—each of you. I want to pick each of your brains sometime. I also want to apologize in advance for any incomplete sentences or thoughts. This is a slapped-up post.

Some Problems With Privacy Policies

As Christopher and many others (myself included) have pointed out, the problems with privacy policies are myriad. Here are a few:

  • Inaccessible or Unintelligible. Many privacy policies are not easily understood or even physically accessible; they are so complicated and wrapped in legalese that they are “nigh useless” to the average consumer.
  • Complicated Solution. Unless we’re careful, a Privacy Commons may end up equally or more complicated than the status quo.
  • Non-Standard. Privacy Policies are not standardized, making it impossible to compare apples-to-apples.
  • Incomplete. They often fail to address important privacy issues or fail to consider all potential parties.
  • Unsophisticated. Many boilerplate privacy policies demonstrate a fundamental lack of understanding of how privacy policies translate to privacy and business practices. Some simply don’t address the most salient issues, which may be unique to their industry. Consequently, many of the policies never translate to practice.
  • Treated as Only Legal Documents. Privacy policies are often treated as “compliance” documents and relegated to the legal department. Consequently, many fail to address or actually contradict field practices.
  • Privacy Waiver. Many privacy policies waive, rather than confer, privacy rights. The medical industry is extremely efficient at this practice.
  • Technology-Dependent. Privacy policies which strictly enumerate technologies quickly become outdated in the face of emerging technologies.
  • Non-Binding. Most importantly, US courts have consistently interpreted privacy policies to be non-binding notices rather than contracts. As a result, privacy policies generally create no enforceable rights or enforceable expectations of privacy. In this sense, privacy policies can create a false expectation of confidentiality, privacy, or even fiduciary responsibility.

Some Assumptions About Privacy Policies

Based on my experience in technology, advocacy, and the law, I want to air some of my basic assumptions about Privacy Policies. Of course, I invite challenges to these assumptions:

  1. Mitigate Liability. Privacy is the subject of dozens of laws and regulations. At present, the primary business case for developing, maintaining, and conforming to a privacy policy is to mitigate liability.
  2. Inform Data Subjects. A privacy policy should inform data subjects, which include consumers, employees, or any individual about whom information is collected, stored, or aggregated.
  3. Empower Data Subjects. Mere information is not enough. A privacy policy which produces information overload without actionable intelligence is counter-productive.
  4. Articulate Privacy Practices. For the benefit of both data subjects and the data stewards who must execute the privacy policy, it must explain and reflect real business practices.
  5. People Don’t Read. Anything more than about two paragraphs will never be read. That’s why high-level iconography is so appealing (and achievable).
  6. Must Be Easy to Understand. Because people don’t read, fewer words and easy-to-grasp iconography are better.
  7. Short Policies Are Inherently Incomplete. Two paragraphs and pretty pictures may be sufficient to inform consumers on the portions of the privacy policy they find most important, but will always be incomplete. More on this below.
  8. Adoption & Enforcement. A Privacy Commons must be optimized for adoption rather than enforcement, simply because, despite regulation in this area by the federal government, the states, and the FTC, a Privacy Commons must be market-driven to be successful.
  9. Sector-Specific. Different sectors and activities collect different sets of personal information and are regulated differently. To ensure that privacy policies are relevant, they must be tailored to specific activities.
  10. Living Documents. A privacy policy which was correct six months ago may not be correct today.
  11. Privacy Policies are Complex. Deal with it. Privacy Policies are complex, just like Creative Commons or the Telephone. More on that below.
  12. Business Documents. Privacy policies are business documents with legal, practical, and business ramifications for corporations, their agents and employees, and data subjects.


Thinkers like Christopher Parsons worry that a Privacy Commons will be unnecessarily complex. Non-attorneys are often (justifiably) baffled at why lawyers take 3,000 words to say what can be said in 300 and a handshake. It turns out that a simple handshake is not as simple as most people think. Behind each handshake there is a wide range of assumptions which are not as standard as one might believe. Many (if not most) disputes arise when there is a misunderstanding about an unspoken assumption—the meaning of a word, or silence on a particular issue. That’s why it takes lawyers so many words to say something so simple; simple things are not as simple as we thought.

To demonstrate this point, we need look no further than Creative Commons. While the human-readable version of the “Attribution Non-Commercial Share Alike” Creative Commons license consists of 5 images and 286 words, the legal version contains 3,384 words. Clearly the unnecessary work of a verbose lawyer who needed to justify his existence, right?

Not so fast. The full Attribution Non-Commercial Share Alike license covers a whole bunch of other stuff that consumers don’t usually take time to think about, unless of course there is a dispute. It’s only at that point that we’re glad we included it. The legalese version covers essential topics like media and language translation, public performance, DRM, collections of works, waiver of compulsory license fees, preservation of moral rights, representations and warranties, limitation on author’s liability, termination, severability, waiver, and entire agreement, just to name a few. Consumers don’t (and shouldn’t) think about this kind of stuff when they proverbially “shake hands” with a licensee. Creative Commons is simple on the surface, but look under the hood and you’ll see the complexity necessary to create the elegance that most people associate with the CC licenses. Saying that the legalese version of a Creative Commons License (or Privacy Commons Policy) is a “necessary evil” is incorrect and misses the point. It’s not evil at all; it’s just necessary.

It’s like a telephone—an elegant piece of equipment which is exceedingly easy to use. The end-user only cares about a few things: Connectivity, line quality, cost, and accessibility. Yet the infrastructure and technology supporting telephony and networking is extremely robust and complex. Consumers pay the telcos to worry about all of the other stuff so they can focus on the four or five things that consumers care about. The millions of miles of copper, routers, substations and central offices aren’t a “necessary evil,” they’re just necessary.

Some Conclusions About Privacy Policies

We’re just going to have to deal with the fact that privacy policies are complex, and will continue to be complex. The best solution (as I see it) is to do three things:

  • Require Thoroughness. A Privacy Commons-compliant policy must be thorough.
  • Identify Cultural Notions of Privacy. Identify culturally important notions of privacy, and embody them in easy-to-understand iconography. Christopher Parsons suggests these notions might center on Data Collection, Data Sharing, Data Identification, Data Tracking, Data Deletion, and Aggregation, which I think is a good start. And Ralf Bendrath offers these excellent icons, which are more elegant than any I’ve seen.
  • Embody the Cultural Notions of Privacy in Iconography. Then let the legalese version fill in the (necessary) gaps.

A privacy policy which conforms to Privacy Commons requirements will be complete, informative, easy to understand, and easy to adopt. Like Creative Commons, Privacy Commons seeks to identify common cultural notions of privacy, and embody them in easy-to-understand policy frameworks, with simple high-level iconography.
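To make the layered approach concrete, here is a rough, purely hypothetical sketch (in Python, chosen only for illustration) of what a machine-readable summary layer might look like, sitting on top of the full legal policy the same way Creative Commons’ "by-nc-sa" shorthand sits on top of its 3,384-word license. Privacy Commons has not defined any such schema; every field name and value below is invented.

    # Hypothetical sketch only: Privacy Commons defines no such schema. The field
    # names simply follow the high-level notions suggested above (collection,
    # sharing, identification, tracking, deletion, aggregation). Each field would
    # map to one icon; full_policy_url points to the legal text that fills the gaps.
    from dataclasses import dataclass

    @dataclass
    class PrivacySummary:
        collection: str        # e.g. "minimal" or "extensive"
        sharing: str           # e.g. "none", "partners", "third-parties"
        identification: str    # e.g. "anonymous", "pseudonymous", "identified"
        tracking: str          # e.g. "none", "first-party", "cross-site"
        deletion: str          # e.g. "on-request", "after-90-days", "never"
        aggregation: str       # e.g. "none", "combined-with-external-sources"
        full_policy_url: str   # the long legal document the icons summarize

        def shorthand(self) -> str:
            """Compress the summary into a CC-style code, e.g. for a badge label."""
            return "/".join([
                f"col-{self.collection}",
                f"shr-{self.sharing}",
                f"id-{self.identification}",
                f"trk-{self.tracking}",
                f"del-{self.deletion}",
                f"agg-{self.aggregation}",
            ])

    # Example: a small site that keeps little data and shares none of it.
    example = PrivacySummary(
        collection="minimal",
        sharing="none",
        identification="pseudonymous",
        tracking="first-party",
        deletion="on-request",
        aggregation="none",
        full_policy_url="https://example.com/privacy-legal",
    )
    print(example.shorthand())  # col-minimal/shr-none/id-pseudonymous/...

The particular fields don’t matter; the point is the division of labor: a handful of icon-friendly answers for the data subject, plus a pointer to the complete legal policy that covers everything else.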

Note: I usually blog on securitycatalyst.com and jeffreyneu.com, but this post doesn’t fit very well on either.


NJ Supreme Court: Attorney-Client Privilege in Personal Email at Work

Note: This article originally appeared on the J.C. Neu & Associates Blog

Yesterday the New Jersey Supreme Court heard arguments in the Stengart v. Loving Care Agency, Inc. case. The issue is whether the New Jersey attorney-client privilege is preserved when an employee e-mails her attorney from a personal email account on a company computer.

The first reaction from most lawyers is, "yikes, I hope so."

Marina Stengart was a senior employee at Loving Care, which provides home care services for children and adults. Among other things, Loving Care’s employee handbook states that “Email and voice mail messages, internet use and communication, and computer files are considered part of the company’s business and client records. Such communications are not to be considered private or personal to any individual employee.” Stengart was issued a company laptop, on which she occasionally accessed her personal Yahoo account. She resigned in December 2007 and shortly thereafter filed suit against Loving Care, alleging constructive discharge due to sexual harassment and ethnic discrimination.

In April 2008, Loving Care sent an image of her laptop hard drive to a data recovery company, which recovered at least one personal Yahoo email between Stengart and her attorney, presumably from a recovered browser cache. This prompted Stengart to assert attorney-client privilege, demanding that all attorney communications be returned or destroyed. The company balked, arguing in essence that Stengart had waived the privilege by using a company computer.

The trial court found in favor of the employer, but the appellate court reversed.

If I were to play armchair quarterback for a second, I think that the New Jersey Supreme Court will probably find in favor of Stengart as a substantive matter, but the case raises several issues of legal, policy, and practical significance, with no apparent easy answers.

In general, employees have a diminished (i.e., nearly zero) expectation of privacy on an employer’s network, especially when the employer has put the employee on notice of that fact. The trial court merely extended this well-established principle to attorney-client communications. After all, an employer must be able to control, protect, and secure its network against a range of threats.

On the other hand, most employers allow company computers to be used for personal reasons. It seems to be bad public policy that an employee would waive the attorney-client privilege simply because she uses a browser on her company computer during her lunch break, rather than a home browser. This is especially true if she happens to e-mail her lawyer about an action against the employer. It seems absurd that a distinction so technical should allow the employer to "rummage through and retain the employee’s e-mails to her attorney," as the appellate court put it.

But if an employee does enjoy some expectation of privacy in personal communications over a company network, how much, and how does an employer write a policy to manage it? Does an employee enjoy the same expectation of privacy for personal email transferred via POP3 or IMAP into a local copy of Outlook on a company machine as for an email recovered from an HTTP browser cache? Does the employer have a duty not to attempt to recover deleted personal emails? Are employers allowed to snoop unless a communication appears privileged? I don’t have a good answer, and it will be interesting to see what answer the court comes up with.

Surely an employee cannot enjoy an unqualified expectation of privacy by simply using non-company communications, because employers still have an interest in making sure that employees do not use personal accounts to transfer trade secrets, compete against the company, or download a virus.

We’ll keep an eye on this one.
