Why You Have Something to Hide

Note: This article originally appeared on The Security Catalyst Blog.

If you have nothing to hide, why do you need privacy? This question, famously attributed to the McCarthy era, has gained currency again in this era of terrorism and national security. The question implies that privacy is a form of dishonesty, that the things people want to hide are the very things others should know about.

I admit that I bristle every time I hear someone say, “You have nothing to worry about if you have nothing to hide.” Baloney. I have everything to hide! When someone says, “I have nothing to hide,” it’s simply not true. What he really means is, “I have nothing to be ashamed of,” which may be true. But shame is only one, limited reason for confidentiality. Confidentiality is not an admission of guilt.

I have much to hide, for one simple reason. I cannot trust people to act reasonably or responsibly when they are in possession of certain facts about me, even if I am not ashamed of those facts. For example, I keep my social security number private from a would-be criminal, because I can’t trust that he’ll act responsibly with the information. I’m certainly not ashamed of my SSN. Studies have shown that cancer patients lose their jobs at five times the rate of other employees, and employers tend to overestimate cancer patients’ fatigue. Cancer patients need privacy to avoid unreasonable and irresponsible employment decisions. Cancer patients aren’t ashamed of their medical status; they just need to keep their jobs.

A person may share intimate secrets with an ecclesiastical leader that they would keep private from parents, because they fear the parents may not act reasonably or rationally when presented with the same information. During World War II, the government acted unreasonably and irresponsibly with Census data about the location of Japanese-American citizens. Privacy from government entities is paramount.

In addition, can you imagine how much damage you would impose on innocent people if you spoke every thought that came into your head? Or if doctors, lawyers, and accountants disclosed everything they knew about you?

The need for privacy is the recognition that most individuals, organizations, or institutions cannot be trusted to act reasonably, responsibly, in the best interest of the person, or in the best interests of society, when in possession of certain types of personal information. Humans are biased. We have limited cognitive and analytical abilities, and never know all of the facts. We are infamously poor judges of character. We change our minds, and come to conflicting conclusions. So, the next time someone asks whether you have something to hide, do not hesitate to say, “Yes, of course I do.”


How to Write an ARRA Breach Notification Letter

Note: This article originally appeared on the Jeffrey Neu Blog.

“We’ve had a breach.” It’s a sentence nobody wants to hear, but when it happens to you, what do you do? If you’re in the healthcare industry, new federal regulations probably require you to write a letter to the victims of the breach, or to do even more. When and how quickly do you have to send a HIPAA/ARRA notification? And what does it have to say?

The American Recovery and Reinvestment Act of 2009 (ARRA) requires HIPAA-covered entities to notify breach victims when protected health information has been disclosed to an unauthorized person. The legislation gives liberal exceptions for good faith and inadvertent disclosure. Redaction or encryption is an absolute defense to a breach.

“Protected Health Information” is any stored or transmitted health information which can be tied to an individual. It may include information not directly related to health, such as a full name, social security number, date of birth, home address, account number, or disability code. The law also requires third-party contractors or “business associates” to report breaches to the covered entity.

When a breach occurs, the covered entity must notify victims and the Secretary of Health and Human Services “without unreasonable delay,” and within 60 days of the discovery of the breach. The covered entity must notify the individual directly if possible (i.e., by mail), and must also post a notice on its website if the breach involves 10 or more victims who are not directly reachable. If the breach involves more than 500 residents of a single state, the covered entity must also notify statewide media.
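The timing and channel rules summarized above can be sketched as a short decision function. This is a hypothetical illustration of this article’s summary (the threshold numbers mirror the text), not legal advice or an implementation of the statute itself:

```python
# Sketch of the notification-channel rules summarized above (illustrative only).
def required_notice_channels(unreachable_victims, max_residents_one_state):
    """Return the set of notice channels the summary above would require."""
    channels = {"direct_mail", "secretary_of_hhs"}   # always required, within 60 days
    if unreachable_victims >= 10:
        channels.add("website_posting")              # substitute notice on the website
    if max_residents_one_state > 500:
        channels.add("statewide_media")              # media notice for large in-state breaches
    return channels

# A breach reaching 600 residents of one state, 12 of them unreachable:
print(sorted(required_notice_channels(12, 600)))
# ['direct_mail', 'secretary_of_hhs', 'statewide_media', 'website_posting']
```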

A breach notification letter must meet differing but complementary legal and economic goals. They include:

  1. Complying with law
  2. Minimizing Losses

Compliance with Law

Complying with the law is straightforward. In addition to the requirements above, the notification must include a brief description of the incident, including the following information:

  • Date of the breach;
  • Date of discovery;
  • Description of the types of protected health information breached;
  • Steps individuals should take to protect themselves from potential harm resulting from the breach;
  • A brief description of the investigation, efforts to minimize losses and prevent future breaches;
  • Contact information for individuals who wish to ask questions or learn more, including a toll-free phone number, e-mail address, website, or postal address.

Repairing your Company’s Image

Avoid the natural tendency to clam up. Of course, the best way to protect your company’s image is to keep bad news out of the public eye. But once the cat’s out of the bag, several studies indicate that more than two-thirds of economic losses arising from a data breach are due to brand diminishment and lost customer trust, rather than litigation or identity theft expenses.

Above all, your company must maintain credibility. Be honest, open, and share enough detail to convince an educated person that you know what you’re talking about, and that you’ve actually fixed the problem. Consider hiring an outside security consultant who can (1) give you genuine feedback on your security practices, and (2) vouch for your credibility when you say that your customers are safe.

Rebuilding Customer Trust

Consider your last trip to the Department of Motor Vehicles. It probably consisted of waiting for hours in multiple serpentine lines without any direction, followed by more waiting, followed by spending money. The best part is riding away in your car when you’re done. Surprisingly, Disneyland and the DMV have a lot in common: Long lines, spending money, and rides. What sets the DMV apart from the happiest place on earth? One important ingredient is Customer Empowerment.

One way the Disney folks empower customers is by posting periodic signs in long lines: “Wait Time: 45 minutes from this point.” Though the sign does not decrease wait time, it informs and empowers customers. And as Disney knows, empowered customers are happy customers. Frustrated, angry customers are far more likely to cause trouble or leave altogether.

The best way to rebuild your customers’ trust is to empower them. Too many breach notifications include the unhelpful statement, “We have no reason to believe that anyone has accessed or misused your information.” The statement is faulty because it does not empower the customer to take action. Also, if the statement isn’t completely true, or if it changes in the future, it may inadvertently induce liability under certain circumstances. Further, these types of statements tend to frustrate rather than empower customers, causing some to conclude that the notification is incomplete or disingenuous.

Instead, consider these options:

  • Say, “Although we have no reason to believe that anyone has accessed or misused your information, if you think your personal information has been misused as a result of this breach, please call 1-800-XXX-XXXX so we can investigate…”
  • Include statistics on typical rates of harm for similar breaches, where possible.
  • Actually investigate the breach.
  • Create a website where customers can get up-to-the-minute updates on the investigation directly from you.

Mitigating Civil Liability

ARRA does not expressly create a private right of action for a HIPAA breach. Other theoretical sources of liability exist, though. For example, an individual may be able to rely upon a notification statute as the basis for a suit alleging negligence per se, where the breach of the duty to notify causes proximate harm to the plaintiff. Next, failure to correct statements (such as privacy policies) which have become false or misleading in light of new events, may create a tortious cause of action if the company fails to warn customers about foreseeable risks to personal information.

In contrast, most breaches are not likely to create privacy liability. Privacy tort actions usually require the breached information to cause extreme emotional distress, or a dilution of the property value of reputation or prestige. In addition, most courts have consistently declined to require companies to pay for credit monitoring services unless:

  1. The person has become an actual victim of identity theft,
  2. The person has found the thief,
  3. The person can prove that the thief’s copy of their SSN or other personal information came from the breaching entity, and
  4. The person can prove that the entity had a legal obligation to keep that information private.

Instead, it’s important to remember that businesses stand to lose more money from brand diminishment and lost customer trust than from litigation.


Stimulus Package Federalizes Health Information Breach Notifications

Note: This article was originally posted on JeffreyNeu.com.

Streamlining medical records has been a recurring theme of the Obama administration. Tucked away in the pending economic stimulus legislation, known as the American Recovery and Reinvestment Act (ARRA), is a provision which would create a breach notification requirement for health information breaches.

Starting in Subtitle D, ARRA takes an unprecedented foray into federalizing data breach notifications. Although ARRA regulates breaches of health information, this legislation will no doubt be front and center of future debates about creating a Federal Breach Notification Law.

Synopsis

Here is a quick analysis: ARRA mirrors most state breach notification laws, in that it requires “covered entities” (i.e., Health Plans, Health Care Providers, and Health Care Clearinghouses) to notify each individual if their “unsecured protected health information has been, or is reasonably believed by the covered entity to have been, accessed, acquired, or disclosed as a result of [a] breach.” Business Associates, or subcontractors, must alert the Health Care Provider of a breach. The statute also places additional limits on how health information can be sold and shared.
The statute dramatically broadens the ambiguous state-law concept of “data owners,” and applies to any HIPAA-covered entity that “accesses, maintains, retains, modifies, records, stores, destroys, or otherwise holds, uses, or discloses unsecured protected health information.”

As expected, the Federal law takes a lowest-common-denominator approach to duties. For example, although notifications must be made “without unreasonable delay,” the statute allows up to 60 calendar days to comply. This is substantially longer than the longest state requirement, which requires notification within 45 days.
Each state notification law requires direct (i.e., mail) notification to affected individuals unless the person can’t be found, and allows “Substitute Notice” in cases of large breaches. “Substitute Notice” usually comprises posting an announcement on the organization’s website and notifying the media. Some states do not permit Substitute Notice unless the breach is extremely large (250,000+ in some cases). But ARRA allows substitute notice if the breach involves just 500 people in a single state.

The statute also reaches well beyond traditional “covered entities” to any service provider or vendor of personal health records. Presumably, this would include data warehouses like Google or Microsoft, each of which has created, or announced plans to create, online consumer health-record warehouses. However, these vendors need only report the breach to the FTC, which will treat it as a deceptive trade practice. Individuals should not expect a letter from Google or Microsoft if their health care records are breached.

On one hand, this federal legislation will plug holes in several states’ statutes by regulating health information. Arizona, California, Hawaii, Michigan, Oregon, and Rhode Island, for example, regulate health care providers and insurers differently from other companies, and may even completely exempt them from notification requirements.

This bill will no doubt spur the national discussion about breach notification laws. But because it mimics existing state laws, the bill comes up short. Breach Notification Laws were a step in the right direction when California passed the first one almost seven years ago. But since that time, they have displayed several shortcomings, which I critique here. Instead of fixing these problems, ARRA will exacerbate many of them.


8 Problems and 9 Solutions to College Information Security

This article originally appeared on the Security Catalyst Blog.

Colleges and universities store employment data, financial records, transcripts, credit histories, medical histories, contact information, social security numbers and other types of personal information. Although higher-education institutions should be forums where information and knowledge are easily exchanged, “sometimes the free flow of information is unintentional.” Here are eight policies and behaviors that put personal information at risk:

  1. Administrative Decentralization
  2. Naive Office Culture
  3. Unprotected “Old” Data
  4. Shadow Systems
  5. Unregulated Servers
  6. Unsophisticated Privacy Policies
  7. Improper Use of the SSN
  8. Unsanitized Hard Drives

Administrative Decentralization

In a university setting, each college, each department, and often each professor operates nearly autonomously. In an environment where knowledge must flow freely, decentralization is a must. However, it means that new centralized policies to address information security are difficult to implement.

Naive Office Culture

A closely related risk factor is office culture. Staff turnover makes training an ongoing struggle, despite strict policies governing information control. Accidental information leaks can occur, even in the most secure IT environment. In addition, all office cultures resist changing any process, no matter how inefficient. In one example, I called my law school to discuss financial aid. After identifying myself by only my last name, the staff member automatically read my social security number over the phone.

Unprotected “Old” Data

Colleges do a pretty good job of guarding current personal information, but fail to protect older information, which is especially risky if the old data includes social security numbers.

Almost every week a faculty member backs up an old hard drive to his personal web space, unaware that the hard drive contains legacy student grades and social security numbers. Occasionally the professor is aware of the information but mistakenly believes that his university-provided Web space is not available to the public. Often the data sit on the institutional server for up to five years undetected and forgotten—until the information turns up on Google.

Shadow Systems

“Shadow Systems” are copies of personal information from the core system which professors, colleges, departments, and even student organizations maintain independently. Shadow systems can be sophisticated databases under high security or simple Excel spreadsheets on personal laptops. They multiply at an alarming rate because faculty members with administrative access can create their own databases at any time.

Thus, even though a small army of information-technology professionals may guard a college’s core systems, the security perimeter extends much further. And despite strict policies governing information control, employee turnover makes training about privacy and security issues a continual struggle.

Unregulated Servers

Faculty members and third-party vendors also set up their own unregulated servers outside university firewalls, often for legitimate academic use. Those servers are particularly vulnerable to hackers and accidental online exposure. In one security audit, a private university uncovered 250 unauthorized servers connected to its public internet network, each containing sensitive student information.

Unsophisticated Privacy Policies

Colleges’ privacy policies often demonstrate a basic lack of understanding of the law and, more importantly, how the institution carries out the law through internal processes. Many policies basically say nothing more than “We follow the law,” without explaining what the law is or how they follow it. Even worse, some simply say, in essence, “Trust us, we’ll be good.”

Many institutions’ privacy policies also erroneously mimic commercial policies, which are narrowly tailored to cover only information collected online. Those policies are deficient in a college setting because just a small fraction of personal information that colleges maintain is collected online.

Further, a single institution may have dozens or hundreds of separate privacy policies, each dealing with a different, and incomplete, set of issues. For example, at some highly decentralized institutions, each college, department, and even some facilities like student unions have their own privacy policies. While privacy policies should reflect the practices of each group, inconsistent policies can create confusion among staff members who must explain or carry them out.

Improper Use of the SSN

Even though many colleges no longer use social security numbers to identify students, they once did. Those old records sit like land mines on old servers. In addition, some universities print them on academic transcripts and official documents. Even though the American Association of Collegiate Registrars and Admissions Officers recommends printing the social security number on transcripts, my January 2007 study indicates that, fortunately, most don’t.

Unsanitized Hard Drives

Deleted files remain almost unchanged on the hard drive until the data are overwritten or the drive is physically destroyed. Once unsanitized hard drives are re-sold, sensitive personal and corporate information can be easily retrieved. Though most universities have a sanitization protocol when retiring old hard drives, enforcing the policy can be challenging.

Solutions

College administrators should consider the following:

  • Regularly scan institutional networks for sensitive information, such as social security numbers, grades, and financial information. Use a combination of public search engines and internal text- and file-scanning software.
  • Automatically retire “old” data on institutional servers but allow faculty members to un-retire old data they still use. Forgotten information is dangerous information.
  • Establish a “radioactive date,” which is when your institution last used social security numbers as an identifier. Files last modified before this date should be presumed dangerous.
  • Create permissions-based access to core systems. Sensitive personal information should be available to faculty members and departments only on a need-to-know basis.
  • Establish a data-retention-and-access policy by balancing threat, benefits and risks of maintaining the data.
  • Coordinate interdepartmental privacy and security practices with a special committee of information security professionals.
  • Update your privacy policy to reflect all privacy issues arising in a university setting. Explain privacy rights and practices that protect offline employment information and sensitive student records. Also explain work-flow protections (for example, “only director-level employees have access to social security numbers”) and technical practices (for example, “employee data is stored on encrypted hard drives”). Privacy policies should deal with more than just cookies and Web forms.
  • Eliminate social security numbers from official records where possible, or establish a policy whereby students can opt to omit their numbers from transcripts or other records.
  • Physically destroy all old hard drives.
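Several of these suggestions, such as the network scan and the “radioactive date,” are easy to prototype. Here is a minimal sketch in Python; the SSN pattern and the cutoff date are assumptions chosen for illustration, and a production scanner would need to handle more formats and reduce false positives:

```python
import re
from datetime import date
from pathlib import Path

# Illustrative sketch of the scanning and "radioactive date" ideas above.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # common XXX-XX-XXXX form only
RADIOACTIVE_DATE = date(2005, 1, 1)  # hypothetical: last date SSNs were used as identifiers

def scan_file(path: Path):
    """Flag a file if it contains SSN-like strings or predates the radioactive date."""
    reasons = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return reasons  # unreadable files are skipped, not flagged
    if SSN_PATTERN.search(text):
        reasons.append("possible SSN")
    if date.fromtimestamp(path.stat().st_mtime) < RADIOACTIVE_DATE:
        reasons.append("predates radioactive date")
    return reasons
```

Running `scan_file` across a departmental file share would surface exactly the forgotten legacy files described above: anything matching the SSN pattern, or anything last touched before the institution stopped using SSNs as identifiers, is presumed dangerous until reviewed.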

Institutions of higher education must promote the free exchange of ideas while protecting sensitive personal information. Although the academic environment can seem at odds with information security, appropriate practices and procedures can balance information freedom and personal privacy.

Aaron Titus is the Privacy Director for the Liberty Coalition, and runs National ID Watch. A version of this article originally appeared in the October 24, 2008 edition of the Chronicle of Higher Education, and is republished here by arrangement.


Cost of Data Breaches Rise

Note: This post originally appeared on JeffreyNeu.com.
ZDNet reports that the cost of a data breach has gone up 2.5% from 2007, according to research published by the Ponemon Institute.

Comparing data from 43 companies (including several repeat offenders), the study found that companies lose just over $200 per compromised record. Significantly, lost business due to a lack of customer trust and brand diminishment comprises 69% of the cost.
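To put those figures in perspective, here is the back-of-the-envelope arithmetic for a hypothetical 10,000-record breach (the breach size is my illustration; the per-record cost and 69% share come from the study cited above):

```python
# Rough math using the Ponemon figures cited above; breach size is hypothetical.
records = 10_000
cost_per_record = 200                 # "just over $200 per compromised record"
total_cost = records * cost_per_record
lost_business = total_cost * 0.69     # 69% from lost trust and brand diminishment

print(f"total: ${total_cost:,}")                  # total: $2,000,000
print(f"lost business: ${lost_business:,.0f}")    # lost business: $1,380,000
```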

Forget about the cost of postage… businesses stand to lose much more in sales from customers who read, “We regret to inform you…”


What do you Call a “Data Self?”

On page 2 of his book, The Digital Person, Professor Daniel Solove posits that each individual is comprised of “an electronic collage of bits of information, a digital person composed in the collective computer networks of the world.” In other words, a person may now be defined as just a few pieces of data.

A few months ago I argued that this electronic collage of information comprises a “Data Self.” It was my rather ungraceful attempt to articulate: “Hey! You know all that ‘stuff’ out there? That’s not ‘stuff’ out ‘there.’ That’s you.” “Me?” “Yeah, YOU.”

Thanks to an enlightening discussion with my very astute friend, Greg Ceton, we were able to identify other possibilities, each of which has its own set of problems.

  • Data Self: Nobody thinks of themselves as a “Self.”
  • Digital Identity: Although most people now understand “Identity Theft,” nobody thinks of themselves as an “Identity.”
  • Digital Self: Same problem, and information doesn’t have to be digital.
  • Information Self: Even more abstract than “Data Self.”
  • Digital Clone: Better, because Clone connotes both “me” and “other” simultaneously. However, it’s pretty sci-fi.
  • Digital Me/You: “You/Me” is better than “Self,” but the concept is still too abstract to immediately grasp.
  • Digital/Data Double: Although slightly easier to grasp, the “Double,” as in “stunt double” distances a person from their Data Double. After all, the whole purpose of having a stunt double is so that you don’t get hurt.
  • Digital Twin: Same strengths and weaknesses as above. A “twin” is not “you.”
  • Digital Alter-Ego: A subtle improvement, but still suffers from the problem of detachment.
  • Your Digital Copy: Ditto.
  • Shadow Self: The strength of this phrase is that you can never get rid of your shadow. But the analogy breaks down because (1) you always know exactly where your shadow is, (2) your shadow can’t act on its own, and (3) your shadow can’t harm or be harmed.
  • Identity Hostage: I think this hits both points: a data alter-ego whose actions affect you. But as @caparsons put it, the term is loaded and implies that the only thing a digital identity is good for is stealing.

I’ll add more as I think of them. I’d appreciate your thoughts here, or on Twitter.


The Top 5 Reasons You Won’t Hear About a Breach

Note: This article originally appeared on the Security Catalyst Blog.

I have personally discovered more than a hundred data breaches by schools, companies, doctors’ offices, tax professionals, government agencies, and individuals over the past several years. Unfortunately, very few of the breaching entities proactively announce an average breach, regardless of the law. Here are the most common reasons:

  1. Failure to Detect
  2. Market Devaluation of Privacy
  3. Poor Communication
  4. Ignorance of Law
  5. Notification Difficulty

Failure to Detect

Many organizations do not have proper diagnostic processes to detect breaches when they occur, and many do not keep proper logs. Thus, when a press release reads, “we have no evidence that the sensitive information was accessed…” it may simply mean that they did not keep any records, and thus literally have “no evidence.”

Market Devaluation of Privacy

The market does not value privacy. Ensuring privacy is expensive, but the costs of violating privacy are small. Doing a simple cost/benefit analysis, organizations often come to the logical conclusion that the PR ‘costs’ of announcing a breach (especially when no hard proof of access exists) far outweigh any benefits.

In addition, most data breach notification laws only require an organization to say, “Oops.” If the organization is feeling nice, they’ll say, “Oops, sorry.” And if they’re feeling gregarious, they’ll say, “Oops, sorry, and here’s a free report of how much damage has been done to your credit. You’ll still be at risk for years to come, though, so stay vigilant. Good luck.” But they have no responsibility to help you recover from financial identity theft, medical identity theft, or criminal identity theft. Merely getting a credit report does not protect against any of these risks.

Poor Communication

A cruel irony of data breaches is that the only source of information about a breach is filtered, packaged, and presented by the organization with the most incentive to skew the details. The breaching entity’s concern is to minimize perceived liability; therefore it is in their best interest to restrict the flow of information about the breach as far as possible.

I have read dozens of breach announcements, and they almost write themselves: “On X date, we discovered that some personal information was compromised. We acted immediately to make the information unavailable, and we have no evidence that anyone accessed it for inappropriate reasons. You should get a credit report as a precaution.” Keeping a victim in the dark about the details protects only the breaching entity.

Ignorance of Law

Even in states where breach notification laws exist, smaller organizations often assume that the law only applies in limited circumstances, to larger companies, or to particularly large breaches.

Notification Difficulty

For the most part, organizations which choose not to report breaches get away with it. But even under good circumstances, 100% victim notification is impossible. People move, phone numbers change, or addresses are incomplete or not on file. Letters that do arrive at the proper address may be ignored. Multiple contact strategies should be applied over long periods of time to reasonably ensure that most victims are notified.

I have suggested solutions to some of these problems here and with the creation of National ID Watch.

Aaron Titus is the Privacy Director for the Liberty Coalition, and runs National ID Watch.


Is Personal Information Property?

Note: This article originally appeared on the Security Catalyst Blog.

A colleague recently asked me, “When did my personal information become someone’s property?” It’s a question with a vital answer, because if my personal information belongs to someone else, then they can do whatever they want with it. If data is property, then they can buy, sell, license, or give away my identity without my consent. This puts me at risk, because I must rely on the good will of a third party to keep my identity secure.

But if personal information really were property, then I should be able to permanently sell, or “alienate,” it. But unfortunately, I can’t sell personal information like a car. If I sell my car and the new owner paints it purple or runs it into a tree, it’s not my problem. But we all know that if I sell my personal information and the new owner “crashes” my identity, I suffer. Unlike every other form of property, personal information is inherently inalienable. Unless you enter the witness protection program, you’re stuck with your identity no matter how many times you sell it, and no matter how many times it is crashed.

Data is Property

Data behaves like property because (1) data has value, like property; (2) data is fungible, like property; and (3) data is alienable, like property. For most types of information (e.g., trade secrets, copyrightable or patentable information), Intellectual Property law treats data like property with no problems, because trade secrets and patents are valuable, fungible, and alienable.

However, the analogy between data and property breaks down when we get to personal information, primarily because personal information is NOT alienable. Consequently, Intellectual Property law does not generally treat personal information as property.1 Most personal information, such as names, addresses, phone numbers, and social security numbers are facts. Facts are not copyrightable.2 You can’t patent personal information,3 and it certainly isn’t a trade secret.4 In short, nobody “owns” my name, including myself. And if someone could “own” my name, it would most logically be my parents, since they created it. But my mom can’t copyright my date of birth, and the government can’t patent my social security number. My phone number is not an AT&T trade secret, nor is it mine.

Personal information is valuable and fungible. Entire multi-billion dollar industries thrive on the sale and exchange of personal information. United States election law requires that candidates disclose the value of all in-kind campaign donations, including databases of potential voters.5 Other federal and state statutes, such as the Gramm-Leach-Bliley Act and the Sarbanes-Oxley Act, require corporations to account for the fair market value of assets, which may include customer data. And personal information is extremely fungible, as information in databases can be shared, sold, licensed, stolen, or lost with remarkable efficiency.6

Because personal information is valuable and fungible, it is often treated like property. Tort law implies that some forms of privacy come from a trademark-like ownership of one’s name and likeness.7 Even breach notification laws seem to assert that companies which collect personal information “own” it.8

But that isn’t the whole story. Unlike every other form of property, personal information (such as bank account numbers, credit scores, social security numbers, or police reports) is not alienable, even if a third party creates it. And unfortunately, you don’t have any constitutional right of privacy when you give your personal data to a third party.9

Because personal information is not alienable, it is sufficiently different from traditional “property” that IP law does not provide a helpful framework for managing it.

Self is Data

In the Information Age, you are not much more than “an electronic collage of bits of information, a digital person composed in the collective computer networks of the world.”10 In other words, a person may now be defined as just a few pieces of data. This data is your Data Self. Your Data Self is a collection of your credit report, Facebook page, Google results, bank account numbers, archived e-mails, and an endless parade of other data. Your Data Self is a digital alter-ego, with its own personality, dispositions, fallacies and mortality. Your Data Self also has the power to enter contracts, grant access to your financial assets, have surgery, commit crimes, or be kidnapped.

When your Data Self belongs to someone else, it can be forced to act against your will. If someone makes your Data Self sign a contract, you are bound by it. If your Data Self is convicted of a crime, you can go to jail. If someone forces your Data Self to take out a loan, you must repay it. If your Data Self has an operation, you may no longer qualify for medical insurance. If your Data Self is abused, stolen, sold, manipulated, or forced to act against its will, you suffer the consequences. In this sense, “Identity Theft” might be more descriptively called “Digital Kidnapping”: someone “kidnaps” your Data Self, makes it do something bad, and leaves you to take the blame.

Self is Property

In my view, this is a startling development. As long as my Data Self is a third party’s possession, that party can also treat me like property. In other words, if Self is Data and Data is Property, then Self is Property. The now-popular crime of Identity Theft is the most visible consequence of this trend. In fact, “Identity Theft” epitomizes the problem with treating personal information as property: The very term recognizes that you have an alter-ego digital “identity” or Data Self. It also acknowledges that your Data Self can be stolen and abused, like property.

Fortunately, the 13th Amendment outlawed human trafficking, and human muscle, once required for agriculture and labor, does not command the same economic premium in a post-industrial society. Instead, a person’s economic value now lies in his access to financial assets and credit. Our Data Selves are easy to coerce, and people are now worth more in bytes than in flesh and blood. As long as Data Selves are digital property, new crimes similar to identity theft will continue to arise, and our society runs the sinister risk of a new form of human trafficking: a type of Digital Slavery, where third parties can own, abuse, and force Data Selves to act against their will.

Facing the possibility of this new class of crimes, the law should not permit personal information to be treated as property; we cannot afford to go down that path.

Aaron Titus is the Privacy Director for the Liberty Coalition, runs National ID Watch, and welcomes feedback.


Footnotes

1. 19 NO. 7 Intell. Prop. & Tech. L.J. 5, 8.

2. Feist Publications, Inc. v. Rural Telephone Service, 499 U.S. 340, 363-64, 111 S.Ct. 1282, 1297 (1991) (holding that an alphabetized collection of personal facts in a phone book is not copyrightable because 1. facts are not copyrightable, and 2. the phone book lacks minimally creative selection, coordination, and arrangement. “As a statutory matter, 17 U.S.C. § 101 does not afford protection from copying to a collection of facts that are selected, coordinated, and arranged in a way that utterly lacks originality.”).

3. 35 U.S.C.A. §§ 101-102.

4. Facts in a database may qualify for trade secret protection under state law, but only if the information meets stringent requirements and remains secret. 19 NO. 7 Intell. Prop. & Tech. L.J. 5, 8.

5. 2 U.S.C.A. § 431(8)(a).

6. Identity Theft Resource Center, Press Release – 2007 Breach List; Privacy Rights Clearinghouse, A Chronology of Data Breaches.

7. “Tort” law is common- or judge-made law that allows people to sue others for doing bad things. For example, the tort of Appropriation of Name or Likeness is when someone uses a person’s name or picture for financial gain: Rest. 2d Torts § 652C cmt. a (1977) (The Tort of Appropriation of Likeness gives the individual “exclusive use of his own identity, in so far as it is represented by his name or likeness, and in so far as the use may be of benefit to him or to others. Although the protection of his personal feelings against mental distress is an important factor leading to a recognition of the rule, the right created by it is in the nature of a property right, for the exercise of which an exclusive license may be given to a third person, which will entitle the licensee to maintain an action to protect it.”).

8. See, e.g., Cal. Civ. Code § 1798.81.5(a).

9. United States v. Miller, 425 U.S. 435, 443-44 (1976) (holding that bank records have no Fourth Amendment protection and are subject to government subpoena with no infringement of an individual’s rights).

10. Solove, Daniel J., The Digital Person. New York University Press, New York, 2004, p. 2.


In Defense of Breach Notification Laws (sort of)

Note: This article was originally published on the Security Catalyst Blog.

Starting with California’s 2003 law, all but a handful of states have now enacted breach notification laws (BNLs). Though each is subtly different, all notification laws recognize that if your identity, or Data Self, is treated as mere chattel, it is subject to fraud and abuse. These laws require data stewards to notify an individual when his identity has been lost or kidnapped.

Your identity or Data Self is a digital alter ego: a collection of personal facts which has its own life, fallacies, and mortality. Data is Self, but data is also treated like property. If Self is data, and data is property, then Self is property. If your Self is the property of others, then it can be bought, sold, traded, lost, stolen, or damaged like any other form of property. Identity Theft is just that: a person’s Data Self, stolen and abused.

Measures of BNL Success

After five years of experience with breach notification laws, it is essential to ask, “Are they working?” My shorthand answer is “yes, sort of.”

I’ll be the first to admit that breach notifications are noisy, and contain a strong element of political theater. Some contend that notification laws may even be harmful, distracting and confusing consumers into thinking they aren’t at risk if they don’t receive a notice. I agree that as currently written, breach notification laws have several shortcomings. But their success or failure should be measured in several ways:

  1. Decreased Incidence of Identity Theft
  2. Increased Awareness and Identity Control
  3. Decreased Risk Behaviors and Incidence of Breach
  4. Increased Victims’ Rights

1. Decreased Incidence of Identity Theft

Q: Do breach notification laws decrease identity theft?

A: Probably not. Several breach notification laws emphasize the need to protect consumers from identity theft and other misuse of a person’s Data Self. However, researchers Sasha Romanosky, Professor Rahul Telang, and Professor Alessandro Acquisti presented a well-reviewed paper that measured the change in the rate of reported identity thefts before and after data breach laws went on the books. Though drawn from incomplete FTC data, the paper convincingly demonstrates that breach notification laws have a negligible effect on reported identity theft rates. Instead, the authors suggest that a state’s gross domestic product and general fraud rate have a much stronger correlation with ID theft.

2. Increased Awareness and Identity Control

Q: Do breach notification laws increase identity risk awareness? How about consumers’ control over their identities?
A: Yes, to varying degrees. A cruel irony of data breaches is that the responsible organization is the only one that knows exactly what happened, and it has the strongest incentive to hide or skew the details. Many breaches go under- or unreported, regardless of the law. Even well-intentioned organizations issue vague, incomplete, blame-shifting, or liability-reducing press releases that leave victims in the dark. To effectively empower consumers to conduct their own risk analysis, breach notifications must contain the following elements:

  • Who: The class of victims affected by the breach.
  • What: A complete list of exposed information, not just the data elements required by law.
  • Where: The exposing entity’s contact information.
  • How and When: Sufficiently detailed information about how and when the breach occurred.
  • How Much: Total number affected, sensitivity of information exposed, duration of exposure, and distribution method (e.g., stolen laptop, online exposure, or dumpster).
  • What Now: A clear statement of consumers’ legal rights (or lack thereof); concrete actions taken by the organization to fix problems, mitigate risk, or remedy harm; and suggested actions for the victim.
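As a sketch of how those elements could be captured in a structured record, here is a minimal, hypothetical Python example. The class and field names are illustrative assumptions, not drawn from any statute or existing notification system:

```python
from dataclasses import dataclass

@dataclass
class BreachNotice:
    """Hypothetical breach notice covering the six elements above."""
    affected_class: str     # Who: the class of victims affected
    exposed_fields: list    # What: complete list of exposed data types
    contact_info: str       # Where: exposing entity's contact information
    how_and_when: str       # How and When: details of the breach itself
    total_affected: int     # How Much: number of people affected
    duration_days: int      # How Much: duration of the exposure
    distribution: str       # How Much: e.g. "stolen laptop", "online exposure"
    remedies: list          # What Now: actions taken and suggested actions

    def is_actionable(self) -> bool:
        """A notice empowers victims only if every element is present."""
        return all([
            self.affected_class,
            self.exposed_fields,
            self.contact_info,
            self.how_and_when,
            self.total_affected > 0,
            self.duration_days >= 0,
            self.distribution,
            self.remedies,
        ])
```

The point of the `is_actionable` check is the argument made in this article: a notice missing any one element (a vague “some records were exposed” press release, for instance) fails the test and leaves victims unable to assess their own risk.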

Of course, breach notification laws have much more lax reporting requirements than these. And although I agree that the average breach announcement is “noisy,” I think it would be a mischaracterization to label them as nothing more than “noise.” Even the least specific notifications build public awareness. For better or worse, most public awareness of identity risks comes from news bulletins about data breaches. Although none of the announcements may put any particular individual on notice of a personal risk, these “noisy” notifications have a net positive effect of educating the population at large.

3. Decreased Risk Behaviors and Incidence of Breach

Q: Do breach notification laws decrease individual risk behavior?
A: Probably not, but they have the potential to. An effective notification must contain actionable intelligence: intelligence plus a way to act on it. For example, imagine that you are in a life raft in the middle of the ocean, with no hope of immediate rescue. You see bubbles. What do you do? You sink. You were able to gather intelligence, but had no way to act upon it. Intelligence without action breeds inaction.

However, imagine you’re on the same raft, and you see bubbles. But this time you have a patch kit and a hand pump. This time you have actionable intelligence, and you will likely attempt to patch the raft and pump it up.

An alert is only effective when it empowers a person to act, and typical breach announcements usually do nothing to empower individuals. Effective breach notifications require both intelligence and action. If either element is missing (as is often the case), the notification will fail to empower victims, and may even engender apathy.

Some suggest that in the current environment of data insecurity, consumers should be on constant high alert for identity theft, even without notice of a breach. After all, your Data Self is constantly being traded without your knowledge or consent in IT and business environments of questionable repute.

It’s a nice thought, but not very helpful. Being on high alert all the time is essentially the same as not being on alert any of the time.

Q: Do breach notification laws encourage organizations to improve behavior?
A: Probably yes. The Romanosky paper found that notification laws likely encourage businesses to take more stringent security precautions with personal information, because of the economic incentive to avoid breaches. However, the incentives to secure data do not appear to outweigh the market forces that devalue privacy. Both the Privacy Rights Clearinghouse and the OSF Data Loss Database show a steady, perhaps even increasing, number of breach incidents and lost records each year. While part of this increase may be attributable to better reporting, there is no solid indication that data breach incidents are decreasing.

4. Increased Victims’ Rights

Q: Do Breach Notification Laws Create New Rights for Consumers?
A: Absolutely yes. While not a silver bullet to cure all ills, breach notification laws are an important first step toward creating rights for victims of breaches. Before BNLs, nobody had the right to know whether their Data Self had been compromised. Additional legislation will be necessary to address existing and emerging identity threats. Especially as Data Selves are treated as property, our society runs a risk that the unregulated trade of personal information could morph into a new form of digital human trafficking.

Legislative Improvements

Breach notification laws are a first step in regulating the trade of Data Selves. The right information at the right time, given to the right people, coupled with a clear course of action will empower people and catalyze change. Here are six legislative suggestions to effectively protect and empower consumers:

  1. “Stewards,” not “Owners”: Given the tenuous and dangerous legal basis for “owning” personal information, notification laws should replace the concept of “personal information owners” with “personal information stewards.” This change would help sharpen the distinction between Data as Self versus Data as Property, and emphasize that third parties can’t “own” a Data Self. When Self is Data and Data is Property, then we run the risk that Self becomes Property.
  2. Expand Reporting Requirements: Breach notifications should provide actionable intelligence, including who, what, when, how, how much, and “what now?” of each breach.
  3. Standard Measures of Risk: I suggest using Size, Sensitivity, Duration, and Distribution.
  4. Presumptive Loss: In order to successfully sue for a breach, a consumer must (1) become an actual victim of identity theft, (2) find the identity thief, (3) prove that the thief’s copy of his SSN or other personal information came from the breaching entity, and (4) prove that the entity had a legal obligation to keep that information private (a rare duty). This is an unreasonable and often insurmountable burden of proof. Instead, Tennessee has adopted a small presumptive “ascertainable loss” whenever a breach occurs. These nominal damages would recognize harm to reputation, apprehension, emotional distress, and violation of selfhood. They would also help counteract the market’s failure to value privacy.
  5. Require a Data Audit Trail: Stewards of personal information should maintain standard inventory controls on personal information, recording with whom and when the personal information was shared. This data trail would be used for data audits and could help establish causation in the case of a breach.
  6. Automatic Credit Reporting: Consumers should receive an automatic notification of any activity on their credit reports.
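Suggestion 5 above (a data audit trail) can be sketched as a simple log of sharing events. This minimal, hypothetical Python example (all names are illustrative assumptions, not a real inventory-control standard) records with whom and when personal information was shared, and lets an auditor query a time window to help establish causation after a breach:

```python
from datetime import datetime, timezone

class AuditTrail:
    """Hypothetical sketch of a data steward's sharing log."""

    def __init__(self):
        self.entries = []

    def record_share(self, steward, recipient, fields, when=None):
        """Log one sharing event: who shared which fields with whom, and when."""
        self.entries.append({
            "steward": steward,
            "recipient": recipient,
            "fields": list(fields),
            "when": when or datetime.now(timezone.utc),
        })

    def shares_between(self, start, end):
        """Return all sharing events inside a window -- e.g., the period
        during which a breach is believed to have occurred."""
        return [e for e in self.entries if start <= e["when"] <= end]
```

Under this sketch, a victim (or regulator) investigating a breach could narrow the candidate sources of a leaked SSN to the recipients recorded in the relevant window, rather than bearing the near-impossible burden of proof described above.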

Aaron Titus is the Privacy Director for the Liberty Coalition, runs National ID Watch, and welcomes feedback.


Footnotes

1. Cal. Civ. Code §§ 1798.82-84.

2. See, e.g., N.H. Rev. Stat. § 359-C:2.

3. See, e.g., Ga. Code § 10-1-910(4),(7).

4. See, e.g., Cal. Civ. Code § 1798.81.5(a).

5. Tenn. Code § 47-18-2102(1).


Florida State University Prof Posts 33 Students’ SSNs Online

TALLAHASSEE, Florida. The personal information of 66 Florida State University students sat on a public FSU Chemistry Department server for more than five years. Several files included names, 33 Social Security numbers, grades, homework scores, and exam scores. All of the individuals affected by this breach appear to be former students of Dr. Steinbock, an FSU professor.

The Liberty Coalition discovered the files in late January, 2008 and notified the university. FSU quickly removed the files from the server, but they remained available through search engine caches until late March, 2008.

This incident fits a nationwide pattern in which university professors use public university servers to back up sensitive student personal information, either unaware of the sensitivity of the information or unaware that it would be available to the public.

Individuals affected by this exposure should immediately visit www.ssnbreach.org and search for their names, to confirm what types of personal information were exposed.

About SSNBreach.org

Sponsored by the Washington, DC non-profit Liberty Coalition, SSNBreach.org provides hundreds of thousands of free personalized Identity Exposure Reports™ as a public service.

Each Identity Exposure Report (IXR) documents what types of personal information were exposed (such as Social Security Numbers, Birth Dates, Addresses, etc.), without revealing them. Each IXR also details the situation surrounding each exposure, and contact information of those responsible for the breach. Armed with this information, victims can further investigate, take action, or correct harm.

Source: https://www.ssnbreach.org/release.php?g=73
