Highlights From the FTC’s Privacy Roundtable: Part 2

MENTAL HEALTH BENEFITS OF EXERCISE IN TEENS
Exercise offers not only physical health benefits but mental health benefits as well. Regardless of age or fitness level, studies have shown that making time for exercise provides an abundance of mental benefits. For teens, developing healthy brain function and mental health is essential. The health benefits of exercise in teens can reduce the risk of mental illness, stress, poor self-confidence, and memory problems, among other things. An article in The Huffington Post discusses six mental health benefits of exercise in teens and adults.

Reduce stress and increase relaxation. Taking a walk or getting in a workout at the gym is a great way to relieve stress, and stress relief is one of the most common mental health benefits of exercise in teens. Working out can help manage physical and mental stress that may have built up from a negative experience at school or a stressful exam. Exercise can also increase relaxation, benefiting teens who struggle with insomnia or sleep deprivation.
Alleviate anxiety and depression. Exercise releases endorphins, which are natural chemicals in your body that create feelings of happiness. Studies have shown that the health benefits of exercise in teens can significantly improve depression or anxiety. Even just getting 30 minutes of exercise a few times a week can improve overall mood. Exercising with an anxiety disorder can actually help reduce symptoms in teens and allow them to calm down. Moderate-to-high intensity exercises can reduce anxiety sensitivity.
Improve self-confidence. Physical fitness can boost self-esteem and self-image. Exercising, regardless of size or weight, can improve teens' perception of their own self-worth. Exercising outdoors can increase self-esteem even more. Finding an outdoor workout that fits your interests is a great way to meet people and build skills that reinforce self-esteem and self-worth.
Sharpen memory and prevent cognitive decline. Even sporadic physical activity can boost memory and learning, and researchers have linked children’s brain development to their level of physical fitness. As we get older, our brains have a harder time processing and retaining information. Another health benefit of exercise in teens is that it reduces the chances of developing diseases like Alzheimer’s later in life. Working out at a young age boosts chemicals in the brain that protect the areas linked to memory and learning from degeneration.
Help control addiction. Exercise can help in addiction recovery. Exercise can effectively distract drug or alcohol addicts, de-prioritizing cravings. Exercise also helps reboot the body after going through negative effects from alcohol or drug abuse.
Get more done and tap into creativity. Research shows that people who exercise regularly have higher levels of energy and productivity than their more sedentary peers. The health benefits of exercise in teens also extend to creativity: exercising outdoors or interacting with nature during exercise can inspire creative thinking.

No Comments

Highlights From the FTC’s Privacy Roundtable: Part 1

Note: This article originally appeared on the J.C. Neu & Associates Blog

The FTC’s December 7th Privacy Roundtable assembled a Who’s Who of privacy luminaries, academics, advocates, and industry players. This post highlights some of the more interesting comments from the meeting. I also tweeted the event (@aarontitus, #FTC #Privacy or #ftcpriv) and the FTC has posted the webcast if you missed it.  The next Roundtable is scheduled for January 28, 2010 in Berkeley, CA and will also be broadcast online.

The meeting consisted of five panels. This post highlights "Panel 5: Exploring Existing Regulatory Frameworks:"

  • During Session 5, Intuit’s Chief Privacy Officer Barbara Lawler posited that existing regulatory frameworks unfairly place the entire burden on consumers to protect themselves. "Consumers should expect a safe marketplace. They shouldn’t be the ones to police the marketplace," she said.
  • Barbara Lawler also noted that "Data is never really at rest," because it’s moving between data centers and backups in multiple locations throughout the globe. It is therefore incorrect to think of data, especially Cloud data, as being in one place. Instead, "data is in one place and many places at the same time," potentially in multiple jurisdictions.
  • Evan Hendricks of Privacy Times and Marc Rotenberg of EPIC suggested that the current model of "Notice and Consent" has failed to protect consumers, and that the FTC (and legislation in general) should return to well-established Fair Information Practices (FIPs), including a prohibition on "secret databases." Mr. Rotenberg went so far as to conclude that Notice and Choice principles are not a subset of FIPs, but instead "stand in opposition to fair information practices." He also joked that "the best part of Gramm-Leach-Bliley Act is that you get paper notices you can tape on your window and get more privacy."
  • Ira Rubinstein of New York University School of Law proposed that self-regulation is not binary or "monolithic," and that a self-regulatory scheme would be preferable, especially if viewed as a "continuum, based on government intervention." He argued that self-regulation would be especially appropriate in the United States, which has traditionally been very friendly to e-commerce.
  • Michael Donohue of OECD gave an overview of international legal concepts of privacy, generally agreeing with Marc Rotenberg’s observation that "most countries have come to surprisingly similar conclusions about privacy."
  • J. Howard Beales of the GWU School of Business argued in favor of a "harm-based model," because it is impossible to reach the best solution without first defining the harm. Marc Rotenberg responded that privacy harms are almost never financial.
  • Several panelists emphasized that privacy can be highly (and appropriately) subjective. One cited an example from a balding friend of his, "I don’t care if anyone knows that I use Rogaine, but my 70-year-old grandmother would."
  • Fred Cate of the Center for Applied Cybersecurity Research emphasized that the Notice and Consent model is flawed because some activities should not be consentable. For example, one may not "consent" to be served fraudulent or misleading advertising. Likewise, some uses of personal information should be prohibited and non-consentable. Most importantly, Notice and Choice are only tools, not the goal of privacy.
  • After Panel 5 was done, Bureau of Consumer Protection Director David C. Vladeck said the FTC would investigate whether it is better to give consumers notice of how their personal information may be used: (1) at the time of collection, or (2) at the time of use.
  • David C. Vladeck also said that the data broker industry warranted FTC attention because it is "largely invisible to the consumer."

More highlights on the other sessions to come.

No Comments

My Thoughts About Privacy Commons

I spend most of my free time working on Privacy Commons, and so I was excited to see Christopher’s post and critique on the subject. Thanks as usual, Christopher, for your thought-provoking questions and observations. Likewise, Aza, CUPS, and Ralf Bendrath. Great work—each of you. I want to pick each of your brains sometime. I also want to apologize in advance for any incomplete sentences or thoughts. This is a slapped-up post.

Some Problems With Privacy Policies

As Christopher and many others (myself included) have pointed out, the problems with privacy policies are myriad. Here are a few:

  • Inaccessible or Unintelligible. Many privacy policies are not easily understood or even physically accessible; they are so complicated and wrapped in legalese that they are “nigh useless” to the average consumer.
  • Complicated Solution. Unless we’re careful, a Privacy Commons may end up equally or more complicated than the status quo.
  • Non-Standard. Privacy Policies are not standardized, making it impossible to compare apples-to-apples.
  • Incomplete. They often fail to address important privacy issues or fail to consider all potential parties.
  • Unsophisticated. Many boilerplate privacy policies demonstrate a fundamental lack of understanding of how privacy policies translate to privacy and business practices. Some simply don’t address the most salient issues, which may be unique to their industry. Consequently, many of the policies never translate to practice.
  • Treated as Only Legal Documents. Privacy policies are often treated as “compliance” documents and relegated to the legal department. Consequently, many fail to address or actually contradict field practices.
  • Privacy Waiver. Many privacy policies waive, rather than confer, privacy rights. The medical industry is extremely efficient at this practice.
  • Technology-Dependent. Privacy policies which strictly enumerate technologies quickly become outdated in the face of emerging technologies.
  • Non-Binding. Most importantly, US courts have consistently interpreted privacy policies to be non-binding notices, rather than contracts. As a result, privacy policies generally create no enforceable rights or enforceable expectations of privacy. In this sense, privacy policies can create a false expectation of confidentiality, privacy, or even fiduciary responsibility.

Some Assumptions About Privacy Policies

Based on my experience in technology, advocacy, and the law, I want to air some of my basic assumptions about Privacy Policies. Of course, I invite challenges to these assumptions:

  1. Mitigate Liability. Privacy is the subject of dozens of laws and regulations. The present primary business case for developing, maintaining, and conforming to a privacy policy is to mitigate liability.
  2. Inform Data Subjects. Data Subjects include consumers, employees, or any individual about whom information is collected, stored, or aggregated.
  3. Empower Data Subjects. Mere information is not enough. A privacy policy which produces information overload without actionable intelligence is counter-productive.
  4. Articulate Privacy Practices. For the benefit of both data subjects and the data stewards who must execute the privacy policy, it must explain and reflect real business practices.
  5. People Don’t Read. Anything more than about two paragraphs will never be read. That’s why high-level iconography is so appealing (and achievable).
  6. Must Be Easy to Understand. Because people don’t read, fewer words and easy-to-grasp iconography are better.
  7. Short Policies Are Inherently Incomplete. Two paragraphs and pretty pictures may be sufficient to inform consumers on the portions of the privacy policy they find most important, but will always be incomplete. More on this below.
  8. Adoption & Enforcement. A Privacy Commons must be optimized for adoption, rather than enforcement. That’s simply because, despite regulation in the area by the Federal Government, the states, and the FTC, a Privacy Commons must be market-driven to be successful.
  9. Sector-Specific. Different sectors and activities collect different sets of personal information and are regulated differently. To ensure that privacy policies are relevant, they must be tailored to specific activities.
  10. Living Documents. A privacy policy which was correct six months ago may not be correct today.
  11. Privacy Policies are Complex. Deal with it. Privacy Policies are complex, just like Creative Commons or the Telephone. More on that below.
  12. Business Documents. Privacy Policies are business documents with legal, practical, and business ramifications for corporations, their agents and employees, and data subjects.


Thinkers like Christopher Parsons worry that a Privacy Commons will be unnecessarily complex. Non-attorneys are often (justifiably) baffled at why lawyers take 3,000 words to say what can be said in 300 and a handshake. It turns out that a simple handshake is not as simple as most people think. Behind each handshake there is a wide range of assumptions which are not as standard as one might believe. Many (if not most) disputes arise when there is a misunderstanding about an unspoken assumption—the meaning of a word, or silence on a particular issue. That’s why it takes lawyers so many words to say something so simple; simple things are not as simple as we thought.

To demonstrate this point, we need look no further than Creative Commons. While the human-readable version of the “Attribution Non-Commercial Share Alike” creative commons license consists of 5 images and 286 words, the legal version contains 3,384 words. Clearly the unnecessary work of a verbose lawyer who needed to justify his existence, right?

Not so fast. The full Attribution Non-Commercial Share Alike license covers a whole bunch of other stuff that consumers don’t usually take time to think about, unless of course there is a dispute. It’s only at that point that we’re glad we included it. The legalese version covers essential topics like media and language translation, public performance, DRM, collections of works, waiver of compulsory license fees, preservation of moral rights, representations and warranties, limitation on author’s liability, termination, severability, waiver, and entire agreement, just to name a few. Consumers don’t (and shouldn’t) think about this kind of stuff when they proverbially “shake hands” with a licensee. Creative Commons is simple on the surface, but look under the hood and you’ll see the complexity necessary to create the elegance that most people associate with the CC licenses. Saying that the legalese version of a Creative Commons License (or Privacy Commons Policy) is a “necessary evil” is incorrect and misses the point. It’s not evil at all; it’s just necessary.

It’s like a telephone—an elegant piece of equipment which is exceedingly easy to use. The end-user only cares about a few things: Connectivity, line quality, cost, and accessibility. Yet the infrastructure and technology supporting telephony and networking is extremely robust and complex. Consumers pay the telcos to worry about all of the other stuff so they can focus on the four or five things that consumers care about. The millions of miles of copper, routers, substations and central offices aren’t a “necessary evil,” they’re just necessary.

Some Conclusions About Privacy Policies

We’re just going to have to deal with the fact that privacy policies are complex, and will continue to be complex. The best solution (as I see it) is to do three things:

  • Require Thoroughness. A Privacy Commons-compliant policy is thorough.
  • Identify Cultural Notions of Privacy. Identify culturally important notions of privacy, and embody them in easy-to-understand iconography. Christopher Parsons suggests these notions might center on Data Collection, Data Sharing, Data Identification, Data Tracking, Data Deletion, and Aggregation, which I think is a good start. And Ralf Bendrath offers these excellent icons, which are more elegant than any I’ve seen.
  • Embody the Cultural Notions of Privacy in Iconography. Then let the legalese version fill in the (necessary) gaps.

A privacy policy which conforms to Privacy Commons requirements will be complete, informative, easy to understand, and easy to adopt. Like Creative Commons, Privacy Commons seeks to identify common cultural notions of privacy, and embody them in easy-to-understand policy frameworks, with simple high-level iconography.

Note: I usually blog on securitycatalyst.com and jeffreyneu.com, but this post doesn’t fit very well on either.

1 Comment

NJ Supreme Court: Attorney-Client Privilege in Personal Email at Work

Note: This article originally appeared on the J.C. Neu & Associates Blog

Yesterday the New Jersey Supreme Court heard arguments in the Stengart v. Loving Care Agency, Inc. case. The issue is whether the New Jersey attorney-client privilege is preserved when an employee e-mails her attorney from a personal email account on a company computer.

The first reaction from most lawyers is, "yikes, I hope so."

Maria Stengart was a senior employee at Loving Care, which provides home care services for children and adults. Among other things, Loving Care’s employee handbook states that “Email and voice mail messages, internet use and communication, and computer files are considered part of the company’s business and client records. Such communications are not to be considered private or personal to any individual employee.” Stengart was issued a company laptop, on which she occasionally accessed her personal Yahoo account. She resigned in December 2007 and shortly thereafter filed suit against Loving Care alleging constructive discharge due to sexual harassment and ethnic discrimination.

In April 2008, Loving Care sent an image of her laptop hard drive to a data recovery company, which recovered at least one personal Yahoo email between Stengart and her attorney, presumably from a recovered browser cache. Of course, this prompted Stengart to assert attorney-client privilege, demanding that all attorney communications be returned or destroyed. The company balked, and in essence argued that Stengart had waived the privilege by using a company computer.

The trial court found in favor of the employer, but the appellate court reversed.

If I were to play armchair quarterback for a second, I think that the New Jersey Supreme Court will probably find in favor of Stengart as a substantive matter, but the case raises several issues of legal, policy, and practical significance, with no apparent easy answers.

In general, employees have a diminished (i.e., nearly zero) expectation of privacy on an employer’s network, especially when the employer has put the employee on notice of that fact. The trial court merely extended this well-established principle to attorney-client communications. After all, an employer must be able to control, protect, and secure its network against a range of threats.

On the other hand, most employers allow company computers to be used for personal reasons. It seems to be bad public policy that an employee would waive the attorney-client privilege simply because she uses a browser on her company computer during her lunch break, rather than a home browser. This is especially true if she happens to e-mail her lawyer about an action against the employer. It seems absurd that a distinction so technical should allow the employer to "rummage through and retain the employee’s e-mails to her attorney," as the appellate court put it.

But if an employee does enjoy some expectation of privacy in personal communications over a company network, how much, and how does an employer write a policy to manage it? Does an employee enjoy the same expectation of privacy for personal email transferred via POP3 or IMAP to a local company copy of Outlook as for an email recovered from an HTTP browser cache? Does the employer have a duty to not attempt to recover deleted personal emails? Are employers allowed to snoop unless communication appears privileged? I don’t have a good answer, and it will be interesting to see what answer the court comes up with.

Surely an employee cannot enjoy an unqualified expectation of privacy by simply using non-company communications, because employers still have an interest in making sure that employees do not use personal accounts to transfer trade secrets, compete against the company, or download a virus.

We’ll keep an eye on this one.

No Comments

Aaron Titus Speaking at ICAMISS

Note: This article originally appeared on the J.C. Neu & Associates Blog

Aaron Titus will be presenting at the International Conference on Applied Modeling & Information Security Systems (ICAMISS) on October 10, 2009 at the University of Alabama, Birmingham.

The speech will focus on the risks associated with personal information management, especially in an institution of higher education, where information is supposed to flow freely.  These are among the policies and behaviors that put information at risk:

  • Administrative Decentralization
  • Naive Office Culture
  • Unprotected “Old” Data
  • Shadow Systems
  • Unregulated Servers
  • Unsophisticated Privacy Policies
  • Improper Use of the SSN
  • Unsanitized Hard Drives and Insecure Laptops

The International Conference on Applied Modeling & Information Security Systems is sponsored by the Department of Defense, Krell Institute, NASA-Ames Research Center, Institute of Applied Science & Computation, Eastern Illinois University and University of Alabama at Birmingham.

 

No Comments

HIPAA Breach Notification Requirements Effective September 23, 2009

The Department of Health and Human Services (HHS) and the FTC have issued a new interim final rule governing health information breach notification requirements. I blogged on this issue back in March 2009, just after the stimulus package, the American Recovery and Reinvestment Act of 2009 (ARRA), passed.

This rule, issued in response to ARRA, goes into effect on Wednesday. At that point, all HIPAA-covered entities and their business associates must notify individuals and HHS when personal health information has been breached. HIPAA-covered entities include health plans, health care clearinghouses, or health care providers. The rule also covers “business associates” which include billing companies, transaction companies, lawyers, accountants, managers, administrators, or anyone who handles health information on behalf of a HIPAA-covered entity.

A breach is when individually identifiable health information is acquired, used, accessed, or disclosed to an unauthorized party, in a way that compromises its security or privacy. A “breach” does not include inadvertent disclosures among employees who are normally authorized to view protected health information. A breach also does not include exposure of encrypted personal health information, for example.

When a breach occurs, the covered entity must notify victims and the Secretary of Health and Human Services “without unreasonable delay,” and within 60 days of the discovery of the breach. The covered entity must notify the individual directly if possible (i.e., by mail), and must also post a notice on its website if the breach involves 10 or more victims who are not directly reachable. If the breach involves more than 500 residents of a single state, the covered entity must also notify statewide media.

In certain limited circumstances a vendor might be subject to HHS and FTC notification rules. In this case, a vendor which serves the public and HIPAA-covered entities may comply with both rules by providing notice to individuals and the HIPAA-covered entity. In many instances, entities covered by this rule must also comply with applicable State notification laws. The test for pre-emption is whether the State law is “contrary” to the federal law, or whether “a covered entity could find it impossible to comply with both the State and federal requirements.”

Compliance

Of course, the best way to comply with the law is to avoid breaches altogether. The most straightforward way to avoid a breach is to encrypt personal health information. But if a breach does occur, complying with the law is straightforward. In addition to the requirements above, the notification must include a brief description of the incident, including the following information:

  • Date of the breach;
  • Date of discovery;
  • Description of the types of protected health information breached;
  • Steps individuals should take to protect themselves from potential harm resulting from the breach;
  • A brief description of the investigation, efforts to minimize losses and prevent future breaches;
  • Contact information for individuals who wish to ask questions or learn more information, including a toll-free phone number, e-mail address, website, or postal address.
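For entities automating their incident-response checklists, the notice triggers described above can be sketched as a small helper. This is only an illustrative sketch, not legal advice: the function and field names are my own, the thresholds are simplified from the rule as summarized above, and the “without unreasonable delay” standard can require notice well before the 60-day outer limit.

```python
from datetime import date, timedelta

def notification_plan(discovered: date, unreachable_victims: int,
                      max_single_state_residents: int) -> dict:
    """Sketch of the breach-notice triggers under the interim final rule.

    discovered: date the breach was discovered
    unreachable_victims: victims who cannot be notified directly (e.g., by mail)
    max_single_state_residents: largest number of affected residents in any one state
    """
    return {
        # Outer limit only; "without unreasonable delay" still governs.
        "deadline": discovered + timedelta(days=60),
        # Direct individual notice is always required where possible.
        "individual_notice": True,
        # Website notice: 10 or more victims not directly reachable.
        "website_notice": unreachable_victims >= 10,
        # Statewide media notice: more than 500 residents of a single state.
        "media_notice": max_single_state_residents > 500,
    }
```

For example, a breach discovered on the rule's effective date affecting 600 residents of one state, 12 of whom have no current mailing address, would require individual, website, and media notice, with an outer deadline 60 days out.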

Beyond that, you’ll have to minimize your losses by repairing your company’s public image, regaining your customers’ trust, and mitigating civil liability.

References: 45 CFR parts 160, 162, and 164.

Note: This article was originally published on the J.C. Neu & Associates Blog.

No Comments

Visualizations of Identity

~IDENTITÄT – The »Gestalt« of digital identity

~IDENTITÄT – The »Gestalt« of digital identity is the bachelor’s thesis of Jonas Loh and Steffen Fiedler. The students at the University of Applied Sciences Potsdam, Germany, crawled more than 100,000 personal raw data sets on the web and analyzed their contents, including parameters of time. They developed methods for visualizing and comparing the data, resulting in a series of “personal interpretation[s] of the digital identity as an amorphous sculpture.”

The results are striking embodiments of complexity, movement, incongruity, and finiteness; much like the average identity. These sculptures are successful because they capture the movement and growth of one’s identity, convoluted and tied in messy knots of contradictions, incompleteness, and experimentation.

~IDENTITÄT is a reminder of the simultaneous complexity and finiteness of human identity, and a warning that our digital identities are nothing more than a collection of credit reports, Facebook pages, Google results, bank account numbers, and archived e-mails.

As an odd mashup of Geek, Identity Guy, and Architect/Designer, I couldn’t help but give this project a shout-out. And though I think that the description “gestalt” is a little overstated, the provocative sculptures teach us new ways to abstract something as indeterminate and personal as your identity. Bravo, Jonas and Steffen.

Hat tip: Identity Woman.

No Comments

Dear Legitimate Companies: Stop Acting Like Phishing Rings

by Aaron Titus

As a privacy and consumer advocate, it ruffles my feathers when otherwise legitimate companies force the public to disregard common-sense online safety practices in order to use their services. Among the many safety tips are:

  1. Only give confidential personal information to people you affirmatively contact, never to anyone who spontaneously contacts you.
  2. Don’t click on URLs in unsolicited e-mails.
  3. If you want to click on an e-mail link, never click “dishonest” links – links that don’t match the displayed URL.

Bad Practices

American Student Assistance (ASA) is a non-profit organization which helps students keep track of their student loans. It’s also an example of a legitimate organization with some irresponsible privacy practices.

Earlier this year I received an unsolicited e-mail from the ASA. I had never heard of the ASA, but the e-mail insisted that they were “the guarantor of [my] federal student loans.” To this day my bank has not introduced me to the ASA. Of course, this spontaneous contact from an “authoritative” organization made me suspicious. Red Flag 1: Unsolicited e-mail claiming to be from an authoritative source.

The letter instructed me to follow a link and log in with my FAFSA PIN. I was also notified that I have a “Profile,” and was invited to update my profile by clicking on a link. The link took me to an insecure and unbranded website which automatically filled in my name and e-mail address, and indicated that I had been opted in to receive a newsletter. Red Flag 2: Unsolicited authoritative e-mail, requesting that you “log in” using sensitive information on an unsecured, no-name server. Spam newsletters are a bonus.

But before clicking on the links, I moused over each of them to see where it led. A link which purported to go to “www.amsa.com/bor” actually linked through “http://click.email-asa.org/?qs=33c40ef691b275c8d3b7e7d0430ce34d0980241c6c7eb313b745465bb515d8d5”. In fact, each of the eight links in the e-mail was “dishonest,” in that the actual URL was different from the displayed URL. Red Flag 3: Dishonest links.
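The mouse-over check above can be automated. Here is a minimal sketch using only the Python standard library; the hostname-comparison heuristic and the sample HTML are my own assumptions, not ASA's actual markup, and a real scanner would need to handle nested tags and relative URLs:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Flag 'dishonest' links: anchor text that looks like a URL
    but whose href points at a different host."""

    def __init__(self):
        super().__init__()
        self._href = None      # href of the <a> tag currently open, if any
        self._text = ""        # visible text collected inside that tag
        self.dishonest = []    # list of (displayed_url, actual_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = ""

    def handle_data(self, data):
        if self._href is not None:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = self._text.strip()
            # Only compare when the visible text itself looks like a URL.
            if shown.startswith(("http", "www.")):
                shown_host = urlparse("//" + shown.split("//")[-1]).netloc
                real_host = urlparse(self._href).netloc
                if shown_host != real_host:
                    self.dishonest.append((shown, self._href))
            self._href = None
```

Feeding it a link whose displayed text says `www.amsa.com/bor` but whose href goes through `click.email-asa.org` flags the mismatch, while an honest link on the same host passes.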

This e-mail screamed “Phishing Scam,” so I called the toll-free phone number listed in the e-mail. A woman answered the phone. She immediately asked for sensitive personal information. I gave her my first and last name, but refused to give her any additional information since they had contacted me and I had no way to verify who they were. Red Flag 4: Unsolicited third party requesting personal information over the phone.

ASA’s Privacy Policy contains the following promises:

We do not disclose any nonpublic personal information about you or our other current or former customers, except as permitted by law…. We restrict access to nonpublic personal information about you to our employees, contractors, and agents who need to know the information in order to provide service to you…. We maintain physical, technical, and administrative safeguards in compliance with federal regulations to safeguard your nonpublic personal information. (Accessed August 27, 2009.)

But ASA’s privacy policy didn’t translate to privacy practices. After I refused to share personal information, the lady on the phone asked, “Is your name Aaron [X] Titus, or Aaron [Y] Titus?” Uncomfortable, I replied, “Aaron [X]…” She asked for my date of birth. When I refused to give it to her, she read it to me over the phone. When I refused to give her my address, she repeated my full address, including street, number, state, and zip code. She told me which school I attended, and said she had access to my social security number on her screen. Red Flag 5: A representative sharing sensitive personal information over the phone without first authenticating.

Since I had no idea who this organization was, I asked, but never got a straight answer. She and her supervisor variously described the organization as a “government agency,” “not a government agency,” “a non profit government agency,” and a “non profit organization which receives federal funds.” They relied on some relationship with the federal government to gain credibility. Red Flag 6: A fishy and inconsistent story designed to earn your trust.

My Advice: Quit it

After filing a complaint with the company, I talked with ASA’s Privacy and Compliance Director, Betsy Mayotte. Ms. Mayotte was kind enough to apologize for the behavior of her organization, and convinced me that the ASA is a legitimate organization, albeit one with uneducated and dangerous privacy practices. Apparently the representative was re-trained. But they did not plan to change anything else.

The dishonest links were designed to measure click-throughs, a common marketing practice. The unbranded and insecure server which asked me to update my “profile” was the result of bad practices, laziness, or poor training. The other blatant violations of their privacy policy and the representative’s outrageous behavior were more of the same.

I wish I could say that this is an unusual event. But unfortunately I’ve seen similar behavior from my bank, and even from former employers. When legitimate companies force consumers to be irresponsible, the online public becomes irresponsible. Forcing consumers to ignore common-sense safety practices may save you a buck in the short run, but it makes your customers irresponsible and erodes overall online public safety. So here’s my advice to legitimate companies that behave like phishing rings:

Quit it.

Seriously, stop training the public to be irresponsible. If you want to track click-throughs for an e-mail marketing campaign, set up a virtual redirect on your main server. If you got sensitive personal information through a third party, make sure to have that third party introduce you to the customer. Don’t send unsolicited e-mail, and don’t cold-contact potential customers to request that they share personal information. Once and for all, encrypt your website. If your marketing department isn’t all that tech-savvy, hire someone who is. Train your customer service representatives never to give out personal information without first authenticating the identity of the person on the other end of the line.
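The "virtual redirect on your main server" suggestion is simple to implement. The sketch below is hypothetical (the `www.example.org` endpoint, parameter names, and functions are mine, not any real ASA or vendor API): the tracking link lives on the sender's own branded domain, so the displayed and actual hosts match, and the server logs the campaign before redirecting.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical redirect endpoint on the company's own branded domain.
TRACKING_BASE = "https://www.example.org/r"

def make_tracking_link(destination: str, campaign: str) -> str:
    """Build an honest click-through link: the redirect endpoint and the
    e-mail's displayed link share the same (branded) host."""
    return TRACKING_BASE + "?" + urlencode({"dest": destination, "c": campaign})

def resolve(link: str) -> str:
    """Server-side half: after logging the campaign ID ('c'), the handler
    would issue a 302 to the 'dest' URL. Here we just extract it."""
    params = parse_qs(urlparse(link).query)
    return params["dest"][0]
```

The e-mail then displays and links to the same `www.example.org` URL, click-throughs are still counted, and no reader has to click a link whose real host differs from the one shown.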

Privacy policies are not just legal boilerplate which you can write and forget. Make sure that your privacy policy matches your privacy practices. This means that your customer service representatives should be as familiar with it as your general counsel.

Note: This article originally appeared on Security Catalyst.


Your Data Self

Note: This article was originally posted on Securitycatalyst.com.
[Image: detail from Georges Seurat’s La Parade]

by Aaron Titus

Georges-Pierre Seurat was a 19th-century French painter credited with starting Neo-impressionism and developing a painting technique called “pointillism.” His famous painting, La Parade, contains a remarkable detail: a complicated series of blue, orange, pink, red, black, and yellow dots that together create a man’s profile.

This detail is the single best visualization of your “Data Self” I have seen. Your Data Self is a collection of your credit report, Facebook page, Google results, bank account numbers, archived e-mails, and an endless parade of other data. Like pointillism techniques, which juxtapose contrasting dots to create vibrant masses of shaded tones, each piece of personal information is a single dot. Perhaps one is your address, your middle name, your pet’s name, or your favorite color. Maybe some represent your family, and others represent your friends or religious beliefs. Some represent your travels, magazine subscriptions, and purchase habits. Still others are intimate thoughts.

Taken individually or in small groups, they do not mean much; they may even seem to contrast or contradict one another. But all together they form your profile, or Data Self: a pretty good, but not 100% accurate, representation of who you are. And this profile is exactly what data brokers, government actors, and marketers (among others) are trying to determine.

We leave trails of dots as we interact with others, especially online. As Gregory Conti, a computer science professor at the United States Military Academy at West Point, explained, “Free Web services aren’t free. We pay for them with micropayments of personal information.”

Since your Data Self is a digital alter-ego, with the power to enter contracts, grant access to your financial assets, have surgery, or commit crimes, you should actively shape and control access to your Data Self.


Data Breach Notification Requirements in the United States and European Union

Note: This article originally appeared on Jeffreyneu.com

This brief analyzes more than 40 United States Breach Notification laws, the American Recovery and Reinvestment Act, and compares those requirements with EU Directives 2002/58/EC, 2002/21/EC, and the Data Protection Working Party Opinion 1/2009 on 2002/58/EC proposed amendments. This brief does not address individual EU member states’ implementations of EU Directives 2002/58/EC and 2002/21/EC.

Executive Summary

Both the United States and European Union require certain entities to notify individuals when their personal information has been breached. In the United States, State Breach Notification Laws (BNLs) require persons and organizations to notify individuals whose personal information has been "breached." BNLs generally apply to any entity which possesses certain classes of personal information, such as Social Security numbers or account numbers. The usual elements of a breach are as follows, with common variations in parentheses:

1. (Reasonable likelihood of) Unauthorized and Bad Faith
2. Acquisition of
3. Unencrypted or Unredacted
4. (Computerized) Personal Information,
5. (Which is likely to cause harm).

Absence of one or more of these elements will defeat the notification requirement, whereas mere knowledge of a potential breach will often trigger a duty to investigate.[1] With the exception of certain health information breaches,[2] breach notification requirements are not yet Federalized.

