Wei-Chen Hung

http://bits.blogs.nytimes.com/2014/03/28/microsoft-to-stop-inspecting-private-emails-in-investigations/

http://www.nytimes.com/2014/03/21/technology/microsofts-software-leak-case-raises-privacy-issues.html

The issue here is the legitimacy of Microsoft’s internal investigation, which accessed the Hotmail content of a user suspected of trafficking in stolen Microsoft source code. The purpose of the investigation was to search the Hotmail account for evidence of the theft of Microsoft’s trade secrets.

The search appeared to be legal and in compliance with Microsoft’s terms of service. The terms of service allow Microsoft to access users’ content to protect the rights and property of Microsoft, and the Electronic Communications Privacy Act allows a service provider to disclose a customer’s communications when necessary to protect the provider’s rights or property. This raises a question: does a company need a court order to search its own service? And if a company searches an account only when its evidence would meet the standard for obtaining a court order, will the search still trigger consumers’ privacy concerns?

The scope of the search seemingly went beyond the expectation of privacy that the general public considers reasonable for an internal investigation. In this case, Microsoft searched not only the account of its former employee but also an outsider’s French Hotmail account, reaching a third party’s account and the substantive contents of his email. Privacy advocates therefore warned that the practice would discourage bloggers, journalists, and others from using Microsoft’s communication services.

In the end, Microsoft decided to take the approach of referring such matters to law enforcement. Even though Microsoft may thereby lose control over the process, the reaction from press-freedom and privacy advocates was very positive. For technology companies’ future decision making, this case shows how important it is to be aware of the public’s privacy interests, and to consider the needs of customers who have fewer resources and less control over the security of the internet services they use.

Hunter Haney

No Strict Liability in New York For Medical Employee’s Breach of Confidentiality

http://www.law360.com/articles/499864/shielding-of-clinic-in-ny-gossip-case-spurs-privacy-worries

http://www.newyorklawjournal.com/id=1202637353576/Clinic+Not+Liable+for+Nurses+Breach+to+Patients+Girlfriend%3Fmcode=0&curindex=0&back=TAL08&curpage=ALL

http://dritoday.org/post/New-York-Court-of-Appeals-Firmly-Narrows-a-Medical-Corporatione28099s-Fiduciary-Liability-for-the-Unauthorized-Disclosure-of-Confidential-Patient-Information-by-a-Non-Physician-Employee.aspx

Early in 2014, the New York Court of Appeals grappled with adapting New York tort law to changing technologies and conceptions of medical privacy in the case of Doe v. Guthrie Clinic Ltd. Six of the seven judges ultimately came down on the side of the health care provider, Guthrie Clinic Ltd., declining to hold the defendant financially accountable after a nurse allegedly gossiped about a plaintiff’s sexually transmitted disease.

The appeal originated in federal court, where a “John Doe” plaintiff sued a clinic that employed a nurse who allegedly recognized the plaintiff as the boyfriend of her sister-in-law, accessed his medical records, and sent text messages to the sister-in-law about his condition. After rejecting Doe’s other claims, the Second Circuit certified a question to New York’s high court: whether Doe could assert a specific and legally distinct cause of action against the defendant for breach of the fiduciary duty of confidentiality in the absence of respondeat superior.

The Court of Appeals said “no”, holding that New York common law does not impose strict liability on a medical business for a breach of fiduciary duty of confidentiality when the employee’s acts are outside the scope of his or her employment and not reasonably foreseeable.  As the Court noted, however, the plaintiff may still assert claims for negligent hiring, training and supervision, and for failure to establish adequate policies and procedures for safeguarding confidential information.

While some praised the decision for its restraint in declining to impose what might amount to an extremely burdensome prospect of liability for medical companies, the Court’s lone dissenter, Judge Jenny Rivera, opined that allowing a cause of action against a provider for its employee’s actions would “ensure the fullest protections for patients” in an advanced technological age. Privacy law scholars similarly lamented the lost opportunity to improve privacy practices at a time when, as here, information can be so quickly and easily disseminated. Professor Mary Anne Franks of the University of Miami School of Law suggested that the dissent’s argument would have had more force had it argued that technological advances have transformed our “outdated conception of what should be considered ‘reasonably foreseeable’” with regard to health privacy disclosures. Nonetheless, the Doe majority saw the dissent’s reasoning as a slippery slope, noting that under it a medical corporation could face damages if its receptionist told someone at a cocktail party that a patient had been in the office to see a doctor.

In sum, the Court restricted fiduciary liability for an employee’s acts under state law but left the door open for plaintiffs with other direct causes of action, suggesting the Court is, at least to some extent, confident that sufficient incentive exists under state law (if not federal law) for providers to establish and enforce privacy policies regarding health information.

Katie Stork

http://www.ctvnews.ca/canada/stop-sharing-suicide-attempt-info-privacy-commissioner-tells-police-1.1774883

http://www.sunnewsnetwork.ca/sunnews/politics/archives/2014/04/20140414-171556.html

http://www.cbc.ca/news/canada/windsor/canadians-mental-health-info-routinely-shared-with-fbi-u-s-customs-1.2609159

Ontario Information and Privacy Commissioner Ann Cavoukian released a report this week disclosing that police reports about Ontarians’ suicide attempts were being uploaded to the Canadian Police Information Centre (CPIC) database, which is accessible to the FBI and the Department of Homeland Security (including U.S. Customs and Border Protection). This practice has resulted in numerous Ontarians being denied entry into the United States because of suicide concerns.

The issue lies in the manner in which some police forces were uploading such reports to the CPIC database. For instance, according to reports, Toronto uploads the reports automatically, without regard to the specifics of each situation, while Waterloo, Hamilton, and Ottawa appear to exercise at least some discretion. According to Cavoukian, 19,000 mental health episodes have been uploaded to the CPIC database. While some suicide attempts, such as those that harmed or were intended to harm others, may warrant being accessible to U.S. border officials, Cavoukian said that police should (and are legally able to) use discretion when uploading suicide attempts to the database, to prevent oversharing of particularly personal and sensitive information when it is not relevant and is only harmful to those involved. Cavoukian recommended that suicide attempts be shared only when: (1) the attempt involved a threat of, or actual, serious violence or harm against others; (2) the attempt was intended to provoke a lethal police response; (3) the individual had a history of violence against others; or (4) the attempt occurred while in police custody.

It is worth noting that, while this story was widely reported in Canadian media, there did not appear to be any mention in American media.  It would be interesting to find out whether there is any reciprocity in such sharing.

Jordan Joachim

Google Invites Geneticists to Upload DNA Data to Cloud

Google recently announced an initiative to make genomic information available for search on its cloud infrastructure. The project has enormous upsides; enhanced genomic searching and processing can reveal deadly mutations and help researchers find life-saving cures. The global market in genomic information is also growing rapidly.

Nonetheless, genomic data can be especially sensitive. As genetic analysis becomes more accurate and widespread, making this information publicly available could have disastrous consequences for health privacy. Genetic information not only reveals sensitive personal details, such as disease risk, but also goes to the very heart of who a person is.

Therefore, in order for genomic searching to develop, Google is crafting strong privacy standards for the handling of this data. Aided by the Global Alliance for Genomics and Health, it is developing policies for the ethics, storage, and security of the data. Nonetheless, genomic information is unlike any other type of data, and it may therefore require a different approach than other data, including other health data.

Genomic data has the potential to enable huge strides in combatting disease, so it is essential to make this data accessible to researchers and scientists. On the other hand, this data is potentially dangerous, meaning that it must be guarded through effective privacy policies. Google will have to reconcile these two goals in order for this project to be a success.

Catherine Owens

http://www.renalandurologynews.com/fax-sent-to-wrong-number-results-in-hipaa-violation/article/305022/

This article details an incident very similar to the cases we read last week (e.g., Doe v. SEPTA). The article’s title says it all: “Fax Sent to Wrong Number Results in HIPAA Violation.” A patient, Mr. M, was moving to a new town and needed his medical records transferred to his new doctor. His former doctor, however, mistakenly faxed them to Mr. M’s employer, who thereby found out that Mr. M was HIV-positive. Even worse, the fax did not have a cover sheet indicating that it contained sensitive information.

This case is a great illustration of how technology makes communications among health care providers easier but also opens the door much wider to potential privacy intrusions. I can only imagine the privacy implications as doctors begin to digitize medical records in general, let alone simply fax them to another doctor!

Sam Zeitlin

Does the Obamacare website violate HIPAA?

Hidden in the source code of the Obamacare website is an ominous warning: users have “no reasonable expectation of privacy about communication or data stored on the system.”  This warning is never displayed to users.  But during last October’s hearings about the rollout of the ACA, congressional Republicans asked the Administration whether the Obamacare website complies with HIPAA (the Health Insurance Portability and Accountability Act of 1996), the law that protects the privacy of Americans’ health information.

As it turns out, the Obamacare website and the data systems behind it are not compliant with HIPAA—nor are they meant to be.  The Department of Health and Human Services contends that the service doesn’t need to follow HIPAA because it doesn’t fall into any of the three categories of entities covered by the Act: health care providers, health plans, and health care clearinghouses.  Health care providers are doctors, nurses, pharmacists, clinics, and other groups that directly provide care.  Health plans, like HMOs and insurance companies, actually pay for care.  Health care clearinghouses are contractors that process and reformat health information as it moves between other groups, like medical providers and insurers.  Because the Obamacare website merely vets applicants before referring them to insurance companies, the government argues, it fits none of these categories and HIPAA does not apply.

So does this mean that the Obamacare website is going to create a significant hole in the privacy protection provided to Americans by HIPAA?  Probably not.  First, the Obamacare website doesn’t collect any medical information from applicants beyond whether or not they smoke (it doesn’t have to, because the ACA bans insurer discrimination against people with preexisting conditions).  And second, the website still has to comply with the Privacy Act of 1974, which protects personal records held by administrative agencies (like the Department of Health and Human Services).

Antti Härmänmaa

Distressed Babies, HIPAA and AOL’s Health Privacy Ruckus

Natasha Singer of the New York Times writes about a recent health privacy stir at AOL, after CEO Tim Armstrong remarked on a conference call that the company had to cut employees’ 401(k) benefits because it had paid two million dollars for the medical treatment of two of its employees’ “distressed babies”.

Armstrong’s blurt rightfully raises questions about the extent to which employers are given their employees’ sensitive health details. It is precisely these kinds of disclosures of potentially identifiable private health information that the Health Insurance Portability and Accountability Act (“HIPAA”) was supposed to prevent.

According to Lisa J. Sotto, a privacy lawyer interviewed by the NY Times, Armstrong was likely not authorized to see the employee data he publicly discussed in the first place.  HIPAA governs the use and disclosure of patients’ medical information by hospitals and health insurers. Generally, the law does not allow health information to be disclosed to employers without the employee’s permission, but it does allow self-insured employers to receive health care information from the company’s group health plan. The purpose is to give the employer a detailed picture of its health care expenses, so that it can channel employees toward more cost-efficient care.

Companies agree contractually with their group health plans on the types of employee information that can be shared and on who may receive the data. Usually the information is shared within the company only with HR executives and managers who have received training on the confidentiality requirements for such data. These named recipients are not allowed to disclose the information further inside the company.

The problem also stems partly from the fact that group health plans do not use a uniform format for sharing information. The varying practices currently in use can lead to situations where a report discloses information that allows executives to identify an individual employee. This is especially a concern with rare cases such as premature babies or HIV.

Rachel Goodwin

http://articles.latimes.com/2014/jan/10/news/la-pn-obamacare-data-breach-house-vote-20140110

The Obamacare website security breaches raised enough concern for even an incredibly inactive House of Representatives to pass a bill addressing them. The situation highlighted the particular concerns surrounding sensitive health information. It also highlighted differences between government and corporate action.

At the same time that people were raising concerns about the Obamacare website’s security, Target suffered a breach of millions of consumers’ data. However, as members of Congress noted, Target’s customers willingly interacted with Target and shared their information. While we may argue over the level of choice involved in interacting with different companies, it is certainly higher than in most of our interactions with the government. In this case, many people were compelled by their employers to obtain coverage through the Obamacare website. The government also compelled the interaction in a sense, by levying a penalty on those who did not register. To the extent that we care about consumer choice in such privacy matters, the Obamacare security breaches were particularly concerning.

The breaches were all the more concerning because they involved health information. Because information about people’s health feels particularly intimate, these breaches felt particularly threatening.

In order to sign up for health coverage, people had to turn over information they would never want their employers to know, for fear of discrimination. While the exposure of the plethora of sensitive data on our consumption patterns has spurred only committee meetings and vague resolutions, the potential breach of health information felt private, personal, and threatening enough to spur a dormant House to action.

Poonam Singh

Health Privacy in a Big Data World

http://healthitsecurity.com/2014/04/15/new-jersey-explores-health-big-data-potential-privacy-risks/

http://www.washingtonpost.com/national/health-science/scientists-embark-on-unprecedented-effort-to-connect-millions-of-patient-medical-records/2014/04/15/ea7c966a-b12e-11e3-9627-c65021d6d572_story.html

We live in a “big data” world. But what does that mean, and what particular implications does it have for our health information? The federal government, states, technology companies, and policy wonks have all been debating this question recently. “Big data” is a buzzword used to “describe a massive volume of both structured and unstructured data that is so large that it’s difficult to process using traditional database and software techniques,” as well as the technology that actually processes, analyzes, manages, and ultimately stores this data.[1] At a recent conference at Princeton University, scholars and industry experts weighed in on the merits and potential pitfalls of the drive to aggregate patient data in order to improve public health and achieve state-level wellness goals. The conference has wider implications, however.

In the wake of the Affordable Care Act, Congress created its own body, the Patient-Centered Outcomes Research Institute (PCORI), to aggregate millions of patients’ data and use the power of big data to draw better conclusions than can be drawn from the traditional patient samples used in conventional clinical trials. The hope is that this data will allow for improvements in patient care, and for more efficient allocation of resources toward treatments and medicines that prove incrementally more effective than others but might otherwise go unmeasured under standard data collection and reporting methodologies.

Alongside both the state and federal efforts, however, remains a deep concern about the effect this aggregation of data will have on individual patients, and it is clear that commitments to anonymizing the data and to ongoing protections for its storage must remain a priority. One clear problem for PCORI is funding: a mere $500 million versus the whopping $30.4 billion the National Institutes of Health receives. As states like New Jersey join the drive to harness the power of big data in health information, questions of funding, staffing, rigorous ongoing maintenance of systems, and a robust set of protocols governing third-party access to the data will all have to be answered; otherwise, there is a very real potential for harm to the very patients this strategy is meant to help.

Kristina Harootun

Fearing Punishment for Bad Genes, New York Times

The primary purpose of the Genetic Information Nondiscrimination Act of 2008 (“GINA”) is to prohibit discrimination in premiums or contributions for group health coverage (“underwriting purposes”) by preventing employers and health insurers from accessing identifiable genetic information. In 2013, the Health Insurance Portability and Accountability Act (“HIPAA”) Omnibus Rule added genetic information to the definition of Protected Health Information. However, GINA contains a major omission that has created an immense dilemma for people with “bad genes”: the law’s protections do not extend to long-term care insurance, nor to life and disability plans.

The harms society seeks to prevent through privacy laws protecting health data are particularly salient in the context of genetic information. Genetic testing has invaluable benefits, including advancing medical research and detecting genetic mutations or markers that predispose a patient to diseases such as Alzheimer’s and breast cancer. Although the costs of genetic testing have gone down, making it accessible to a wider population, people who are likely to have genetic markers avoid getting these tests for fear of being denied coverage or paying extraordinarily high premiums for long-term care insurance plans.  According to the New York Times article Fearing Punishment for Bad Genes, people who have a genetic predisposition for Alzheimer’s are five times more likely to seek long-term care insurance. Inadequate protections in GINA have forced many people to choose not to be genetically tested for fatal diseases because they do not want to risk being denied coverage for these plans. Advances in genetic research are also potentially impeded because research participants refuse to be genetically tested due to these same insurance fears.

The age of digitized medical records exacerbates the problem of keeping genetic information confidential. Genetic information is a uniquely sensitive type of data because it cannot be “de-identified” by stripping out the 18 identifiers HIPAA lists (like a Social Security number) as required for compliant de-identification.[2] Further, once genetic testing happens, it is increasingly difficult to separate that information out once it enters a patient’s medical records. These technicalities are something the health care industry needs to confront. But even if the information is kept secure and private, insurers are already admitting to penalizing applicants for omissions on questions about genetic markers, treating them as “guilty by omission”.

Although GINA forbids employers from using genetic information for underwriting purposes, wellness programs can still offer incentives that induce employees to “voluntarily” provide their genetic information. These incentives raise questions about how voluntary the sharing of information really is, and they can also lead to more and more genetic information being collected and converted into electronic form, with questionable protection.

Part of the problem is that GINA protects genetic information by regulating the types of entities it deems should be permitted to access the information. Although GINA seeks to prevent discrimination rather than protect data privacy per se, it rests on the principle that genetic information requires protection for its primary purpose to be advanced. If what underlies GINA is the proposition that genetic information is highly sensitive by nature, then that information should be given more thorough protection by virtue of that sensitivity. The failure to provide blanket protection to information based on its type and level of sensitivity is an ongoing deficiency in the form and structure of current privacy laws.[3] HIPAA likewise focuses on “covered entities” rather than on the sensitivity of the health information itself.[4]  The shortcomings in both HIPAA’s and GINA’s protections exemplify this broader problem in health privacy.

[1] http://www.webopedia.com/TERM/B/big_data.html

[2] Electronic Frontier Foundation, Genetic Information Privacy, available at https://www.eff.org/issues/genetic-information-privacy.

[3] Id.

[4] Id.