Let's say you have had discussions about your breast cancer diagnosis on Facebook, a useful forum for comparing treatment options with others. There's only one problem: Facebook has now categorized you as a patient, and you constantly receive targeted ads about local cancer services that show up on your computer screen at work for all your co-workers to see, right when you're up for a big promotion.
Many users experience a version of this scenario when they receive creepily personalized ads while browsing on Facebook. When those ads follow users onto sites outside Facebook, it feels like an invasion of privacy. But how do you regulate data privacy in an age of big-data black boxes?
Mark Zuckerberg's testimony about the Facebook-Cambridge Analytica scandal alerted users to the personal data Facebook routinely collects and shares with third-party apps. But many questions were left unanswered. How many apps are collecting this data, and what are they doing with it? Are there more Cambridge Analyticas out there? It took a political scandal to get the attention of Congress this time. Where will it happen next, and how long will it be before the public finds out?
While Zuckerberg claimed that even he is not fully aware of everything that happens in the Facebook digital economy, the evidence suggests that health-care information may lead to the next major data-related crisis.
In early April, CNBC reported that Facebook recently launched a project based in its secretive "Building 8" group to get hospitals to share anonymized patient data with it. The project was reportedly put on hold in the wake of the current scandal, but the stated plan was to match hospitals' patient data on diagnoses and prescription information with Facebook so the company could combine that data with its own to construct digital profiles of patients.
Even setting aside the voluminous evidence showing that true anonymization of data is virtually impossible, Facebook's stated intent was never to leave the data anonymized. But requesting the hospitals' data in that form would allow Facebook to sidestep the issue of obtaining patients' consent, as required by federal law.
The company has reason to believe that, if asked, patients would not consent to this practice. In 2016, Facebook was sued by a metastatic cancer patient who accused the company of violating his privacy by collecting data about his participation on cancer websites outside of Facebook. The case was dismissed and is under appeal, but this clearly has not stopped the company from pursuing data initiatives in health care.
Indeed, Zuckerberg admitted in his congressional testimony last week that Facebook does collect some medical data from users. Considering the large number of patient support groups on Facebook that use the site for peer-to-peer health care and social support, there is plenty of medically relevant data to be mined. Membership in some patient groups numbers in the tens of thousands, with average daily posts of several hundred or more. A sampling of the types of data users post includes "tests, treatments, surgeries, sex drive, and relationships" on a breast cancer support site. Other data include location and personal profile information such as age, race, sex, educational background, employment, even cellphone numbers. In addition, many posts include photos that can be subjected to facial recognition software.
It is not surprising that Facebook wants to move into the digital health market: So do Amazon.com, Google, Apple, Uber and all of the other big tech companies. These businesses see an opportunity to profit from users' personal health data because, unlike narrowly defined medical data, health and wellness data is not considered protected health information and therefore is not protected by privacy laws.
In contrast to the personal data users might post on Facebook, patient records at hospitals and other covered entities are protected by privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA). Yet, for years, patient data has been sold to medical data miners and brokers in a multibillion-dollar global trade. The pharmaceutical industry is a major player in this marketplace, spending billions of dollars annually on direct-to-consumer advertising aimed at influencing physicians' prescribing practices and patients' requests for brand-name drugs. But social media companies such as Facebook have struggled to attract this lucrative advertising market.
In June 2017, Facebook addressed this challenge by convening drug marketers at the inaugural Facebook Health Summit, an event where the company wooed the pharmaceutical industry with new features designed to address its specific needs. Danielle Salowski, Facebook's industry manager for health, said the company re-engineered its advertising features so pharmaceutical advertisers could turn off comments on their Facebook brand pages and on their ads, helping them avoid the negative publicity associated with the legal requirement of reporting adverse reactions to drugs.
This activity could simply be understood as advertising in the age of big data, a practice our society has so far agreed to accept. But the unique risks from health- and medical-data mining and digital profiling of patients suggest greater stakes worthy of regulatory attention. Congress accused Facebook of illegal marketing of opioids and noted that many other types of drugs are also sold on the platform. Patients with life-threatening conditions or limited means to pay for health care are at greater risk of harm from targeted advertising of products or services by unscrupulous vendors.
In addition, the lack of transparency, the increasing interconnection of health and technology, and the growing reliance on risk modeling in health care mean that combining health- and medical-data sources to target patients could harm anyone who has a serious illness or the risk of developing one. The insurance industry uses social media intelligence and profiling to inform algorithms that automate pricing, claims handling and fraud detection. When health risks are understood as financial risks, contextual information from users' digital profiles can be used against them to raise premiums or deny claims. That means everyone is at risk.
Members of Congress were criticized for displaying, through misguided questions, their lack of understanding about how Facebook works. But in the current data economy, Facebook and other big tech companies operate as black boxes that are impossible for outside users to fully understand. Most of the time, we really don't know what is being done with the data we post online, except when housing discrimination, exploitative advertising practices or foreign interference in elections is exposed. The Cambridge Analytica scandal revealed that at least 87 million of us are at risk of being exploited or discriminated against.
And really, why should we trust Facebook and third-party app developers to respect users' privacy when their business model depends on doing the opposite? Facebook clearly recognizes the enhanced value of its user data when combined with medical data. That should tell lawmakers that the definition of "protected health information" needs an update. If we can still think of big data as something that could be harnessed for social benefit, we need to create regulations that allow users to consent to participate, as they do in clinical trials. We don't need to lock down all the data; we just need to distribute the power to access and share it so patients and future patients receive benefits, rather than harms, from sharing their experiences online.
- Ostherr is a media scholar and digital health technology researcher at Rice University.