Back when we all lived in villages, knowing individual merchants allowed you to identify and avoid the cheaters. As we urbanized, however, brand names became the reputational markers. But when you are dealing with merchants on the internet, where no one “knows you’re a dog,” what guarantee do you have?
A current case involving a Dutch physician who argued that her licensure suspension should be forgotten highlights the growing dilemma of privacy and reputation in health care in the digital age. Here is an overview: A Dutch surgeon’s license was suspended for poor post-operative care; on appeal, that suspension was made conditional, and she was allowed to continue practicing under supervision. The Dutch government’s health care regulator noted the events in the official public record, as did a “blacklisting” website that listed her, along with other physicians, as someone to avoid. She sued Google, asking it to remove the links to the “blacklisting” website. Google refused. The Dutch Data Protection Agency (DDPA) agreed that “it was important for future patients to be able to find the information.” It noted that the information was not manifestly incorrect and that, at the time, probation was ongoing, making the information relevant.
The plaintiff appealed to the courts, arguing that the “blacklisting” website was not representative and therefore not a reliable source, and that publication of these findings, even by the government’s regulators, represents “blaming and shaming” — a form of “digital pillory.”
Under Article 17 of the EU’s strict privacy rules, an individual has a right to have their digital traces forgotten, with some broad exceptions. Specifically, the exceptions requiring data retention include information "…necessary for historical, statistical and scientific research purposes, for reasons of public interest in the area of public health, for exercising the right of freedom of expression, when required by law or where there is a reason to restrict the processing of the data instead of erasing them."
Several years ago, the Court of Justice of the European Union (CJEU) ruled that for data playing an essential role in public life — and arguably that includes malpractice litigation — the "preponderant interest of the general public in having ... access to the information in question" may overcome a right to be forgotten.
Further, Spanish authorities, in a case involving Google and a real estate transaction, refined and extended the right to be forgotten. They found that search engines, specifically Google, by the mere linkage and display of information, were “controllers” of private information and were required to remove links, when challenged by individuals, under the EU privacy rules. The website that contained the data to be forgotten had to be addressed separately.
Since Google processes about 63,000 search requests each second, deeming it a “controller” of private information is a natural conclusion. About 90 percent of searches begin and end on the front page. In fact, most end after the first five non-advertising listings on that page. How we search has spawned an industry of “gaming the algorithm”: marketers who move you up the page rankings, and “reputation” managers who seek to bury your less favorable links on pages 2 or 3.
In the case of the Dutch physician, the courts subsequently found the DDPA regulatory decision to be incorrect, the links “irrelevant and excessive,” and ordered Google to remove them. Among the reasons cited:
- Both Google and the Court agreed that professional disciplinary proceedings are not “special criminal data” that can never be forgotten.
- The government kept the same information publicly available so that removing the links would not impair the public interest.
- The board had never stopped her from working during her probationary suspension, implying she was not a threat to patients.
Europeans feel that burying the link, as we do in the U.S., is insufficient. The 2014 Spanish decision obligated controllers, like Google, to resolve three issues:
- Whether truthful information should be treated differently from false information and, if so, how to determine which information falls into which category.
- How to classify information as "old" versus "new," and at what point the "staleness" of information requires its removal on request.
- The relevance of the original source of the publication to a removal request.
Intuitively, these are all good arguments. But consider an earlier case, which a plastic surgeon lost: he wanted an article about an accusation of malpractice against him to be forgotten. When the article was written, it was true; only when he was found innocent did the information become false. A similar argument could be made about our youthful indiscretions: is 10 or 15 years sufficient for them to be forgotten? That debate played out in the confirmation hearing for Justice Kavanaugh, and the same issues led the Cleveland Clinic to fire a medical resident for antisemitic statements made five years previously. And as for the question of the source, does a scathing Yelp review by an angry patient deserve to be removed, or retained, any more than the same report sent to the state’s medical board prompting an investigation?
Even the best algorithms with human oversight are no guarantee. After all, at some point, Google will be sued for removing information by a patient who feels their injury could have been avoided if “only they knew about those prior cases.”
A Google search for position statements on physicians’ online reputation from the American Medical Association, American College of Surgeons, American College of Physicians, and American College of Cardiology found no statements. Even on pages 2 or 3, there were no positions.
The right to be forgotten can be couched as the right to have only correct information, but changing times and conditions make “relevant” and “correct” ambiguous. U.S. law balances those issues through the First Amendment and laws governing both libel and invasion of privacy. Search engines are not the judge; the government, through the courts, makes those decisions. But that is an expensive undertaking, and the laws do not always align with the best interests of patients or physicians. In the EU, the responsibility has been uneasily shifted to what I would argue is an unstable alliance of government regulators and large corporations.
How technology companies — and that would include Facebook and all the rest — have managed privacy concerns and separated true from false should give us all pause. Rather than writing another set of clinical guidelines, shouldn’t we, along with our patients, the real stakeholders, develop a way of expressing reputation that is both truthful and permits redemption?
Charles Bosk’s classic description of managing “medical failure” provides the direction we should take: forgive and remember. It is time we figure out how to do this on the internet before the regulators and the corporate interests decide for us.
Charles Dinerstein, MD, MBA, FACS is a retired vascular surgeon and senior fellow at the American Council on Science and Health where he writes on contemporary health issues. His more philosophical thoughts can be found at Surgical Analytics. He is a 2017–2018 Doximity Fellow.