Our Definition of Harm Is Harmful
by Bill Fitzgerald, May 15, 2023
[Image: sunlight shining through leaves]
In April 2023, the class action lawsuit against Illuminate Education was thrown out because the judge in the case determined that the people whose data was impacted by the breach could not show any harm, or any instances of identity theft, from the breach. This decision is fully in line with past cases where companies have been let off the hook, and it completely misrepresents and underestimates the many different ways people get hurt by data breaches.
To put it differently: the judge’s decision shows how, in some cases, what is defined as legal doesn’t come close to what is right. The way we define harm is harmful.
Some background on the Illuminate data breach, and what can only be described as an epically inept response stretching across months. The impacts of this breach were first observed in New York City public schools in early January 2022.
The California Attorney General maintains a site that publishes the data breach announcements required under California law. A search for “Illuminate” returns a half-dozen districts impacted by the breach that occurred on December 28, 2021. The notifications of this breach trickled out between May 13, 2022, and July 29, 2022, or about half a year after both the breach itself and the first observed impacts in New York City.
But the California data is a small subset of the damage. The full list of schools impacted by the breach is pretty stunning.
In a statement cited in July 2022, Illuminate claimed it had “no evidence that any information was subject to actual or attempted misuse.” Claims like this from companies that have failed to protect information are common, and they should never be taken seriously without detailed supporting information, for these reasons:
Without transparency into the causes of the breach and the company’s subsequent follow-up investigation, how can that company be trusted to have the technical competence to investigate whether or how the information has been misused?
When a company claims it has “no evidence” of data misuse, that claim provides no insight into how it attempted to find any evidence.
Due to how “harm” is defined legally, companies have a perverse incentive not to uncover any evidence of abuse. Internal investigations can walk a fine line: thorough enough to look plausible, yet lax enough to minimize the chance of finding anything. Hence, the statement that a company has “no evidence” is only as meaningful as the company’s transparent disclosure of how it looked for that evidence.
But let’s assume, hypothetically, that a company responds very well to a data breach. The company immediately discloses the issue. The company is transparent about how it has looked for misuse of the impacted data, and those methods appear legitimate and thorough. Even if all of those things happen, determining a causal link (or even a correlation) between a specific data breach and a specific downstream incident is incredibly difficult, and the range of potential harms is broad.
Is data used to open a fake account? Is data used to compromise existing accounts? Is information in a breach used to inform phishing attacks on friends, colleagues, or acquaintances? Is information stored and cross-referenced against data from other breaches for use in the future? Is information incorporated into training datasets for machine learning models? If a person is targeted immediately after a breach, or in the future (by criminals via identity theft or phishing, by biased hiring algorithms, by biased credit-rating schemes, by biased housing tools), connecting a specific harm to a specific breach will be impossible. Yet, in many cases, that is the exact connection people need to show when demonstrating harm.
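The cross-referencing risk is worth making concrete. The sketch below is a minimal Python illustration, with invented records and field names; nothing in it comes from any actual breach. It shows how trivially records from two unrelated dumps can be joined on a shared email address:

```python
# Hypothetical illustration: merging two unrelated breach dumps on a
# shared identifier. Every record and field name here is invented.

# e.g., a school-platform breach exposing names and student IDs
breach_a = [
    {"email": "pat@example.com", "name": "Pat Doe", "school_id": "1042"},
]

# e.g., a later retail breach exposing phone numbers and cities
breach_b = [
    {"email": "pat@example.com", "phone": "555-0142", "city": "Queens"},
]

# Index one dump by email, then merge matching records from the other.
by_email = {row["email"]: row for row in breach_a}

profiles = [
    {**by_email[row["email"]], **row}  # merged: name, school, phone, city
    for row in breach_b
    if row["email"] in by_email
]

print(profiles[0])
# {'email': 'pat@example.com', 'name': 'Pat Doe', 'school_id': '1042',
#  'phone': '555-0142', 'city': 'Queens'}
```

A few lines of code produce a profile richer than either breach alone, and nothing in the merged record points back to which breach enabled which downstream harm, which is exactly why pinning a specific injury to a specific breach is so hard.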
Until we shift our thinking to explicitly acknowledge that the first harm is the data breach itself, and that subsequent harms (including the unfair diligence required of regular people, you and me, to protect against the many potential harms set in motion by companies) accrue as a direct result of that initial harm, we will continue to maintain systems that privilege companies over people.
Our definition of “harm” hurts people, and, based on the latest chapter starring Illuminate, the lawyers working for companies that are sloppy with data like it that way.