A Data Breach Is Bad, But Disclosing Too Much Could Be Worse
When state and local government suffers a cyber attack, officials are faced with a dilemma: How much is the public entitled to know? How much can you reveal while keeping systems secure?
Oct. 16, 2022 • Adam Stone
When state and local IT systems get breached, there’s a balancing act to be struck. How much can and should the public be told?
Some advocates of transparency and accountability say anything that happens in the public realm ought to be public knowledge. At the opposite extreme, some IT leaders worry that anything they disclose can and will be used against them by the bad actors: Better to say little or even nothing about a cyber incident.
Some are ready to codify the latter view. Recent legislation passed in Georgia, for example, puts limits on what government has to share about cybersecurity incidents. It provides for “certain information, data and reports related to cybersecurity and cyber attacks to be exempt from public disclosure and inspection.” That’s vague, and possibly ominous: state legislatures telling IT leaders what they can and can’t say about a breach.
Pushing in the other direction are trends in the private sector, things like the Cyber Incident Reporting for Critical Infrastructure Act of 2022, which requires companies to report significant cyber incidents and offers protections incentivizing them to report. The Cybersecurity and Infrastructure Security Agency (CISA) likewise encourages information sharing, saying it is “essential to the protection of critical infrastructure and to furthering cybersecurity for the nation.”
Between the extremes, there is the vast murky middle ground where state and local IT leaders actually live. Many seek to strike a balance between the public’s right to know and the need to keep some information secret in order not to throw fuel on the fire.
The Public Interest
At a fundamental level, most IT leaders will acknowledge there is a public interest to be served in having some level of candor about cyber breaches. “We are public servants. We work for the government,” said Colorado CISO Ray Yepes. In that context, “there’s clearly some obligation to release information.”
Transparency advocates say that openness in the wake of a breach can elevate safety and security. “We’re much better off when everybody can have an understanding of how something is set up, how it’s configured, how it’s secured, how to lock things down — and what it looks like when they’re not locked down,” said Alex Howard, director of the Digital Democracy Project.
Disclosure can, for instance, help to prevent repeat incidents. “If everybody else is running the same system and a vulnerability led to that system being compromised, you can increase the health of everything overall by publicizing that,” he said. “If everybody clams up and tries to not talk about what’s happening, then we’re all on our own.”
Some will also argue that a tight-lipped strategy is too often self-serving.
Government agencies may want to clam up in “cases of negligence or significant mismanagement in which agencies are severely lacking in generally accepted security controls,” said Adam Goldstein, a professor and academic program director for Information Technology and Sciences at Champlain College in Burlington, Vt.
“People want to obfuscate anything where a human error is at fault,” Howard said. “They may correctly view that they have legal liability. If you leave the door open and somebody walks in and takes something, then you’re on the hook for any harms that come out of that.”
Some will argue that even this is no reason to hold back. “We should be as transparent as possible,” said city of Phoenix CISO Shannon Lawson. If mistakes are made, “we need to really be honest with the public and say: Hey look, we messed up here. We learned our lesson and this is how we’re implementing changes going forward.”
On the other hand …
The Case for Secrecy
Even as an advocate for openness, Howard will admit that “there’s a lively, robust conversation about when it’s ethical to hold back disclosures” in the case of public systems breaches.
Some will argue that announcing a hack is tantamount to inviting in the bad actors. “If you tell people that you’ve had an attack and are down, it is a flag to certain criminals to say: Hey, they’re hurt, let’s go at them too,” said an IT leader at a major city who asked to remain anonymous due to the sensitivity of the topic. (Going forward, we’ll refer to him here as “the City IT Leader.”)
“Some sophisticated malicious actors will do multiple attacks on you to draw your attention to one area so that they can get at you from a back angle while you’re not watching. Usually that’s a DDoS attack paired with some kind of exfiltration,” the City IT Leader said.
He went further, arguing that any disclosure can be potentially problematic. “Anything that is in the public domain creates a growing body of knowledge about you as an organization, who your players are, the technologies you’re using, even how you respond,” he said. “All that allows someone to attack you even better.”
Others say there is merit in this reasoning. “You may want to protect, or not completely reveal, either the attack method or even how the attack was detected,” said Javed Ali, a professor at the University of Michigan’s Gerald R. Ford School of Public Policy. “Knowing the sources and methods can give adversaries insight and intelligence.”
We can add to the argument in favor of secrecy the fact that most “cyber incidents” end up being a big nothing-sandwich. “It’s the boy who cried wolf,” Yepes said. “I cannot tell you how many times my team has come to me and said: ‘Oh my God, the sky is falling, we’ve been breached.’ One of the things that I learned is that as a leader, you have to remain extremely calm. Because when you look at it, oh, there wasn’t a breach.”
If IT went public with every one of those alerts, he suggested, the public would drown in useless information.
There’s also a law enforcement argument in play. “Maybe federal law enforcement is tracking what these people are doing,” Lawson said. “You start announcing things to the public, the public doesn’t care, and now you have just exposed what you know to the bad guys. Maybe that hampers law enforcement or the intelligence community and what they do behind the scenes.”
Lawson noted, too, that disclosure sometimes can make a bad situation worse. During the social turmoil after George Floyd’s death, his city’s police department (like many others) was the target of attempted cyber exploits. Calling out those incidents in public would have only fanned the flames, he said, and potentially invited others to pile on.
Weighing all those pros and cons, most state and local IT leaders will conclude that public disclosure in the wake of a breach comes down to a balancing act.
Striking the Balance
When you start to dig down into the details with state and local IT leaders, as well as advocates and experts, the conversation around breach disclosure typically comes down to a matter of what to tell and when to tell. By weighing these considerations, most say, it’s possible to strike an appropriate balance.
First rule of thumb: If personally identifiable information has leaked, you need to tell people who may have been impacted. “There is some inherent governmental obligation to inform people of that,” Ali said. Most cities will have a pre-planned notification process in place, and it typically will include an offer of some months of free credit monitoring, as a hedge against identity theft.
But that’s just the beginning. Suppose no PII has been exposed, but an IT leader still feels an obligation to let people know there’s been a cyber incident.
“Then it is a balancing act,” Yepes said. “You have to be able to release information, but you also have to consider whether any information can have more consequences down the road. For example, you don’t want to publicize the arsenal of defensive tools that you have. You can say, ‘we are using an advanced protection tool,’ but you don’t say the brand.”
Likewise, it makes sense to be tight-lipped about the actual mechanics of the attack. “I would not talk about the specific vulnerabilities,” Lawson said. “You could certainly say, ‘We had a vulnerability in a particular system, which was exploited.’ But you’re not saying what the vulnerability was, or how they actually got in. I would not transmit that information.”
As Howard put it: “You can say somebody got into the vault without explaining how they picked the lock.”
Timing also is a major consideration. When to inform people of a breach? In general: After it is really, truly over.
“If the bleeding is still ongoing, don’t report,” Yepes said. “Because if the incident is not contained, and I’m telling people that I’ve been breached, other entities can now come after me.”
Howard said there is precedent for this in other types of emergency notifications, such as among first responders. “There is a really old exemption around freedom of information,” he said. “If there is an ongoing investigation, we don’t want to tip anyone as to what we’re doing. But when it’s done, we will tell you what happened.”
When you know what you are going to disclose, and when, that just leaves “how.” The methodology for emergency communications, including around a cyber incident, ought to be baked into the incident response program.
“It is part of your disaster response methodology, same as for an earthquake or a fire,” said the City IT Leader. “There is someone designated to be the mouth of the event. For us, it would be our communications office, in collaboration with the incident commander, who in our case would be the CISO.”
As to the message itself, “we have a lot of that scripted out, and we would keep the technical detail to a minimum,” he said. “Here are some steps that we’re taking, here are the protections that we have for you, and we’ll keep you up to date.”
There’s also a special case to consider: What to disclose when a third-party vendor who supports the city gets hacked? Lawson had to ponder this when the city’s parking meter application vendor declared it had been the victim of a cyber exploit.
“They had an incident where account numbers and credit cards were not exposed, but people’s usernames, passwords and email addresses were exposed,” he said. “While it wasn’t a city system or city application, we still wanted people to realize that there was this problem. ‘If you’re using this system, go in and change your password.’ We were just trying to be transparent, without dumping fuel in the fire.”
The Ransom Equation
Another special situation worth considering, given the rise of ransomware attacks: If you pay the ransom, do you tell the public?
Some say disclosure just invites trouble. “If you transmit that you’re going to pay a ransom, then you’re sort of dumping chum in the water,” Lawson said. “You’re signaling to bad guys: Come and attack us, because we’ll pay.”
Some also see potential legal liability in disclosure: Paying money to cyber criminals could be seen as giving financial support to a criminal act, and no one wants to find themselves in that position.
Yet there’s a strong transparency argument here. “If you’re talking about taxpayer dollars, that answer is obvious,” Howard said. “It’s our money. It’s our government making decisions on our behalf. If you’re paying criminals money to get back access to something that they’ve compromised, that is relevant. The fact of the money is relevant.”
The City IT Leader said he’d rather never pay a ransom, but he would do so to save lives: for example, if the hackers disabled a medical system. He’s run tabletop exercises and determined that if that were to happen, he would disclose.
“It seems the most honest and frank thing to do, so say: ‘Here’s the situation and, God, we really hated it, but we felt like we had to save lives.’ If we have to cross that line, we just need to be honest with people,” he said.
In Colorado, Yepes said he’d disclose: “That money’s not coming from your own pocket, it’s coming from the taxpayers.” But he’s hoping never to have to make that decision. He’s pushing for a law that would prohibit state agencies from ever paying ransom. If the agencies are forbidden from paying, he said, the bad actors will no longer have incentive to hold their systems hostage.