Security breach reporting: a common good


Here is a post about a controversial part of network and information security legislation: Security breach reporting obligations.

First some background: in 2009, EU legislation for the telecom sector was changed to include two types of security breach reporting obligations:

  • Security breaches with an impact on personal data have to be reported (Article 4 in 2002/58/EC as amended by 2009/136/EC)
  • Security breaches with a significant impact on the operation of telecom networks and services have to be reported (Article 13a in 2002/21/EC as amended by 2009/140/EC).

The full text of the EU telecom package, incorporating the 2009 changes, can be found on the EC’s website (and because it has moved in the past, here is a local copy).

In 2011 there was a security breach at the Dutch CA Diginotar, which allowed a massive man-in-the-middle attack on Iranian internet users and indirectly caused a long outage of the Dutch e-Government sites (declared by the minister ‘unsafe to visit’). The impact on Iranian citizens was of the highest order: Mikko Hypponen (F-Secure) argued that Iranians probably died as a consequence of the breach. The Dutch government filed an 8.7 million euro claim for damages (i.e. its total costs for handling the breach). At the time I wrote a critical paper about the flaws of HTTPS and the entire ecosystem of PKI certificates and CAs.

The Diginotar incident triggered politicians across the EU to call for new legislation, particularly:

  1. Security breach reporting rules, because Diginotar was slow to notify authorities, and
  2. Better legal footing for the government to intervene in case of security breaches.

Security breach reporting obligations are now present in three different pieces of (proposed) EU legislation:

  • Security breaches at e-trust and e-signature service providers have to be reported (Article 15 in COM 2012/238, the proposed regulation for e-trust and e-signature services)
  • Security breaches at personal data processors have to be reported (Articles 31 and 32 in COM 2012/11, the proposed regulation for data protection)
  • Security breaches at critical market operators have to be reported (Article 14 in COM 2013/48, the proposed network and information security directive).

I should remark here that security breach reporting legislation is not only an EU affair; many EU member states have similar national laws or proposals. In the USA most states have security breach reporting laws (often similar to the law in California), but there is not yet a federal law.

Having discussed the legal background, let’s look at the general principle of security breach reporting.

In 2010 Google reported being hit by the Aurora cyber attacks. Aurora is the nickname for a series of cyber attacks against major IT companies in the USA, starting in 2009. After Google’s report, Adobe, Juniper and Rackspace also confirmed they had been hit in the past. Yahoo, Symantec, Northrop Grumman, Morgan Stanley and Dow Chemical were allegedly hit as well, but they never reported this in public. After the Aurora attacks Sergey Brin, co-founder of Google, gave an interview at TED, discussing a range of issues related to censorship and cyber security. In particular he remarked: “If more companies were to come forward with respect to these sorts of security incidents and issues, I think we would all be safer.”

Pause for a second. Google is one of the largest IT companies in the world, with vast resources, excellent security teams and a trove of information about security threats. If even Google is vexed by the silence of others about common security incidents, imagine how smaller companies feel.

Note that without legal backing it is not trivial for IT companies to be open about security breaches. Suppose one of their customers is attacked. Reporting the breach could have an impact on that customer, who has no direct interest in revealing it: customers are usually more interested in solving the matter and showing the outside world that they are doing fine. The situation is exactly as Brin described it: if all companies were more open about breaches, we would all be safer. But there is little incentive for any one company to be open about a breach. Security breach reporting is a common good.
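
To make the common-good argument concrete, here is a stylized sketch in Python – a toy public-good game with illustrative numbers of my own, not data from any study. Each company pays a private cost c to report a breach, while every company gains a benefit b from each report that is published; when c exceeds b, silence dominates for each individual company, even though universal reporting leaves everyone better off.

    # Toy public-good game (illustrative numbers): reporting a breach costs
    # the reporter c; every company gains b per report that is published.
    c = 3.0   # private cost of reporting (reputation, legal exposure)
    b = 1.0   # benefit per published report, shared by every company
    n = 10    # number of companies

    def payoff(i_report: bool, others_reporting: int) -> float:
        """Payoff to one company, given how many of the others report."""
        reports = others_reporting + (1 if i_report else 0)
        return b * reports - (c if i_report else 0.0)

    for others in (0, n - 1):
        print(f"others reporting: {others:2d} | "
              f"stay silent: {payoff(False, others):5.1f} | "
              f"report: {payoff(True, others):5.1f}")

    # Reporting changes a company's payoff by b - c relative to staying
    # silent, so when c > b silence dominates whatever the others do.
    # Yet if all n companies report, each gets n*b - c = 7.0, versus 0.0
    # when nobody reports: the deadlock that legislation is meant to break.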

In comes security breach legislation: the idea is to break the deadlock and force companies to report security breaches. Industry lobbyists often criticize breach reporting legislation. This criticism is of course partly a general aversion to any kind of legislation for the IT sector, but we also often hear very specific arguments against the principle of breach reporting legislation. Let us look at some of them:

  • Move along, folks. Nothing to see here. The argument here is that security breach reporting would be useless because the government has no clue about technical security matters and industry knows much better how to deal with security. Clearly Diginotar showed there are cyber security issues industry is not addressing properly. But to see the fallacy of this argument, compare it with medical surgery: of course doctors are the experts on surgery. Does this mean that government should not be told when patients die after surgery? Citizens (and companies) expect government to keep track of security breaches – not only to be able to intervene if needed, but also to inform the public about the overall risks of using network and information systems.
  • Just red tape. The argument here is that security breach reporting would only take valuable resources away from the task at hand: mitigating the breach. The assumption is that lengthy forms and reports would have to be filled in already during the incident response phase. A simple solution to this problem is to keep breach notification during the incident response phase short and simple, and leave more complete incident reporting for a later time.
  • Sure you want reports about all our breaches? The argument here is that security breach reporting would generate an avalanche of breach reports, flooding regulators and creating high costs for both government and industry. To add weight to this argument, industry experts sometimes explain that their systems are subject to hundreds or thousands of ‘scanning’ and ‘penetration attempts’ a day. The solution here is to use thresholds and metrics. Thresholds can be agreed between government and industry so that only breaches with significant impact are reported. Metrics can be used to classify a large set of breach reports, allowing a regulator to focus only on a subset of breaches, for example the severe breaches or the breaches which could be prevented in the future (a minimal sketch of such a classification follows after this list).
  • Making matters worse. The argument is that security breach reporting would create security risks in itself, by disclosing sensitive information to outsiders. It is indeed easy to see that indiscriminately publishing information about past or ongoing security breaches would be foolish. Attackers would learn about the success or failure of their attacks and learn to mount even better ones. The answer to this argument is anonymization and aggregation. Breach reporting frameworks should have different layers of anonymization and aggregation. Detailed and fresh information should only be shared on a need-to-know basis in a closed circle. Later on, when the breach is mitigated, breach reports should be shared more widely, with much less information, in an aggregated form (the sketch after this list shows this idea as well). Typically the information which needs to be anonymized is when exactly the breach occurred, where it occurred, system names, IP addresses, affected customers, et cetera.
  • No visible improvement. The argument here is that security breach reporting does not in practice reduce the number of incidents. People point to settings where breach reporting has been carried out and argue that nothing really improved: there are no fewer breaches. This argument is interesting, because of course the number of security breaches and their impact grows day by day, not only because attackers are getting better, but also because society depends more and more on network and information systems. Of course breach reporting is not a magic solution, and it would be rather naive to expect a sea change in network and information security just from reporting security breaches. Reporting security breaches is only the first step to understanding the problem. Reported breaches need to be analyzed, relevant threat information needs to be shared with customers, customers have to act on this threat information and improve their defenses, IT vendors need to act on it and improve their products, et cetera. Breach reporting is a first step, not the full solution.
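
To illustrate the thresholds-and-metrics and anonymization-and-aggregation ideas from the list above, here is a minimal Python sketch. The field names, severity classes and thresholds are illustrative assumptions of mine – real thresholds would be agreed between regulator and industry, per sector – not something taken from the proposed legislation.

    from dataclasses import dataclass, field

    @dataclass
    class BreachReport:
        operator: str         # who reports -- shared need-to-know only
        service: str          # affected service, e.g. "mobile telephony"
        users_affected: int
        duration_hours: float
        root_cause: str       # e.g. "hardware failure", "cyber attack"
        ip_addresses: list = field(default_factory=list)  # sensitive detail

    def severity(report: BreachReport) -> str:
        """Illustrative metric: user-hours lost, bucketed into classes.
        Only breaches above the lowest threshold would be reportable."""
        user_hours = report.users_affected * report.duration_hours
        if user_hours < 10_000:
            return "minor"        # below threshold: no mandatory report
        if user_hours < 1_000_000:
            return "significant"  # report to the national regulator
        return "severe"           # regulator may also inform the public

    def aggregate(report: BreachReport) -> dict:
        """Anonymized view for wider sharing after mitigation: identifying
        details (operator, IP addresses, exact timing) are withheld."""
        return {
            "sector": report.service,
            "severity": severity(report),
            "root_cause": report.root_cause,
        }

    report = BreachReport("ExampleTelecom", "mobile telephony",
                          users_affected=50_000, duration_hours=4.0,
                          root_cause="cyber attack",
                          ip_addresses=["192.0.2.10"])
    print(severity(report))   # -> "significant" (200,000 user-hours)
    print(aggregate(report))  # operator name and IP addresses are gone

The point of the two layers is simply that the full BreachReport stays within the closed circle during incident response, while only the aggregate() view is shared more widely once the breach is mitigated.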

Breach reporting can help us all to better understand the type of network and information security issues we are facing. Currently government regulators, politicians and IT companies mostly rely on reports in the media. Reports from antivirus companies do not give the full threat picture, and the media tend to focus on spectacular attacks (see Flamer, for example). Breach reporting is needed, first, to understand if we can improve and prevent certain incidents, and secondly, even if we cannot, to at least understand the overall risks for customers. Just as with train accidents, airplane accidents or surgery accidents, society needs to know the risk associated with network and information systems, and for this we need to know the frequency and impact of security breaches.

If we stop discussing whether security breach reporting should be mandatory, then we can discuss (in my view) the more interesting question of how incident reporting should be done. Below I highlight some issues which are interesting to debate:

  • Which systems and services should be in scope? Only critical systems and services? How do we define critical in the current interdependent and complex ICT landscape? How do we deal with open source software without a clear owner? See Heartbleed, for example.
  • Which thresholds for reporting should be used? All breaches with significant impact? How do we define significant? Is the wider impact always clear to the system owner? Both the Diginotar breach and the RSA breach, for example, had cascading effects across the globe.
  • How do we incentivize breach reporting? If there are sanctions for breaches, it may be attractive to look the other way. Most breaches are hard to see anyway; see the Verizon data breach report, for example. The European Parliament voted for a legal immunity clause for the reporting organization in the proposed NIS directive, similar to the exemption for near-miss reports in airline safety.
  • How do we foster a culture inside an organization which makes employees aware of breaches and allows them to be open about mistakes and errors – even if they were responsible? Do we need hotlines and protection for whistleblowers?
  • Information about near-misses, and about when security protection did work, could be very helpful for the industry. How do we incentivize information sharing between organizations about such cases? What is the overlap and difference between voluntary incident sharing in ISACs and mandatory incident reporting to a regulator?
  • Which information in incident reports should be shared, when, and with whom? How should incident reports be anonymized and aggregated to provide enough information to stakeholders without unnecessarily putting the security, reputation or business secrets of companies at risk?
  • What should be the role of the regulator, the cyber crime unit and the national CERT? In most countries information in the hands of government is in principle public (freedom of information acts). Should incident reports be exempted from this? Should the roles of regulator and CERT be clearly split, to avoid companies being afraid to seek help from a national CERT for fear of the regulator or law enforcement? Communications between doctors and patients are protected from access by law enforcement. Should a similar confidentiality rule be made for information exchanged in the incident response phase?

I believe a lot can be learned from safety issues in other sectors, such as offshore safety, nuclear safety, airline safety, et cetera. At the same time, it should be said that there is something very different about network and information security incidents: while a plane crash is hard to deny (a plane is sticking out of the ground, people are dead or wounded), cyber security incidents are often hard to spot, especially when attackers take due care to prevent detection. A quick look at the annual Verizon data breach report shows that the vast majority of incidents go undetected for months. So more transparency and a better understanding of security incidents is absolutely vital. Security breach reporting is one important step to achieve this. We should discuss and agree now on how to implement breach reporting and how to leverage it to improve network and information security.
