by Joe Hedegaard Ganly

Information Security Adviser

Connect with Joe Hedegaard Ganly on LinkedIn

Dealing with Incidents

Incidents. In the context of cyber and information security, they’re rarely a positive thing. Data breaches, intrusions, defacements, denial of service, outages… enough to make you shudder. Critical incidents, like compromised systems or ransom/wiper malware attacks, can be particularly anxiety-inducing. Trying to establish the facts and understand just how much is impacted is hard enough – the unknowns are normally the hardest part to deal with.

So… how bad are things actually?

Read a first-hand account of the NotPetya attack on Maersk, DLA Piper’s account of the same NotPetya outbreak, or the story of the ransomware attack on Norsk Hydro, and you’ll find some common denominators:

  1. They had all prepared for outages, having run exercises to practise business continuity & disaster recovery scenarios.
  2. In all three attacks, communications were disrupted because systems were more interconnected than anyone had realised.
  3. Attackers had been inside the network for a minimum of three weeks.

Now, pointing out the obvious, but attackers are naturally asymmetrical in their tactics. There is no established doctrine or rules of engagement in 99% of cyber incidents. A challenge all three firms faced was that they were dealing with a relatively new threat which wasn’t clearly understood: MITRE ATT&CK mapping wasn’t yet widely available in tooling, and indicators of attack had to be correlated manually.

We know after the fact that all three firms had well-invested cyber programs, but all had relatively flat networks and a focus on preventative tools. Fast forward a few years, and by mapping hundreds of incidents we now know that preventative controls remain essential, but detection and response capabilities are crucial to catching an attacker early. When we analyse where most intrusions are detected, it’s in the persistence and privilege-escalation phases.
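To make that kind of analysis concrete, here is a minimal sketch of tallying where in the attack lifecycle intrusions were first spotted. The detection records below are invented for illustration (the tactic labels follow MITRE ATT&CK naming, but the data is hypothetical, not the real incident mapping referenced above):

```python
from collections import Counter

# Hypothetical detection records: each incident tagged with the ATT&CK
# tactic that was in play when the intrusion was first detected.
detections = [
    {"incident": "A", "first_detected_tactic": "persistence"},
    {"incident": "B", "first_detected_tactic": "privilege-escalation"},
    {"incident": "C", "first_detected_tactic": "persistence"},
    {"incident": "D", "first_detected_tactic": "initial-access"},
    {"incident": "E", "first_detected_tactic": "privilege-escalation"},
]

# Count how often each lifecycle phase was the point of first detection.
by_tactic = Counter(d["first_detected_tactic"] for d in detections)

for tactic, count in by_tactic.most_common():
    print(f"{tactic}: {count}")
```

Even a simple tally like this, run over real alert data, shows at a glance whether your detections fire early (initial access) or late (persistence and beyond) – the later the phase, the longer the attacker’s head start.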

In all three cases, unfortunately, catching the attacker early wasn’t possible. Three major catastrophes: IT teams working 15,000 hours of overtime, factories closed, ports shut, billions in revenue lost. I had the absolute privilege of listening to Matt Finn, Head of Information Security at DLA Piper, tell his story in person. I am not exaggerating when I say it was tough to listen to. No facts, no robust communications capability, teams of people working 24/7 to try and get the firm back up and running – it was clear what a human toll it had taken.

All three firms, while suffering terrible incidents, are thriving today. If there is one near-undisputed fact, it’s that communicating well during an incident is the difference between coming out of it stronger and struggling to rebuild a reputation. That, and as Matt Finn called out, having the right partners on hand to support you in that communication when it hits the fan.

In the past I’ve called out Norsk Hydro’s incredible public response to their incident. They stood up a temporary website where they could communicate with the press. They told their staff everything, hiding no details. They held daily webcasts and press briefings, with the most senior staff talking through exactly what was happening and answering questions. Their CEO, two days into the job, spent all his time with the IT team, learning what they were doing rather than scapegoating.

The result? Their share price rose 2.2% in the first three days of the incident.

Compare that to Capita, undergoing an incident at the moment (and my sympathies go out to the staff, who I’m sure are up against it as much as their counterparts at the companies above), who denied anything was happening even as blueprints of their offices were leaked online as proof. As The Times put it today – “the silence is deafening”, “attack is far worse than initially suspected”, “Capita finally admit incident” – this is not what you want written about you.

Now, I’m not privy to the details of Capita’s inner business and working practices, but I’m confident they won’t have practised or rehearsed responding to, and communicating about, a ransomware attack. Insiders admit that “no one has a clue who is responsible for anything”.

In fairness to them, they aren’t the only ones. The UK Criminal Records Office suffered a similar cyber incident and blamed “planned maintenance” for three weeks. Travelex, back in 2020, was a similar story – denying anything was wrong while secretly paying a $2.3m ransom.

Going back to Norsk Hydro and the others, you can often find a silver lining. In this instance, thanks to their willingness to share the details and the stories, we as a community have learned some really valuable lessons from their hardship:

  1. Prepare, then keep preparing – a well-invested cyber program built on good governance will go a long way. Ensure you’re monitoring as many systems as possible and mapping known attacker behaviours to trip up adversaries early.
  2. Test, test and test some more – it’s not enough to run the same BCP test over and over again. Run exercises specifically focused on cyber incidents, with senior stakeholders from every part of the business, to get a solid incident response plan in place (and no, the plan can’t just be “call the insurers and hope for the best”).
  3. Have the right provisions and partners on hand – a really touching moment in Matt Finn’s recollection of the DLA Piper incident was how sincerely he thanked the security vendors and partners they had built relationships with: staff flying in from all over the world and rallying behind the internal team to make sure it was supported.
  4. Measure cyber resilience, not compliance – I’m not knocking the business value of a good compliance program, but it doesn’t make you secure (Okta, Equifax, Twilio, Uber etc. all held ISO 27001 certification). Consider using a framework like the NCSC’s Cyber Assessment Framework to assess how well your governance, people and tooling are actually reducing risk and would hold up in an incident.

Contact Us

If you’d like to talk further about any of the issues raised in this blog, then please do reach out to the Solutions Team at Saepio on