The Art of War: Military lessons for IT security

We hear a lot about threats and a lot of advice from vendors, but just how do today’s CSOs synthesise the complex threat environment?

The Chief Security Officer of Oracle, Mary Ann Davidson, talks about securing the Internet of Things, M2M and understanding the difference between capability and intent.

Supratim Adhikari: Can you give us an idea of what a day in the life of a CSO is like?

Mary Ann Davidson: First, I’d have to start by saying that I’m not sure if there is such a thing as a typical CSO. Certainly more and more companies have them, but the job function isn’t always the same; it changes from company to company. It used to be more physical security; now it can be physical or operational security or both. Also, the CSO doesn’t always report to the same place in a company: sometimes it’s the CIO, sometimes it’s higher, sometimes it’s elsewhere.

My job is a little different too because my remit isn’t IT security per se; it’s product security, by which I mean the security of our software. Some might say, “That’s odd, why would you do that?” Well, for a couple of reasons.

Oracle builds products and, of course, cloud offerings, and we also run Oracle on Oracle, so if we don’t do our security jobs properly we would be the first to suffer.

So coming back to your question, I don’t know if I have a typical day, as things arise externally that we have to address. Sometimes it’s about seeing a potential problem and working out what we need to do to get ahead of it while it’s small and manageable, before it becomes big and intractable.

How do today’s CSOs synthesise the complex threat environment?

You need to try and strike a balance between the things that are urgent and the things that are important but not urgent.  

People write books about threats versus risks, probabilities and so forth. I don’t want to say it’s simple, because it isn’t, but there are certain basic questions you should be able to answer. One of them is: what are the things that are most critical for me? You also need a more systematic way of looking at what the risks are; that’s why companies have security policies and programs. I have a hacking team, and they attempt to break things for a good cause: we want them to break our products and services before a bad guy does.

More importantly we take lessons from that, not just about the particular problem, but whether there is something more systemic about what they found that we can use to make changes elsewhere.

And sometimes a threat is foisted on you -- for example, there have been very well publicised vulnerabilities in SSL libraries that everybody uses, so you need to ask: are we using that library, and what is vulnerable and what isn’t? You’ve got to fix it where it is vulnerable, patch your own systems and ensure customers are notified.

You also need to look at changes in the immediate threat environment and determine whether there is something beyond the immediate that portends a long-term trend. Five years ago, many people used third-party libraries. There is nothing wrong with that per se; it helps you be nimble and focus on your core competencies. Also, it didn’t always matter that somebody used an older version. Now it does, because you have people going after those libraries, thinking they can find a vulnerability, so you have to know where you’re using them and be able to react very quickly.

The next thing is to be much more proactive. If you don’t already know what’s being used where, you’d better get a handle on that: keep very good inventories, and in some cases be more aggressive about making sure that people are using the latest and greatest. There wasn’t a penalty for using something old five years ago; that’s not true anymore.
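The “know what’s being used where” idea can be sketched in a few lines. This is a minimal illustration, not a real tool: the product names, library names and version strings are all hypothetical, and a production inventory would compare versions properly against a live advisory feed rather than by simple string inequality.

```python
# Hypothetical "latest known good" versions, as might be maintained
# from vendor advisories. All names and versions are illustrative.
KNOWN_LATEST = {
    "openssl": "1.0.1h",
    "zlib": "1.2.8",
}

def flag_outdated(inventory, latest=KNOWN_LATEST):
    """Return (product, library, used_version, latest_version) rows
    that need attention. A real tool would parse and order versions
    rather than testing simple inequality."""
    findings = []
    for product, libs in inventory.items():
        for lib, used in libs.items():
            current = latest.get(lib)
            if current is not None and used != current:
                findings.append((product, lib, used, current))
    return findings

# Which product embeds which third-party library, at which version.
inventory = {
    "app-server": {"openssl": "1.0.1g", "zlib": "1.2.8"},
    "db-gateway": {"openssl": "1.0.1h"},
}
print(flag_outdated(inventory))
# → [('app-server', 'openssl', '1.0.1g', '1.0.1h')]
```

Even a crude listing like this answers the question that matters when the next library vulnerability lands: where am I running the affected version?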

What sort of security challenges will the Internet of Things (IoT) and M2M present?

More digital enablement means more systems, and there is more data available, which is cheaper to collect, and it’s more valuable, especially when linked together.  From the enterprise perspective, I have more ‘doors and windows’ that I need to worry about people coming through.

The other thing is that people want to push the frontiers on the use of technology. Sometimes it’s a good thing, and sometimes it’s a ‘what-were-you-thinking?’ thing. We have so many people who want to IT-enable so many things, like your household appliances. Some are good, but sometimes I look at them and say: is there really a problem you’re going to solve with that? I’m not making this up; there is an app for checking whether your laundry is dry, as if it’s really that much trouble to just listen for the buzzer.

The thing is, if someone didn’t get the application security right -- if the code isn’t secure, if the app wasn’t created with security built in from the start rather than added on at the end as an afterthought -- then it can open up risks. It could expose you to man-in-the-middle attacks, where someone exploits flaws in, for example, an app on your smart TV and gains access to your cloud, your home network, or the social profiles of any account you’re signed into from the smart TV.
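A common root cause of the man-in-the-middle exposure described here is an app that disables TLS certificate verification. As a brief, hedged sketch (an illustration of the general anti-pattern, not how any particular smart-TV app works), Python’s `ssl` module shows the contrast between the safe default and the insecure shortcut:

```python
import ssl

# The safe default: certificates and hostnames are verified, so a
# machine sitting in the middle cannot impersonate the server.
good = ssl.create_default_context()
assert good.verify_mode == ssl.CERT_REQUIRED
assert good.check_hostname is True

# The insecure shortcut some apps ship with: verification switched
# off entirely, leaving the connection open to man-in-the-middle.
bad = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
bad.check_hostname = False          # must be disabled before CERT_NONE
bad.verify_mode = ssl.CERT_NONE
print(bad.verify_mode == ssl.CERT_NONE)   # True -- wide open
```

The difference is two lines of code, which is exactly why security “added on at the end” so often misses it.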

Or somebody cruising the neighbourhood can figure out that you checked that remotely and then, gosh, you’re not actually at home, or that you haven’t opened your refrigerator in three weeks. Ergo, they are using that “remote access” to figure out which houses to break into. I suspect the designers of these devices are most likely not thinking about how they can be abused and misused.

My point here is not that new technology is bad; it’s that people need to think carefully about the consequences of deploying applications and devices that may create systemic (and thus hard-to-mitigate) risk.

With so many connected devices potentially in play, how do we standardise the security protocols that govern the connected ecosystem?

Standardisation is important in terms of auditing events because, without standards, it’s like ‘you say po-tay-toe, I say po-tah-toe’: somebody has to do a translation, which is not the best use of scarce resources.

There didn’t use to be any standards around security events or notifications; there are now some emerging standards in that area. Why would that be important? First of all, if you have a lot of audit data on the network, and it’s collected but nobody does anything with it, how useful is it? If you want to do forensics, that assumes you have audit records from six months ago, and that the attackers were nice enough to break in only during the window of time for which you actually keep audit records. Having events standardised enables you to collect those records in some common language that can be understood, do something sensible with them, and do it closer to real time. That’s foundational for being able to do analytics and say, wait a minute, I see some interesting patterns here; at least I have the dots, and they’re the same size, so I can begin to connect them.
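The “same-size dots” idea is essentially event normalisation. A minimal sketch, with two entirely hypothetical input formats: each parser maps its own field names onto one common shape, after which correlation becomes trivial.

```python
import json

def from_pipe_format(line):
    """Parse an assumed 'ts=...|src=...|event=...' log line."""
    fields = dict(kv.split("=", 1) for kv in line.split("|"))
    return {"time": fields["ts"], "source": fields["src"],
            "event": fields["event"]}

def from_json_format(line):
    """Parse an assumed JSON log record with different field names."""
    rec = json.loads(line)
    return {"time": rec["timestamp"], "source": rec["host"],
            "event": rec["action"]}

def correlate(events, event_type):
    """Count normalised events of one type per source address."""
    counts = {}
    for e in events:
        if e["event"] == event_type:
            counts[e["source"]] = counts.get(e["source"], 0) + 1
    return counts

# Two devices reporting the same kind of event in different dialects.
events = [
    from_pipe_format("ts=2014-06-01T10:00:00|src=10.0.0.5|event=login_failure"),
    from_json_format('{"timestamp": "2014-06-01T10:00:02", '
                     '"host": "10.0.0.5", "action": "login_failure"}'),
]
print(correlate(events, "login_failure"))
# → {'10.0.0.5': 2}
```

Without the normalisation step, the two records above would never be counted together; with it, a repeated pattern from one source stands out immediately.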

So standards in that area help you.

Standards in other areas can be a mixed blessing. They enable a lot of things to work together better, which is a plus. But on the other side of the equation, if everything is the same you run into the “risks of a monoculture”: if you have something that is broadly deployed, and there’s a lot of it, it’s certainly a bigger attack surface. I’ll give you a real-world example: the Irish potato famine.

In Ireland, essentially one type of potato was grown, and it wasn’t resistant to potato blight, so when blight struck, the entire crop failed. People were disproportionately reliant on that food source, and that’s why there was mass starvation.

What does that mean for computers? If you have a lot of one thing, and that thing isn’t resilient to something, whether it’s a virus or something else, it creates a far bigger problem than if there is more diversity in the environment. This is what we have just seen with Heartbleed.

However, you wouldn’t want to carry that diversity to a ridiculous extreme, either. You wouldn’t want to say, hey, I’m really resistant to someone hacking into my systems because I’ve got one of everything. You can’t manage that very well, because now you need an expert in every single thing that you have. There’s a happy medium somewhere.

The spate of recent attacks on high-profile organisations highlights just how one weakness in the perimeter is enough to let hackers in. Faced with such a scenario, is there any security posture that can realistically safeguard an entire ecosystem?

Of course not! I use this example a lot and people are probably tired of hearing it, but I like to use military analogies partly because they’re applicable and partly because they’re things that some people are familiar with. Frederick II of Prussia said, “He who defends everything defends nothing.” Realistically, you can’t defend everything, even if you try.

A more realistic risk posture is you’re never going to have perfect security. There’s going to be some way that someone will eventually get in. What do you do that is more constructive than saying we have to have zero intrusions?

I often cite the Marine Corps ethos ‘every Marine a rifleman’. What I mean by that is that every single Marine knows how to defend himself or herself should the perimeter be breached; they work on the assumption that this could happen, so they are prepared for it. That is where network components need to go. Instead, most IT elements assume they will never be attacked, and in fact they may not be designed for it, since companies buy expensive “sentries” to try to protect against attacks. In my opinion, the Marine way works better, and that’s where we need to go.

A better model would be to say: you’re going to get attacked, some of those attacks are going to be successful, so what are you going to do? What can you do to mitigate that? Can you have smarter networks that can know, for example, I’m a network element and somebody has tried to do something inappropriate? That’s actionable intelligence; I can do something with it. Maybe I can tell other network nodes; maybe they can raise their posture. Maybe, because some of the attacks are automated, I can make my defences automated. When I know that I’m under attack, I can do something sensible, like raise my security posture. Maybe my network can segment itself so that certain parts of the network are less accessible.
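The detect-raise-warn loop described here can be sketched very simply. This is a toy model under stated assumptions -- node names, the probe threshold and the two-level posture are all invented for illustration, and a real network element would act on far richer signals:

```python
class Node:
    """Toy network element that raises its own security posture after
    repeated probes and warns its peers to do the same."""

    def __init__(self, name, threshold=3):
        self.name = name
        self.peers = []          # other Nodes to warn
        self.threshold = threshold
        self.probe_count = 0
        self.posture = "normal"

    def observe_probe(self):
        """Record one inappropriate access attempt; escalate if repeated."""
        self.probe_count += 1
        if self.probe_count >= self.threshold and self.posture == "normal":
            self.posture = "elevated"
            for peer in self.peers:          # the pine-tree signal
                peer.receive_warning(self.name)

    def receive_warning(self, from_node):
        """A warned peer raises its posture pre-emptively."""
        if self.posture == "normal":
            self.posture = "elevated"

# One node is probed repeatedly; its peer escalates without being
# touched at all, the way a warned tree raises its defences.
edge = Node("edge-router")
db = Node("db-segment")
edge.peers.append(db)
for _ in range(3):
    edge.observe_probe()
print(edge.posture, db.posture)
# → elevated elevated
```

The interesting property is the second node: it was never attacked, yet it is already in a defensive posture when the attacker moves laterally.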

Biological systems already know how to do this. Pine trees, when they’re under attack by pine beetles, send out chemical signals to the trees around them that basically say: raise your defences, the beetles are after me. If a tree can do that, we should be able to design networks that are smart enough to say, I’m under attack, and do something constructive with that, instead of sitting around waiting for more beetles to take bites out of them.

So does a self-defending network exist?

I first talked about this probably eight years ago, and there are people working on it, creating sensor-aware networks. For example, there is an organisation called OWASP that has done work on ‘sensor-enabling applications’, making applications smart enough to know that they are being probed or attacked. I have no doubt that we’ll get there. It would make a lot of sense to do that, and frankly it’s the only thing that’s going to enable us to survive. We just can’t sit there and wait for someone to break in and act surprised that it happened. It shouldn’t be a surprise at all: it’s going to happen.

And if a 'security day of reckoning' is truly at hand, just what shape and form will this take?

I really don’t like it when people start talking about digital Pearl Harbor because they draw the wrong lesson from it. It’s not that your entire fleet is going to go down (and it wasn’t the entire fleet but quite a bit of it). It’s more that before Pearl Harbor, there were a lot of discussions about whether the Japanese could attack Pearl Harbor. And the discussion got to be, well yes they could, but they would never do that.  It was a failure to understand the difference between capability and intent.

If we can take anything away, it is that where there is capability, an enemy may develop intent. This is a discussion that I have all the time when someone says to me, well no one would ever do that. I say, you have to assume that they will. People who like to break systems are very persistent, and they will find a way to break it. If you didn’t plan for that eventuality, you can’t fall back on “oh they would never do that.” Of course they’re going to do that because it’s possible to do that. And they may develop intent later. 

In a way, every day is the “day of reckoning,” because we still have people who don’t understand that where there is capability, somebody will develop intent.
