On the economics of ransomware

We blinked, and the world changed on us.

This [long] post is not meant as doom&gloom on the scourge of ransomware, but rather a look at some basic economic aspects of this type of attack, and some recommendations for the future.

So far, 2016 is definitely the year of ransomware. Every vendor is talking about it in their pitches, the media is all over it (good articles here and here), etc. This blog just adds to that cacophony, though hopefully with a different perspective.

“Prior Art”: Lots of people are now talking about ransomware, and I’m sure many have in the past too. I’d be remiss if I didn’t point out that Anup Ghosh of Invincea wrote a scarily prophetic blog post on this back in July of 2014! Check it out here. Also, I liked Daniel Miessler’s piece here.

Note: as I discuss these topics, I may sound insensitive to the plight of the victims. It’s absolutely not that: I think ransomware is a scourge that should be eradicated, and that we should bring the full force of law enforcement to bear on it, but I’m pessimistic that it can be done.

There are several aspects of ransomware that make it interesting from an economic angle. Let’s explore some of them.

The “Taming” of Externality

First and foremost, to me, ransomware is the first major, widescale threat that significantly reduces the inherent externality of poor security practices. What does that mean?

Up until now, poor security practices by end users resulted in relatively light consequences for the users themselves. In some cases, being used as a spam relay might not even have been noticeable; at worst, there was the rare circumstance where malware meant having to reformat one’s PC. Yes, annoying and potentially painful, but manageable. From a behavioural economics perspective, biases such as mental accounting made it even less painful.

The broader costs of that infection – spam being generated, systems that had to be wiped, etc… – were largely invisible to the user. In market economics terms, all these costs were externalities. This means that the agent in the transaction – the user – was not taking those costs into consideration when making their choice – in this case, the poor security practices that led to an infection.

Enter ransomware. Now, the user is faced with the painful choice of paying the ransom – actual money being extorted – or facing the imminent destruction of their data. Worse, depending on how that strain of ransomware behaves, it may have encrypted network drives and potentially backups as well. This triggers several well-known behavioural quirks/biases, including:
  • The salience of paying. It’s pretty clear that there is money being lost, and it’s your money (or your organization’s).
  • The immediacy of the request. It’s not something that can be postponed. Criminals know this, and exploit it: in many cases, ransoms increase as time passes.
  • Loss aversion. From Kahneman and Tversky’s work, we know the tendency of people to be loss averse.

All of this is, naturally, horrible for the user. From an economic perspective, though, it is interesting that this, in a way, “reduces” the externality of a poor security choice. The user now knows full well that their poor choice/practice may result in a non-negligible cost. [Edit: as someone pointed out in feedback, this is just another way of saying “the chickens come home to roost”.] They’re concerned, and rightly so. I don’t see this diminishing soon.

“To Pay or Not To Pay, that is the Question”

The second interesting point is analyzing the dilemma of deciding to pay the ransom or not. Even law enforcement seems ambivalent: recent advice has included both “pay” and “don’t pay”.

There are two things to look at:
  • First, from a societal perspective, the issue is similar to the Tragedy of the Commons, a well-known economic problem. In the traditional Tragedy of the Commons, individuals overconsume a shared resource, leading to depletion. Ransomware is not quite the same: to me, it is closer to the “Unscrupulous Diner’s Dilemma”, a variation of the more traditional Prisoner’s Dilemma in which each individual’s choice leaves the group paying more than any of them wanted. In the case of ransomware, the individual decision to pay negatively affects the community by rewarding the criminals for the crime and supplying them with funds to reinvest in better tools, thus costing everyone more in the future.
  • Individually, people and organizations should recognize that the rational economic decision is not simply “is the cost of paying the ransom less than the loss associated with losing the data?”. The decision should be based on that cost, sure, but should also take into account (a rough sketch of the calculation follows this list):
    • Is that the end of it? Will paying the ransom this one time be an exception? In most cases, hardly… As ransomware proliferates, different gangs will keep attacking.
    • Will paying the ransom even get the data back in the first place? As @selenakyle nicely pointed out recently, there’s little recourse if things go wrong…
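As a rough illustration of the point above, the comparison looks more like an expected-value calculation than a simple “ransom vs. data” check. This is only a sketch: every probability and dollar figure below is a hypothetical placeholder, not a real statistic.

```python
# Rough expected-cost sketch for the "pay or not pay" decision.
# Every number here is a hypothetical placeholder, not a real statistic.

ransom = 500           # demanded ransom, in dollars
data_value = 10_000    # estimated loss if the data is gone for good
p_key_works = 0.7      # chance the criminals actually deliver a working key
p_repeat_hit = 0.3     # chance that paying marks you as a payer and invites another attack
p_backup_ok = 0.5      # chance your backups restore the data anyway

# Expected cost of paying: the ransom is lost for sure, the data may still be
# lost if the key never arrives, and paying may invite (and fund) the next attack.
cost_pay = ransom + (1 - p_key_works) * data_value + p_repeat_hit * ransom

# Expected cost of refusing: the data is lost unless backups save you.
cost_refuse = (1 - p_backup_ok) * data_value

print(f"Expected cost if you pay:    ${cost_pay:,.0f}")
print(f"Expected cost if you refuse: ${cost_refuse:,.0f}")
```

Even this toy version makes the point: the answer swings wildly depending on how much you trust the criminals, how likely you are to be hit again, and how good your backups are.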

At the end of the day, we’re back to externalities:
  • Those recommending “don’t pay” don’t bear the cost of the advice: lost data, etc…
  • Those choosing to “pay” don’t [immediately] bear the indirect cost of enabling the criminals to continue their efforts.

A more realistic approach to handling ransomware should keep these in mind.

“Thy ransomware’s not accidental, but a trade.”

There seems to be consensus that what has enabled the rise of ransomware is, among other things, the maturity of bitcoin. That was the point clearly made by @jeremiahg and @DanielMiessler (here). I agree: bitcoin seems to have tipped it, along with other changes to the overall ecosystem that have made ransomware a more viable attack.

Like legitimate businesses, criminals have explored ‘efficiencies’ in their supply chain. As the main example: bitcoin (the peer-to-peer payment system, not the currency itself) has removed significant “friction” from the system. Whereas before the cashout scheme might involve several steps – each of which incurred fees for the criminal – the ubiquity of bitcoin has made the cashout process faster and cheaper. Taking out the middlemen, if you will.

Regarding bitcoin specifically, there’s a couple of interesting points:
  • More than being anonymous “enough”, bitcoin is a reliable and fast payment system. Even though it doesn’t provide full anonymity – transactions on the blockchain can be traced to wallets – bitcoin is sufficiently opaque that the tradeoff of limited traceability for ubiquity and speed made it the currency/payment system of choice.
  • This leaves an interesting question about the bitcoin exchanges: can we expect the exchanges to work against their own self-interest by restricting these transactions? What sort of defensive approach can we expect the exchanges to take? The danger of people equating bitcoin with ransomware is real, and the industry is right to defend itself.

All in all, from looking at the underground ecosystem, it looks like ransomware is a ‘killer app’: profitable, easy to use, etc…

“Much Ado About Nothing”? Maybe…

Finally, ransomware seems to have exploded into our collective attention, but is it really such an epidemic? While we deal with the onslaught of news/articles/posts about ransomware (including, of course, this post …), let’s recognize that there is very little incentive to “underreport” ransomware infections. To wit:
  • InfoSec vendors can point to ransomware as the new ‘boogeyman’ that every organization should spend more resources to protect against.
  • Internally within organizations, like with “Y2K”, “SOX”, and “PCI” before it, we can now possibly start to see “ransomware” as the shibboleth that enables projects to be funded.
  • Media sites latch on to the stories, knowing the topic draws attention. As an example, a lot has been made of the incident where a Canadian university opted to pay $20,000 CAD. Would there have been such bombastic coverage if the cause of the loss was, say, a ruptured water main caused by operator error? Not likely…

I can’t help but wonder if this is not a manifestation of a couple of things:
  • one, a variation of what’s called the Principal-Agent problem: an economic transaction where there is an expectation that an agent will act on behalf of a principal, but the agent instead acts in their own interest. In this case, bolstering the issue of ransomware above and beyond other relevant topics.
  • two, just your garden-variety ‘availability bias’ from behavioural economics, where the ease with which we recall something inflates its perceived rate of occurrence.

In either case, we can take a peek at the well-known Verizon Data Breach Investigations Report. What do we see? Verizon’s DBIR shows that ransomware, even as a subset of crimeware, is not as prevalent as other attacks. See figure 21 on page 24 of the 2016 report.

“’Advice’ once more, dear friends, advice once more”

Wrapping up, then. There is a fantastic paper by Cormac Herley, from Microsoft Research – So Long, and No Thanks for the Externalities – in which he discusses how users ignoring security advice can be the rational economic decision, when taking into account the costs of acting on some security advice. The paper is from 2009 and is still extremely relevant. I consider it mandatory reading for any security professional.

Taking that into account, how should we frame security advice about ransomware?
(One could argue whether ransomware is exactly the change in costs that invalidates those conclusions. Might be an interesting avenue to pursue…)

At least to me, too much of the security advice we see about ransomware is not taking into account the aggregate cost of acting on such advice.

Ultimately, the protection methods have to be feasible to implement. With that in mind, here are a few recommendations.

For individuals:
  • Be aware of your own limitations and biases as you interact online. To the extent that it is possible, incorporate safe practices.
  • Leverage the automated protections you have available – modern browsers have sophisticated security features, your email provider spends a ton of resources to identify malicious content, etc…
  • Devise and implement a backup system that fits your comfort level, balancing the frequency of backups with their associated hassle (a minimal sketch follows this list).
  • Periodically check and possibly reduce your exposure by moving content to off-line or read-only storage. Just like you wouldn’t walk around at night in a risky neighbourhood with your life savings in your pockets, make it a practice of limiting how much data is exposed.
  • If infected, don’t panic. Keep calm and, if you choose to pay, act promptly to avoid an increase in the ransom demand.
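To make the backup point above concrete, here is a minimal sketch. The paths and the external-drive mount point are hypothetical, and it assumes a recent Python on a machine where the destination drive is normally kept unplugged or read-only:

```python
# Minimal versioned-backup sketch (illustrative only; paths are hypothetical).
# Each run copies the source folder into a new date-stamped folder on a
# secondary drive, so an infection today cannot silently overwrite yesterday's copy.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path.home() / "Documents"   # what to protect
DEST_ROOT = Path("/mnt/backup")      # hypothetical mount point of an external drive

def run_backup() -> Path:
    target = DEST_ROOT / f"documents-{date.today().isoformat()}"
    if target.exists():
        raise SystemExit(f"Backup for today already exists: {target}")
    shutil.copytree(SOURCE, target)
    return target

if __name__ == "__main__":
    print(f"Backed up to {run_backup()}")
```

The mechanics matter less than the habit: whatever tool you use, the copy only protects you from ransomware if it ends up somewhere the infected machine cannot reach afterwards (an unplugged drive, read-only media, or versioned cloud storage).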

For corporate users, similar advice applies, boiling down to “don’t base your security architecture on the presumption that users are infallible at detecting and reacting to security threats”. Back it up with technology. On a tactical level, a few extra things come to mind:

Prevention
  • Verify that current perimeter- and endpoint-based scanning of executables/attachments is able to identify/catch current strains of malware (ask your vendor, then check to make sure). It might be a sandbox approach, endpoint agents, gateway scanning, whatever. Belt & suspenders is a good approach, albeit costly.

Detection
  • Consider application-level monitoring for system calls on the endpoints. This includes watching for known extensions, as well as suspicious bulk changes to files (a toy sketch follows this list).
  • Consider monitoring data-center activity for potential events of bulk file changes such as encryption. Yes, there may be false positives.
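As a toy illustration of the endpoint-monitoring idea above (a sketch only: the thresholds and extension list are placeholders, it assumes the third-party `watchdog` Python package, and any real EDR product does this far more robustly):

```python
# Toy endpoint watcher: flags known ransomware file extensions and bursts of
# file modifications that may indicate bulk encryption. Illustrative only.
import time
from collections import deque

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

SUSPICIOUS_EXTENSIONS = {".locky", ".crypt", ".encrypted"}  # example strains only
BULK_THRESHOLD = 100        # this many file changes...
BULK_WINDOW_SECONDS = 10    # ...within this window triggers an alert

class RansomwareHeuristics(FileSystemEventHandler):
    def __init__(self):
        self.recent_changes = deque()

    def on_any_event(self, event):
        if event.is_directory:
            return
        path = str(event.src_path)
        # Heuristic 1: a file picked up a known ransomware extension.
        if any(path.endswith(ext) for ext in SUSPICIOUS_EXTENSIONS):
            print(f"[ALERT] suspicious extension: {path}")
        # Heuristic 2: too many file changes in a short window (possible bulk encryption).
        now = time.time()
        self.recent_changes.append(now)
        while self.recent_changes and now - self.recent_changes[0] > BULK_WINDOW_SECONDS:
            self.recent_changes.popleft()
        if len(self.recent_changes) > BULK_THRESHOLD:
            print(f"[ALERT] {len(self.recent_changes)} file changes in "
                  f"{BULK_WINDOW_SECONDS}s -- possible bulk encryption")

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(RansomwareHeuristics(), path="/home", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```

Expect false positives – build jobs, backups, and big file syncs all look like “bulk changes” – which is exactly the tuning work that real detection products spend their effort on.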

Response/Containment
  • Re-visit the practices that allow users to mount multiple network shares.
  • Make sure the Incident Response playbooks include ransomware as a specific scenario. Prepare for a single-machine infection, for multiple machines being hit, as well as for the scenario where both local and networked files are encrypted. While I’m skeptical of survey data such as this, getting familiar with how bitcoin transactions work might be a worthwhile investment.

To me, ransomware is here to stay: it leverages too many human and economic aspects to simply vanish. As with many other “security” issues, this is just another one that was never just a technology problem, but a social and economic one. InfoSec professionals should keep that in mind, remembering that the solutions are not always technical…

And to purveyors and enablers of ransomware, “a plague on your houses!”

Professional Certifications & [Behavioural] Economics

I was thrilled to see many people responded well to my earlier post on certifications in the context of information economics (particularly information asymmetry). There was lots of interesting feedback, including some that were somewhat critical of certifications in general.

This led to an interesting question – what are the negative aspects of professional certifications, if any?

Again, we can use economics. There’s quite a few things to keep in mind…

First, there’s no denying that pursuing certifications can have significant costs, both explicit and implicit. Some of the clear explicit costs include preparation materials, tuition, travel costs, not to mention the exam fees themselves: some run into thousands of dollars. It pains me to see how expensive it can be to prepare for some certifications, knowing that in many cases candidates may be pinning too much hope on just having “that” cert.

There are also implicit costs. Think of the hours of studying, preparing materials, etc… It is not uncommon for some of the more advanced certifications to eat up hundreds if not thousands of hours of preparation. The ‘opportunity cost’ of missing out on months or years of ‘regular’ life can be staggering.

Are these costs worth it? It depends, of course. In many cases, I think the answer is yes, but I want people to know what it is they’re getting themselves into. More than ever: Caveat emptor (buyer beware!).

I also wanted to explore something else: how can having a certification negatively affect you? This brings us to the extremely interesting field of behavioural economics.

Behavioural Economics

Behavioural Economics is not one of the ‘foundational pillars of economics’ – those would be macroeconomics and microeconomics (of which information economics is a subset). Rather, it is more of a multi-disciplinary application of several fields – sociology, [micro]economics, psychology, finance, … It looks at how sociological, psychological, cognitive, or emotional factors can affect economic decisions and processes.

Behavioural economics has taken the world by storm over the past few decades, notably with the work of Daniel Kahneman and Amos Tversky on Prospect Theory, followed by many others. It was Nobel-worthy (though sadly Amos passed away before they were awarded the prize, and Nobels are not awarded posthumously). For those who are interested, I highly recommend the works of Dan Ariely, a popular researcher from Duke University. Dan has several books, blog posts, online courses, and even movies on Behavioural Economics.

Within behavioural economics, one area of great interest is cognitive biases – how our quirky little minds often behave in non-rational but predictable ways. There’s dozens of biases that have been identified – I recommend this Wikipedia page as a starter…

There’s lots of discussion about why these biases exist. My simplistic take is that the human mind evolved over millions of years and is not yet adapted to the changes that civilization has introduced over the past 10,000 years or so. The behaviour that would save you from being eaten by a wild animal in the savannah or help you survive a harsh winter is the same behaviour that nowadays makes you susceptible to bad products on late-night TV and binge eating…

Let’s look at just a few biases, effects, …:
  • Endowment Effect. This is the notion that if you happen to “own” something, you value it more than if you don’t.
  • Loss Aversion. Somewhat related to endowment, this is the key insight that one feels the pain of loss of a certain amount ‘x’ as greater than the pleasure of gaining the same amount ‘x’ (a small numeric sketch follows this list).
  • Availability Bias.  You’ll attribute more importance/frequency to information that you have come across recently.
  • Cognitive Dissonance. The stress caused by holding contradictory thoughts and the rationalizations that are done to resolve this.
  • Social Proof and variations (group bias & others). When one assumes the behaviours of others to be correct.
  • Sunk Cost Fallacy. Continuing to invest in something because so much has been spent on it already.
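To put a number on the loss-aversion bullet above, here is a small sketch using the Kahneman–Tversky prospect-theory value function with their commonly cited 1992 parameter estimates (the exact parameters vary by study; these are illustrative):

```python
# Prospect-theory value function (Tversky & Kahneman, 1992 median estimates):
# diminishing sensitivity to both gains and losses, with losses weighted
# roughly 2.25x as heavily as equivalent gains.
ALPHA = 0.88    # curvature for gains and losses
LAMBDA = 2.25   # loss-aversion coefficient

def value(x: float) -> float:
    """Subjective value of a gain (x >= 0) or a loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# A $100 loss "feels" more than twice as bad as a $100 gain feels good:
print(value(100), value(-100))   # roughly 57.5 vs. -129.4
```

Which is why the “your certification has expired” email stings far more than passing the exam ever felt good.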
There’s *tons* of information on biases, influence, manipulation, etc… Too much to list here. A particularly popular author on the topic is Robert Cialdini. Well worth taking the time, trust me.

Cognitive Biases & Professional Certifications

So, how does all this apply to professional certifications? Quite well, actually.

Note: I’m more familiar with the network security space, so that’s where my examples come from. When thinking of certifications, I’m thinking of the likes of Cisco, Microsoft, VMware, Juniper, Novell, or groups such as SANS, CompTIA, or the (ISC)2. In this space, many vendors have formal certification programs, often with multiple levels of certifications (associate, junior, senior, master, …) and regular recertification requirements. This makes sense, as today’s technology darling ends up as tomorrow’s legacy option, supplanted by a newer one.

In the IT industry, a company that sells, distributes, or provides services for a vendor’s products often receives benefits from that vendor that are tied to certifications: better margins, warranties, marketing dollars, easier access to support resources, etc… This incentivizes having and maintaining a healthy number of professionals on staff with the required level of certifications. This, in turn, means that someone working at these companies is strongly incentivized (or even required) to obtain these certifications.

Even if you don’t work for an IT reseller/distributor/integrator/…, there’s a strong message from vendors incentivizing you to certify, to show your skills, etc…

Why is that?

Because, amongst other things, having a professional certification from a vendor influences you, even if just a little.

If you, as a professional, worked to obtain a professional certification from a traditional “vendor”, you can expect the following to occur unconsciously:
  • due to a desire to resolve any cognitive dissonance, you’ll hold a generally more positive opinion of that vendor. “If I went through the effort of certifying on that vendor’s product *and* I consider myself a good person, then that vendor must be good too.”
  • because of the endowment effect, you’ll likely hold a more positive opinion of others who have the same certification. This may come through on sales calls, hiring, etc…
  • the availability bias will kick in when thinking of alternatives, meaning you may have an easier time recalling a specific vendor’s offerings or technology, particularly if they refer to [re]certification topics.
  • social proof will kick in when you see that certification on prominent display by vendors when visiting trade shows, elections, … Vendors often offer certification exams at their shows (sometimes even waiving the exam fees): it is extremely convenient for the test taker, but the visual of hundreds or thousands of your peers taking those exams is a shining example of social proof in action.
  • it’ll likely be really difficult to let go of that cert, or that particular vendor. That communication saying “your certification has now expired” is really painful. Such is the impact of the sunk cost effect (and loss aversion).

Now, this does NOT mean that everyone is going to mindlessly give in to their biases, but that these biases exist, and some will give in sometimes. Given enough nudges, that’s a powerful effect…

Vendors know this, and use it as an instrument. It helps them sell more product – be it an IT product, training, or a certification. It helps them maintain their base of customers, it helps them maintain a wide network of partners, which expands their reach, and so on…

They’re well within their rights to do so, just as you are within your rights to be aware of it and judge things on their proper merits.

Wrapping up

We’re all biased and susceptible to manipulation at different levels. (Yours truly included: among my many, many failures, I once fell – hard – for the “free Disney tickets pitch”. It hurt, it cost me money and stress, but I learned my lesson and moved on.)

I think professional certifications can be wonderful things:
  • They can provide a roadmap for learning, checkpoints for measuring your skill.
  • They can be a very effective (though not perfect) means of resolving the information asymmetry inherent in professional situations, both as signals and as screens.
  • They can help establish relationships with like-minded professionals.
That being said, we just saw how there are potential negative aspects to certifications: explicit & implicit costs, along with being more vulnerable to cognitive biases that may work against your best interests.

Again, I’m absolutely not against professional certifications, quite the opposite actually! It is precisely because I value them that I want people to be cognizant of what the benefits and yes, the pitfalls, of certifications might be. That guy Sun Tzu said it best… 🙂

On Plane Hacking… We’re missing a point here.

[ Target Audience: Our InfoSec industry… ]

Yes, I know we have been inundated with the discussion on Chris Roberts’ saga. Feel free to read the thousands (closing in on a million as I write this) of links about the whole situation to catch up. FBI affidavits, ISS, “sideways” (yaw?), NASA, … I particularly liked Violet Blue’s summary of the recent history of airplane security. Another very interesting post is Wendy Nather’s, here.

Yet I think a critical point is being lost in the debate of whether he was able to do what he did or not.

I don’t care whether he was actually able to interfere with avionics. Being uninformed about it, I prefer the heuristic of believing the aviation experts that have, in great numbers, called out ‘B.S.’ on the claims.

What I *do* care about is the alleged pattern of behaviour of trying this with disregard for the possible consequences to the public.

I am NOT against security research, holding those responsible to task, responsible disclosure, “sunlight is the best disinfectant”, … I think all those doing responsible research on car hacking, medical devices, avionics (read Violet Blue’s excellent summary), etc… deserve our gratitude and support.

What I AM strongly against is the apparent complete disregard for the well-being of fellow passengers. It is alleged that this was done ’15 or 20 times’ on several flights. I don’t know if the flights were mid-air or not. I don’t know if anyone noticed, or should have noticed. What I do know is that the consequences of those security tests could have affected innocent bystanders. That is NOT ok.

“Oh, but if he couldn’t really affect the plane, it’s ok, right?” NO, it is not. What if there were adverse consequences? What if the pilots noticed something and – being safety conscious – decided to divert flights?

Some might say – “ok, that is the price we must pay for security”. It was not his call to make, was it?

As an industry, we can’t carry around this sense of entitlement and expect to be seen in a good light by the broader public.

He has apparently shown poor judgement on other occasions – talking to the FBI on his own without legal representation is another example, but that just affects him.

That being said, I echo Bill Brenner’s sentiment on moving forward. I’ve never met Chris (hope to one day) and I hope he is able to learn from this debacle and grow as a professional.

For the broader industry, let’s look at the mess this has created and learn a few lessons too…

Legal is the new black (DtSR #141)

Once again I was listening to the DtSR podcast and came across interesting things I want to comment on.

(No, I did not create this blog just to comment on Rafal and others, it’s just that their show gets me thinking. Thanks guys! 🙂 )

As an industry, we’ve spent years complaining that the powers-that-be – senior management, elected officials, law enforcement, … – don’t take InfoSec seriously. Much like the dog that chases the car, we chased attention for our issues, in the well-meaning hope that proper attention would mean more support, more resources, and more results.

(I recall reading Gunnar’s excellent post back in February and thinking that it was – and still is – solid advice for security teams.)

[W|H]ell, it should not surprise us, then, that when they listened, they listened with their own perspective on things. And, as it turns out, their perspective includes not only looking at Cyber/InfoSec issues in the broader context of Risk Management, but also keeping their own incentives in mind, be they re-election, reputation, or not rocking the boat on ‘how things are done’ ‘at that level’.

Now we’re seeing the other side of all that attention. Cue the law of unintended consequences.

Now, we have eager lawmakers making rules that may or may not make sense. Now, we have security services on edge over cyberthreat claims. Now, we have lawyers interjecting themselves into security disclosure. Now, we have vendors increasing lobbying efforts and touting liability shields as benefits of their solutions (though, as Steve Ragan suggests, perhaps not as broad as initially thought).

What to make of all this? To me, the trend is clear: now that we’ve been successful in getting the CxO/BoD/… spotlight shining on us, this will be the new normal. Expect much more involved conversations with legal/counsel, and expect organizations to use more of the tools in their arsenal to address the risk. Don’t be surprised if, instead of appealing to ‘the better angels of our nature’ and collaborating with researchers, your organization chooses to deploy strongly-worded cease & desist letters. This is how conflicts get resolved by those we’ve asked for help.

So, dust off that suit, brush up on your Latin, lose the lawyer jokes, and embrace a more complicated but ultimately broader and more impactful set of responses to CyberSecurity.

Alea iacta est.