Converge and BSides Detroit 2017

This past week I had the privilege of presenting at both Converge and BSides Detroit. It was great to see the energy and commitment from the local community, as well as the practical and insightful content from so many presenters.

Thanks to the organizers! It was also great to have video recordings from Irongeek. Converge videos and BSidesDetroit videos are already available.

This is just a quick post with the links to the content from the sessions I delivered:

  • Converge – The 4 Eyes of Information Security. TL;DR: An introduction to the 4 eyes framework from ClearerThinking.org and some example applications to defensive infosec. Slides. Video.
  • BSides – Navigating Career Choices in InfoSec. TL;DR: A description of useful career planning concepts and methods, referencing Wardley maps, PersonalMBA, Cal Newport, and more. Slides. Video.

I’d love to hear your feedback. Comment here, reach out on Twitter, LinkedIn, etc…

 

The “Four Eyes” in Information Security

This is not a post about:
  • Middle-school-level verbal abuse of those who wear glasses.
  • The notion that a specific transaction must be approved by at least two people.
  • Clever wordplay about the dynamics of the relationship between the “Five Eyes” nations – US, Canada, UK, Australia, New Zealand – as it relates to surveillance and any recent bad blood between leaders.

Rather, it’s about a different way of looking at problems. Two questions inspired this post:
 
What if we broaden the arsenal of tools/methods we use for making progress with security initiatives?

What if we have been misdiagnosing issues and thus keep applying the wrong remedies?

ClearerThinking is an initiative/site/movement/… founded by Spencer Greenberg, a mathematician and entrepreneur, with a simple mission: “help people make better decisions”. I came across his site as I researched cognitive biases in the context of behavioural economics, and have been an avid reader ever since. He’s got tonnes of material on all sorts of topics, from helping people cross the political spectrum, to evaluating how rational you are, to – one of my favourites – an analysis of just how much your time is really worth. If you have the time, you might want to take a look. If you don’t have the time, then you absolutely must…

In “Learn a simple model for tackling complex problems”, Spencer describes the “4 Is” framework that he recommends for looking at issues. His post includes a link to a short video of a presentation he gave on the topic. In essence, his message – advice to other entrepreneurs – boiled down to:

When looking at a “persistent problem” (something that is important, looks insurmountable, and has not yet been resolved), it is critical to understand where others have failed. This can apply both at a societal/world scale, as well as within organizations. The failure will usually derive from one (or a combination) of the following causes:
  • Individuals or groups were not exposed to the right incentives – positive or negative – to solve the problem.
  • There was ignorance about how to handle the problem – missing information that kept the effort from moving forward.
  • While other elements were in place, the initiative suffered a severe lack of resources due to limited investment in the issue.
  • Finally, while all elements might have been in place, human irrationality – through cognitive biases or a poor decision-making process – impeded action.
Hence, the “4 Is”: incentives, ignorance (or information), investment, irrationality.

Once the issue has been properly diagnosed, there are different types of remedies for each:
  • Incentives. Well, create the right incentives: these can be positive (monetary rewards, recognition, etc…) or negative (introduce regulations/rules). There’s a famous example of how FedEx solved issues with delivery delays by changing the compensation model for its workers so that it rewarded them for finishing the job faster.
  • Ignorance. In this case, identify how to provide the additional information. Is it a matter of simply educating the participants about something they didn’t know or thought incorrectly? Spencer uses the example of AIDS-prevention campaigns that fail because local participants held wildly incorrect views about how contraceptives work. Or is it a matter of the information needed not existing in the first place? In that case, the answer might be basic research, or data collection outside of the organization.
  • Investment. Here the answer is, quite simply, to find ways to redirect more resources to the problem. It might be an issue of justifying additional budgets, or perhaps redirecting resources from elsewhere. The example Spencer uses is poignant: depending on your values, you should care that a lot more resources are spent saving pets from cruelty than non-pet animals. Should the money and attention be redirected?
  • Irrationality. Finally, the way to address human irrationality (be it cognitive biases or flaws in decision making) can include the use of checklists (to reduce mental strain during stressful times), or proper design of system defaults. This is what behavioural economics practitioners refer to as “choice architecture”, and there are fantastic examples of its effect with organ donations and medical prescriptions.
To be clear, a particular issue might result from a combination of these, but without applying this kind of clearer thinking, we’re bound to fall short in addressing the problem.

This is a great framework for looking at problems. I love it!

Applying it to Information Security

To me, the application of the “4 Is” framework to security is direct, simple, and essential. Let’s look at a few scenarios:
  • Effectiveness of Security Awareness training. Security awareness is a common component of security programs, but it is often structured heavily around the “information” side of things. Could the answer to better security behaviour be better incentives (again, positive or negative)? Or perhaps the matter is irrationality, and we need to review the choice architecture (defaults)?
  • Deficiencies in rolling out patches. Is poor patch deployment a matter of information (teams don’t know when to roll them out), investment (it’s too onerous to roll them out using current mechanisms), incentives (no one other than security cares, “don’t fix what ain’t broken” mentality), or even irrationality (patches are too low on the checklist of things to do)?
  • Non-compliance with internal or external requirements. This is another area where a deeper look into the issue using the “4 Is” framework can yield interesting results. In many cases, we seem to jump to conclusions and infer the cause of failure from our preconceived notions. Is that really the case?

The list goes on. We could cover software quality/security, risk management, technology adoption, security culture, …
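
The framework is simple enough that, for those who like to see ideas in code, it fits in a few lines. Here’s a minimal Python sketch of the diagnostic as a checklist – the question and remedy wording below are my own illustrative paraphrases, not ClearerThinking’s:

```python
# Toy diagnostic: walk a security issue through the "4 Is" before picking a
# remedy. Questions and remedy classes are illustrative paraphrases only.
FOUR_IS = {
    "incentives": (
        "Do those involved gain or lose anything by solving this?",
        "Create positive (rewards, recognition) or negative (rules) incentives.",
    ),
    "ignorance": (
        "Do they know how to solve it? Does the needed information even exist?",
        "Educate, or fund research/data collection to produce the information.",
    ),
    "investment": (
        "Are enough people, budget, and time allocated to it?",
        "Justify additional budget, or redirect resources from elsewhere.",
    ),
    "irrationality": (
        "Are biases or poor decision processes blocking action?",
        "Use checklists and better defaults (choice architecture).",
    ),
}

def diagnose(issue: str, causes: set[str]) -> None:
    """Print the question for each 'I', and the remedy class where it applies."""
    print(f"Issue: {issue}")
    for name, (question, remedy) in FOUR_IS.items():
        applies = name in causes
        print(f"  [{'YES' if applies else 'no '}] {name}: {question}")
        if applies:
            print(f"        -> {remedy}")

# Example: poor patch deployment diagnosed as incentives + investment.
diagnose("patches not rolled out", {"incentives", "investment"})
```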

Moving forward

I really like looking at other areas of knowledge for how we can apply their learnings to information security. This post was an example of that.

Hopefully, this post gives you another tool in your toolset when going about your work with security.

When looking at a security issue, think it through: how much of it is a matter of incentives, information, investments, or irrationality? The answer might not be obvious, and will likely help you …
Note: all images on this post are from clearerthinking.org.

O’Reilly Security Conference

Disclaimer: I was a speaker at the conference. As such, O’Reilly Media covered most of my travel expenses, as well as provided me with a Speaker pass. If you think such benefits, nice though they were, had a significant impact on my opinion, to me it just means we don’t know each other very well yet. Trust me when I say that they do NOT… Happy to discuss as needed…

TL;DR: The experience of being part of the inaugural O’Reilly Security Conference was amazing. The content I watched was excellent, the venue/logistics worked really well, and I really liked the “vibe” at the conference. 10/10!


Source: O’Reilly Media – click for license details.

This longish post is about my experience at the O’Reilly Security Conference. I summarize what I learned from each session I attended, as well as general opinions. I can’t think in prose, so this is mostly in list format. Without further ado:

Format and Venue

  • 4-track conference, held at the New York Hilton Midtown.
  • Pre-conference training and tutorials, an Ignite session, then 2 days with morning keynotes followed by morning and afternoon sessions.
  • Good breaks in between sessions (ranging from 15 minutes to 1 hour).
  • No idea on attendance; likely in the mid-hundreds.

Tutorials and Ignite

I attended Jim Manico’s half-day tutorial on “Modern IdM” hoping to learn more about Web authentication and I was not disappointed. He covered OAuth in detail, as well as session management, and recommendations around password storage. He’s a very energetic and engaging speaker, and time flew by.

The afternoon was reserved for the Apache Drill tutorial led by Charles Givre, from Booz Allen Hamilton. Charles took us through the rationale for Apache Drill – basically a SQL-supporting unifying front-end for disparate back-end data stores – and led exercises on data manipulation. Drill can be a fantastic tool for a data scientist to easily get at disparate data sources.  I’m a SQL newbie and struggled with some of the exercises, but that is on me and not on the tutorial. He also based the exercises on a pre-configured VM that has other data science tools. This will come in very handy…
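
For the curious, here’s roughly what the “SQL over anything” experience looks like. A minimal sketch, assuming a local Drill instance (default port 8047) and the third-party pydrill client – adjust to your own setup:

```python
# Plain SQL over a file, with no schema definition or ETL step first.
from pydrill.client import PyDrill

drill = PyDrill(host="localhost", port=8047)
if not drill.is_active():
    raise RuntimeError("Start Drill first (e.g. the drill-embedded shell).")

# cp.`employee.json` is a sample data set bundled with Drill; the same query
# shape works against CSV/JSON/Parquet files or other configured data stores.
rows = drill.query("""
    SELECT position_title, COUNT(*) AS cnt
    FROM cp.`employee.json`
    GROUP BY position_title
    ORDER BY cnt DESC
    LIMIT 5
""")
for row in rows:
    print(row)
```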

In the evening, Jerry Bell and Andrew Kalat hosted the Ignite talks (lightning-fast talks with auto-advancing slides). Jerry and Andrew host the Defensive Security podcast, probably my favourite security podcast. It was a privilege to chat with them. The talks were interesting, ranging from the need to shy away from hero-focused security work, to how we can do better at training/education, to the use of existing intelligence/data sources. Great talks, easy-going format.

Karaoke…

Then there was… karaoke… For those who are not familiar, “slide karaoke” is a fun-filled/terrifying (depending on your point of view) format where someone is presented random slides at a fixed time interval and they have to “improv” their way to a somewhat coherent talk structure. Andrew and Jerry asked for 5 volunteers… and I was one of them…

I don’t quite remember what all my slides were, but there were references to llamas, some sort of potato-based disease, and rule breaking. 🙂  I’m just hoping I made it entertaining for the audience…

Lesson learned: Courtney Nash is a master at this: she was funny, coherent, engaging, … She’s a very tough act to follow, which just happened to be my spot in the roster… You have been warned 🙂

Seriously, though: it was great fun, and I hope others join in. It was a great environment, people were having fun, and part of being in this industry is this sense of community that we build. It was a privilege to be able to take part in that.

Keynotes

On day 1, following the intro from Allison Miller and Courtney Nash, Heather Adkins from Google kicked things off by showing us how some of the main classes of security incidents – be they insecure defaults, massive theft, or instability – have been happening in different forms since the 1980s. After pointing to the increasing siloization of our industry as a possible cause, she urged us to think about broader platforms, and to design with a much longer timeframe in mind.

Richard Thieme took us through a sobering view of the psychological challenges in our career. Drawing parallels to the intelligence community and the challenges faced there, Richard rightfully reminded us to stay mindful of our needs as individuals and to build adequate support networks in our lives.

Becky Bace did a great job of comparing the challenges of infosec today with the early days of the auto industry, and how we can use some of the lessons learned there to improve it. Given my interest in economics and incentives, I was silently clapping pretty much all the time.

Unfortunately I missed most of the day 2 keynotes – I look forward to watching the videos later. What I did catch was the latter part of Cory Doctorow‘s impassioned and cogent plea for more involvement from us as individuals in the immensely important debate about the very nature of property and democracy in modern society. There are key discussions and precedent-setting court cases taking place now, and many of the key societal instruments we take for granted are at risk.

Day 1 Sessions

Speak Security and Enter. Jesse Irwin led a great session focused on how to better engage with users when it comes to discussing security and privacy. She laid out very well-defined steps for improving. If I could summarize her session in one idea, it would be: have more empathy for your user community. From using relatable examples, to framing the issue positively or negatively, and many other suggestions. Hearing her tell of the adventure of teaching security to 8-year-olds was priceless!

Notes from securing Android. Adrian Ludwig from the Google Android team took us through a data-driven journey into the Android security ecosystem. After reminding us that Android security must accommodate everything from $20 phones to modified units used by world leaders, Adrian focused on three aspects: active protections made by the Google ecosystem, options available for enterprise decisions (such as allowing external app stores or not), and details about the Android OS itself. He made a very compelling case that the security architecture of a modern Android-powered device such as the Google Pixel rivals the other options in the mobile ecosystem (iOS, Windows Phone). This was one of the best talks I attended.

Groupthink. Laura Mather has had a very interesting career, including time at the NSA, eBay, founding SilverTail (where I had the pleasure of working for her), and now leading Unitive. Her talk was not a ‘security’ talk, but rather a look into the issue of groupthink, often caused by unconscious biases. Fundamentally, the variety of challenges in the modern security environment should be met by having a diverse workforce generate ideas based on diverse points of view. In order to achieve this, we need to work on the lack of diversity. Laura pointed out specific ways to avoid unconscious bias in hiring – particularly being aware, as an interviewer/hiring manager, of not looking for someone “just like me”. Hiring decisions should be matched on values, not on superfluous characteristics that lead to biased outcomes.

UX of Security Software. Audrey Crane leads a design firm, and made the case for proper UX design taking into account the people who will actually use the product. Her firm conducted research into usage habits related to SOC roles, and came up with a few personas (different from the typical ‘marketing’ personas) and then showed an interface design that takes those personas into account. Her recommendations are for vendors to take this aspect of the product creation process seriously, and for buyers of software to not only demand better software from a usability perspective, but to actively try out any software being purchased with the intended audience.

Social Scientist. Andrea Limbago brought a “social scientist” perspective to the broad issues around information security. She framed the discussion in terms of Human Elements, Geopolitical Trends, and Data-Driven Security. The human elements section looked at a structure-agent dynamic (top-down versus behavioural) and advocated approaches to evolving the security subculture. Very interesting, as were the comments around security still operating within a Cold War framework, and the gap in the usage of data within security conversations.

Day 2 Sessions

Are we out of the woods?. Kelly Harrington from the Google Chrome team talked about Web security issues. She covered some key issues – how updates are not universal, how older devices get attacked, and the scourge of what Google calls Unwanted Software – and delved into details about the exploit kits (Angler, Rig, and others), trends of attacks on routers, plus examples of malicious behaviour by Unwanted Software. She wrapped up by sharing a little about what Google’s Safe Browsing API does and by giving actionable advice on web security. This was a great talk to complement the one on Android security. Finally, extra points for her for the Jane Austen references… 🙂

Criminal Cost Modelling. Chris Baker – a data scientist at Dyn – took us through a whirlwind tour of some underground markets and the actual data he found there for pricing stolen goods, exploit kits, or DDoS services. It was refreshing to see someone dive beyond “oh, underground markets exist” into actual markets, prices, goods, and the possible economic issues that exist in those markets. I loved this session. If there was one session I wish could have been longer, this was it. I’ll be watching the video when it comes out, many times over.

Economics of CyberSecurity. This session was delivered by yours truly. Happy to announce that slides are available here. I focused on how a brief understanding of economic concepts – Marginal Cost of Information Goods, Information Asymmetry, Externalities, and concepts from Behavioural Economics – can help us rethink some of the broad challenges we face. I hope the audience liked it. I was happy with my delivery and did pick up on a few things I want to improve. I really hope to have the opportunity to keep bringing this message to others.

No Single Answer. Nick Merker – now a lawyer but formerly an infosec professional – and Mark Stanislav – now a security officer with experience as a security consultant – focused on cyber insurance. Their session went into the difference between first-party and third-party insurance, then delved into the details of what cyber insurance options exist, what they typically cover (or not), and how these products are currently priced and sold. They also covered some misconceptions around the role of insurance in a risk management program, how infosec should play a role when purchasing cyber insurance products, and how a well-defined and executed security program can help with insurance premiums. I learned a ton, and really liked the session.

Sponsors/Logistics/Others

The sponsor area was relatively small (maybe 10-15 sponsors total) but the people I spoke to were knowledgeable and the selection was varied. Not so much your typical security vendors, but more those offering solutions that fit into a more modern architectural view of security. There were options for web app security, container security, source code security, etc… I did not focus much on it, given my role as an individual contributor.

The conference schedule and details were available via the O’Reilly app (iOS and Android) and things worked well. One suggestion I have is that the app could offer a toggle to ‘hide past events’ on the Full Schedule view, as that would help people choose their next sessions without having to scroll around so much…

Food options during the breaks were varied and quite nice. I loved that we had sushi available on one of the food stations.

As a Speaker

My “field report” would not be complete without a comment about my experience proposing the talk and later as a speaker.

The submission process was well defined, the guidelines for what should go in the submission were clear, and the timelines were very fair. I followed the process via the website and the questions I asked the speaker management team were answered promptly and efficiently. Major thanks to Audra Montenegro (no relation) and her team.

The organizing committee has been very transparent about what their side of the selection process was like. This is tremendously insightful and helpful for future proposals. I particularly liked the use of blind reviews. Blind reviews help us as an industry increase the quality of the content that makes it onto the stage, AND increase the chance of hearing from a more diverse pool of contributors. What’s not to like?

Prior to the event, I was able to connect with Courtney Allen and we collaborated on a short email-based interview (which you can find here). She was fantastic to work with and has a keen insight into the role that O’Reilly Media can play in the security landscape.

Bottom line: if you have defensive-focused security content you want to present, you’re open to being evaluated on the merits of your content, and you want to work with great people putting it together, O’Reilly Security should definitely be on your short list of conferences to submit to.

MSSP Blues and the Theory of Agency

Introduction

I like the approach of listening to a good podcast and then using it to expand on a particular idea. This time, I listened to Brakeing Down Security’s fantastic episode where they discussed the fallout from a very rocky response to a security incident by an unnamed Managed Security Services Provider (MSSP). Bryan Brake talked to Nick Selby and Kevin Johnson, based on Nick’s original blog post. Please read the original post and listen to the podcast, but here is the summary:
  • Nick helped an unnamed customer respond to a security incident.
  • This customer had a long-standing contract with an MSSP for monitoring their network, which included having dedicated gear on-site.
  • When Nick & customer got the MSSP involved, they had a number of nasty surprises:
    • The monitoring gear on-site was not working as expected, and had actually not worked for a long time.
    • The customer-facing employees at the MSSP were not only not helpful but almost evasive. Bailing out on phone calls, not giving straight answers, …
    • The actual value the customer was getting from the MSSP was far less than what they imagined, and was not useful during the incident response.

In short, a series of horrible revelations and interactions. Bryan, Nick, and Kevin make a TON of excellent points on the podcast. Worth the listen.

This whole incident reminded me of a topic I’d been meaning to write about…

 

“Agents” have “Principals”, but do they have “Principles”?

How do you feel about hiring someone to do something for you? Maybe it’s an employee you bring in to your company, maybe it’s a mechanic you hire to look at your car, maybe it’s a lawyer you call on to help you with a contract negotiation.

This is a very common economic transaction. When looking at it, we often use specific terminology: those doing the hiring are ‘principals’ while those being hired are ‘agents’.

In an ideal scenario, the person/company you hire (the ‘agent’) has their interests met by the compensation they’re receiving, and will perform their tasks in a way that meets your interests (you’re the ‘principal’). In all those cases – and pretty much any relationship like it – there’s always a potentially thorny issue: despite being compensated for their efforts, are those ‘agents’ acting in a way that is aligned with the ‘principal’s’ interests? What happens when interests don’t align? This happens all the time:
  • Is a mechanic over-estimating the effort to fix a car?
  • Is the lawyer extending the negotiation because they bill by the hour?

Say hello to the “Principal-Agent problem”, a well-known problem in economics (and political science). It is also known by other terms, such as “theory of agency” or the “agency dilemma”. Fundamentally, it is the study of the dynamics between principals and agents with distinct self-interests in a scenario where there is significant information asymmetry.

Information asymmetry, you may recall, is the situation in which one of the parties in an economic transaction has much more material knowledge about it than the other. There are further nuances on whether the information asymmetry exists before a contract is established – the agent has superior information to the principal from the get-go – or develops post-contract – as the agent begins to work, they realize the discrepancy. These lead to slightly different solutions.


Another common example of Principal-Agent problems is the conflict between a company’s shareholders – who have limited information about how it is run – and the company’s management. Depending on how that management team is compensated, they may make decisions that are not in the shareholders’ interest: boosting the stock price by playing accounting tricks, for example.

Both economics and politics have identified a series of mechanisms to help address Principal-Agent issues, but they fundamentally come down to a combination of:
  • Contract design – how compensation is dispensed (deferred), fixed versus variable, profit sharing, etc…
  • Performance evaluation – both objective and subjective
  • Reducing the information asymmetry – having more information to make informed decisions

 

Back to the MSSP debacle…
 
Now that we have this notion of Principal-Agent fresh in our minds, when we look into the unfortunate MSSP incident we see the clear issues caused by the agency dilemma: there is indication that the MSSP did not perform their tasks with the interests of the customer in mind. That is very unfortunate, and well deserving of the criticism they got…

Still, let’s look a bit deeper into the whole thing. As we do, we see there’s plenty of potential blame to go around (again, I suggest reading Nick’s blog for deeper background):
  • First of all, did the original security team at the customer that chose the MSSP do so with the organization’s best interest in mind? Were they trying to actually implement a proper monitoring solution or were they just trying to check off a ‘have you contracted with a managed security vendor for monitoring?’ item from some compliance checklist?
  • There was plenty of blame for the MSSP not following up on a poorly deployed solution, but what about on the customer side? Why was there no oversight?
  • When the new security team started at the customer, what level of diligence was done on taking on a new infrastructure?
  • Did the management team at the MSSP care that a particular customer was not deployed properly? Did the team/individuals that created the on-boarding run-books for new customers care? Was the implementation team on the MSSP side properly measured on how on-boardings were done?
  • During the initial calls, were the employees of the MSSP acting in their own self-interest of “just get this customer off my back”? Were they empowered to do something but chose not to?
  • Back to MSSP management: did they structure internal operations to empower their employees to handle the exceptions and urgent requests?
One minor point where I differ from Bryan, Nick, and Kevin in their well-deserved roasting of the MSSP is that they seem to assume that the individuals at the MSSP had lots of freedom to deviate from the established procedures. I’m not so sure: it’s one thing for senior, knowledgeable professionals to do so, but it may be radically different for others. Again, what did the MSSP empower their team to do?

I’m being overly picky here to drive home the point that there’s potential for agency issues at multiple levels of the event chain, both within each organization (customer and MSSP) and between them. There can be agency issues between employees and employers, as well as between separate commercial entities.

 

The broader impact

The point for this post is broader than the MSSP debacle. By the very nature of our industry, it is extremely easy for Principal-Agent issues to appear:
  • There is tremendous information asymmetry in InfoSec to begin with: There are too many details to go wrong, things change too fast, too many moving parts, etc… Those who hire us are often not aware of what we do.
  • We have tendencies to compartmentalize information about security itself (“sorry, we can’t talk about this”). This leads to further information asymmetry.
  • With “security” being a latent construct – it is difficult/expensive to observe/measure – our principals have a hard time measuring the effectiveness of security efforts.
  • With the difficulty & cost in hiring for security – be it employees, contractors, or businesses – there is less flexibility and interest in exploring details of contract design.
How do we – as an industry – get better? How do we deal with this? I think it comes down to:
  • First, we need to be aware of the issue and recognize it for what it is: a well-defined economic problem for which there are broad classes of solutions.
  • Then, we should recognize our roles within the transaction:
    • Sometimes as a buyer – hiring outsourcers, buying security solutions.
    • Sometimes as a seller – employee/contractor providing security services/expertise to someone, or selling a security solution/service.
  • Next, within our roles, we should expand beyond the technical nuance – networks, encryption, appsec, etc… – and delve into:
    • clearly define and deliver reporting
    • pay more attention to contract design, service level definitions
    • perform periodic evaluation of the services
    • anticipate where principal-agent issues might arise and address them early on. Maybe it is creating a better report, maybe it is having a lunch&learn on the solution, etc…
  • Lastly, we should continue to grow as a community by sharing information – blogs, podcasts, conferences, … All of that helps to reduce the underlying information asymmetry.
On that final point, I salute Bryan, Nick, and Kevin for their excellent podcast episode, and all the other community participants from whom we all learn so much…

If I had to summarize things:
  • Know what you’re buying. Educate yourself as needed.
  • Know what you’re selling and help your customer understand it as well.
As with so many other things, it’s not only an InfoSec issue, it’s an economic one…

On the economics of ransomware

We blinked, and the world changed on us.

This [long] post is not meant as doom&gloom on the scourge of ransomware, but rather a look at some basic economic aspects of this type of attack, and some recommendations for the future.

So far, 2016 is definitely the year of ransomware. Every vendor is talking about it in their pitches, the media is all over it (good articles here and here), etc. This post just adds to that cacophony, though hopefully with a different perspective.

“Prior Art”: Lots of people are now talking about ransomware, and I’m sure many have in the past too. I’d be remiss if I didn’t point out that Anup Ghosh of Invincea wrote a scarily prophetic blog post on this back in July of 2014! Check it out here. Also, I liked Daniel Miessler’s piece here.

Note: as I discuss these topics, I may sound insensitive to the plight of the victims. It’s absolutely not that: I think ransomware is a scourge that should be eradicated, and that we should bring the full force of law enforcement to bear on it, but I’m pessimistic it can be done.

There are several aspects of ransomware that make it interesting from an economic angle. Let’s explore some of them.

The “Taming” of Externality

First and foremost, to me, ransomware is the first major, widescale threat that significantly reduces the inherent externality of poor security practices. What does that mean?

Up until now, poor security practices by end users resulted in relatively light consequences for the users themselves. In some cases, being used as a spam relay might not even have been noticeable; at worst, there was the rare circumstance where malware resulted in having to reformat one’s PC. Yes, annoying and potentially painful, but manageable. From a behavioural economics perspective, biases such as mental accounting made it even less painful.

The broader costs of that infection – spam being generated, systems that had to be wiped, etc… – were largely invisible to the user. In market economics terms, all these costs were externalities. This means that the agent in the transaction – the user – was not taking those costs into consideration when making their choice – in this case, the poor security practices that led to an infection.

Enter ransomware. Now the user is faced with the painful choice of paying the ransom – actual money being stolen – or facing the imminent destruction of their data. Worse, depending on how a strain of ransomware behaves, it may infect network drives and potentially backups as well. This triggers several well-known behavioural quirks/biases, including:
  • The salience of paying. It’s pretty clear that there is money being lost, and it’s your money (or your organization’s).
  • The immediacy of the request. It’s not something that can be postponed. Criminals know this, and exploit it: in many cases, ransoms increase as time passes.
  • Loss aversion. From Kahneman and Tversky’s work, we know that people tend to be loss averse.

All of this is, naturally, horrible for the user. From an economic perspective, though, it is interesting that this, in a way, “reduces” the externality of a poor security choice. The user now knows full well that their poor choice/practice may result in a non-negligible cost. [Edit: as someone provided feedback to me, just another way of saying “the chickens come home to roost”.] They’re understandably concerned, and rightly so. I don’t see this diminishing soon.

“To Pay or Not To Pay, that is the Question”

The second interesting point is analyzing the dilemma of deciding to pay the ransom or not. Even law enforcement seems ambivalent: recent advice has included both “pay” and “don’t pay”.

There are two things to look at:
  • First, from a societal perspective, the issue is similar to the Tragedy of the Commons, a well-known economic problem. In the traditional Tragedy of the Commons, individuals overconsume a shared resource, leading to depletion. Ransomware is not quite the same: to me, it is closer to the “Unscrupulous Diner’s Dilemma”, a variation of the more traditional Prisoner’s Dilemma in which a group ends up paying more, even though everyone would prefer not to. In the case of ransomware, the individual decision to pay negatively affects the community by supplying the criminals with additional funds – rewarding them for the crime and letting them reinvest in future capabilities for their tools – thus costing everyone more in the future. (A toy numeric illustration follows after this list.)
  • Individually, people and organizations should recognize that the rational economic decision is not simply “is the cost of paying the ransom less than the loss associated with losing the data”. The decision should be based on that cost, sure, but should also take into account:
    • Is that the end of it? Will paying the ransom this one time be an exception? In most cases, hardly… As ransomware proliferates, different gangs will keep attacking.
    • Will paying the ransom even get the data back in the first place? As @selenakyle nicely pointed out recently, there’s little recourse if things go wrong…
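
To make the diner’s-dilemma analogy concrete, here’s a toy calculation in Python. Every number below is made up purely for illustration:

```python
# Toy numbers (entirely made up) for the "pay or not" dynamic: paying is
# individually cheaper, but every payment raises the expected future losses
# imposed on the whole community of potential victims.
N = 100            # potential victims in the community
RANSOM = 500       # cost of paying, per victim ($)
DATA_LOSS = 2000   # cost of losing the data, per victim ($)
FUTURE_HARM = 10   # extra expected future loss imposed on EVERY victim,
                   # per ransom paid (criminal reinvestment)

def individual_cost(pays: bool, total_payers: int) -> int:
    """Total expected cost to one victim, given how many victims pay."""
    direct = RANSOM if pays else DATA_LOSS
    return direct + FUTURE_HARM * total_payers

# If I am the only one deciding, paying looks strictly better...
print(individual_cost(True, 1), "vs", individual_cost(False, 0))  # 510 vs 2000
# ...but if all N victims reason the same way, everyone's cost climbs:
print(individual_cost(True, N))  # 1500, and rising as N grows
```

With these toy numbers, paying remains the dominant individual choice, yet every payment raises the cost borne by the whole community – which is precisely the dilemma.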

At the end of the day, we’re back to externalities:
  • Those recommending “don’t pay” don’t bear the cost of the advice: lost data, etc…
  • Those choosing to “pay” don’t [immediately] bear the indirect cost of enabling the criminals to continue their efforts.

A more realistic approach to handling of ransomware should keep these in mind.

“Thy ransomware’s not accidental, but a trade.”

There seems to be consensus that what has enabled the rise of ransomware is, among other things, the maturity of bitcoin. That was the point clearly made by @jeremiahg and @DanielMiessler (here). I agree: bitcoin seems to have tipped it, along with other changes to the overall ecosystem that appear to have made ransomware a more viable attack.

Like legitimate businesses, criminals have explored ‘efficiencies’ in their supply chain. As the main example: bitcoin (the peer-to-peer payment network, not the currency itself) has removed significant “friction” from the system. Whereas before the cashout scheme might include several steps – all of which incurred fees for the criminal – the ubiquity of bitcoin has made the cashout process faster and cheaper. Taking out the middlemen, if you will.

Regarding bitcoin specifically, there’s a couple of interesting points:
  • More than being anonymous “enough”, bitcoin is a reliable and fast payment system. Even though it doesn’t provide full anonymity – the transactions on the blockchain can be traced to wallets – bitcoin is sufficiently opaque that the tradeoff of limited traceability against ubiquity/speed made it the currency/payment system of choice.
  • This leaves an interesting question about the bitcoin exchanges: Can we expect the exchanges to work against their own self-interest in restricting these transactions? What sort of defensive approach can we expect the exchanges to take? The danger of people equating bitcoin with ransomware is real, and the industry is right in defending itself.

All in all, from looking at the underground ecosystem, it looks like ransomware is a ‘killer app’: profitable, easy to use, etc…

“Much Ado About Nothing”? Maybe…

Finally, ransomware seems to have exploded into our collective attention, but is it really such an epidemic? While we deal with the onslaught of news/articles/posts about ransomware (including, of course, this post…), let’s recognize that there is very little incentive to “underreport” ransomware infections. To wit:
  • InfoSec vendors can point to ransomware as the new ‘boogeyman’ that every organization should spend more resources to protect against.
  • Internally within organizations, like with “Y2K”, “SOX”, and “PCI” before it, we can now possibly start to see “ransomware” as the shibboleth that enables projects to be funded.
  • Media sites latch on to the stories, knowing the topic draws attention. As an example, a lot has been made of the incident where a Canadian university opted to pay $20,000 CAD. Would there have been such bombastic coverage if the cause of the loss was, say, a ruptured water main caused by operator error? Not likely…

I can’t help but wonder if this is not a manifestation of a couple of things:
  • one, a variation of the Principal-Agent problem: an economic transaction where there is an expectation that an agent will act on behalf of a principal, but the agent instead acts for their own benefit. In this case, bolstering the issue of ransomware above and beyond other relevant topics.
  • two, just your garden-variety ‘availability bias’ from behavioural economics, where the ease with which we recall something inflates its perceived rate of occurrence.

In either case, we can take a peek at the well-known Verizon Data Breach Investigations Report. What do we see? Verizon’s DBIR shows that ransomware, even as a subset of crimeware, is not as prevalent as other attacks. See figure 21 on page 24 of the 2016 report.

 
“’Advice’ once more, dear friends, advice once more”

Wrapping up, then. There is a fantastic paper by Cormac Herley, from Microsoft Research – So Long, and No Thanks for the Externalities – in which he discusses how users ignoring security advice can be the rational economic decision, once the costs of acting on that advice are taken into account. The paper is from 2009 and is still extremely relevant. I consider it mandatory reading for any security professional.
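
To see why, here’s a back-of-the-envelope version of the paper’s argument in code. The numbers are made up for illustration, but they echo the shape of Herley’s reasoning:

```python
# Back-of-the-envelope: aggregate cost of following a piece of security
# advice versus the losses it plausibly prevents. All numbers are invented.
USERS = 200_000_000           # population asked to follow the advice
MINUTES_PER_DAY = 5           # daily effort the advice demands
HOURLY_VALUE = 15.0           # value of a user's time ($/hour)
HARM_AVOIDED = 2_000_000_000  # generous estimate of annual losses prevented ($)

annual_effort_cost = USERS * (MINUTES_PER_DAY / 60) * HOURLY_VALUE * 365
print(f"aggregate effort cost: ${annual_effort_cost:,.0f}")  # ~$91 billion
print(f"harm avoided:          ${HARM_AVOIDED:,.0f}")        # $2 billion
# When the collective cost of compliance dwarfs the losses prevented,
# ignoring the advice is the economically rational choice.
```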

Taking that into account, how should we frame security advice about ransomware?
(One could argue that ransomware is exactly the kind of change in costs that invalidates those conclusions. Might be an interesting avenue to pursue…)

At least to me, too much of the security advice we see about ransomware is not taking into account the aggregate cost of acting on such advice.

Ultimately, the protection methods have to be feasible to implement. With that in mind, here are a few recommendations.

For individuals:
  • Be aware of your own limitations and biases as you interact online. To the extent that it is possible, incorporate safe practices.
  • Leverage the automated protections you have available – modern browsers have sophisticated security features, your email provider spends a ton of resources to identify malicious content, etc…
  • Devise and implement a backup system that fits your comfort level, balancing the frequency of backups with their associated hassle.
  • Periodically check and possibly reduce your exposure by moving content to off-line or read-only storage. Just like you wouldn’t walk around at night in a risky neighbourhood with your life savings in your pockets, make it a practice to limit how much data is exposed.
  • If infected, don’t panic. Keep calm and, if you choose to do so, act promptly to avoid the increases in demands.

For corporate users, similar advice applies, boiling down to “don’t base your security architecture on the presumption that users are infallible at detecting and reacting to security threats”. Back it up with technology. On a tactical level, a few extra things come to mind:

Prevention
  • Verify that current perimeter- and endpoint-based scanning of executables/attachments is able to identify/catch current strains of malware (ask your vendor, then check to make sure). It might be a sandbox approach, endpoint agents, gateway scanning, whatever. Belt & suspenders is a good approach, albeit costly.

Detection
  • Consider application-level monitoring for system calls on the endpoints. This includes watching for known ransomware extensions, as well as suspicious bulk changes to files. (A rough sketch of this idea follows after this list.)
  • Consider monitoring data-center activity for potential bulk file changes, such as mass encryption. Yes, there may be false positives.
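
To illustrate the detection idea above, here’s a rough sketch of a naive polling monitor in Python. It’s a toy: the extension list and threshold are invented, and a real deployment would hook OS file events or rely on an EDR/endpoint product rather than polling:

```python
# Rough sketch: poll a directory tree and flag (a) files appearing with
# extensions known to be used by ransomware, and (b) bursts of file
# modifications that could indicate bulk encryption.
import os
import time

SUSPICIOUS_EXTENSIONS = {".locky", ".zepto", ".crypt", ".encrypted"}  # illustrative
BULK_CHANGE_THRESHOLD = 100  # files changed per interval before alerting

def snapshot(root: str) -> dict[str, float]:
    """Map each file path under root to its last-modified time."""
    mtimes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.path.getmtime(path)
            except OSError:
                pass  # file vanished between listing and stat
    return mtimes

def monitor(root: str, interval: int = 60) -> None:
    before = snapshot(root)
    while True:
        time.sleep(interval)
        after = snapshot(root)
        changed = [p for p, m in after.items() if before.get(p) != m]
        bad_ext = [p for p in changed
                   if os.path.splitext(p)[1].lower() in SUSPICIOUS_EXTENSIONS]
        if bad_ext:
            print(f"ALERT: known ransomware extensions seen: {bad_ext[:5]}")
        if len(changed) > BULK_CHANGE_THRESHOLD:
            print(f"ALERT: {len(changed)} files changed in {interval}s")
        before = after

# monitor("/srv/shares")  # example invocation
```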

Response/Containment
  • Revisit the practices that allow users to mount multiple network shares.
  • Make sure the Incident Response playbooks include ransomware as a specific scenario. Prepare for a single-machine infection, multiple machines hit, as well as the scenario where both local and networked files are encrypted. While I’m skeptical of survey data such as this, getting familiar with how bitcoin transactions work might be a worthwhile investment.

To me, ransomware is here to stay: it leverages too many human and economic aspects to simply vanish. As with many other “security” issues, this is just another one that was never just a technology problem, but a social and economic one. InfoSec professionals should keep that in mind, remembering that the solutions are not always technical…

And to purveyors and enablers of ransomware, “a plague on your houses!”

A question about Behavioural Economics

I just came back from a wonderful public event organized by the Behavioural Economics in Action at Rotman (BEAR) team, featuring Richard Thaler, the famed economist who helped launch behavioural economics.

Attendees received a copy of his latest book – Misbehaving, which I’m making my way through (about 50% done) and now own in paper, digital, and audio formats… – and had the opportunity to hear him talk for about an hour, sharing anecdotes about behavioural economics and answering a few audience questions.

I tried to ask a question, but was not picked from the audience. So, I’m documenting it here and hoping readers can help me with pointers or answers (of course, I’d be thrilled if Prof. Thaler would address it himself).

My question[s] – with some background but hopefully not annoyingly “monopolizing the mic”:

On one side of the spectrum, we know that behavioural factors play a huge part in individuals making transactions – choosing to donate organs, saving for retirement, … On the other, we see high institutional ownership of shares and, to my knowledge, the significant majority of stock trades being either algorithmic or at least “professional”, which we expect to fall under the purview of efficient markets, etc…

At which point in this “spectrum of rationality” do things change? At which point – for what kind of problem – should we stop favouring behavioural factors over traditional ones?

This is relevant to my interests in information security as we determine which kinds of programs or actions should be treated as more “behavioural” and which as more “rational”. At which point should the actions of agents be modeled one way or the other?

I’m always attempting to learn more, so maybe this is just a naïve question that’ll be answered further along in my studies, but I would love to hear insights.

Any ideas? Comment below or reach out to me.

Thanks!

 

A skeptical stroll through the RSA expo floor

So, the RSA conference – sessions, keynotes, expo, parties, etc… – wrapped up last week. I’m still working on summaries for the sessions I attended, but I wanted to discuss something else: the influence/persuasion techniques on the expo floor.

DISCLAIMERS:
  • I did not watch the keynotes, so I may have missed any specific set-up done by the larger vendors in their original pitches.
  • The company I work for did not have a booth, so my skepticism might seem self-serving. Besides assuring you it is not, not much else I can do…
  • Reminder: As always, opinions are my own 🙂
Before I jump into this post, a shout-out to Andrew Plato from the Anitian blog for a great blog series on the conference. Highly recommended! His crisis of leadership post is pure gold! I hope my contributions on economics are but a small nudge in the right direction.

(Also, Dr. Anton Chuvakin from Gartner had a great post on his take on RSA as well 🙂 )

By now, it should be obvious to many of us that expo floors are really meant to influence visitors. This post is meant to bring this to light, through a point-by-point example of how common influence principles are applied.

Why write it? Because not many in InfoSec think consciously about these kinds of influences. I strongly believe we can all benefit if we understand how these games are played, and can then spend our efforts on creating *and* deploying secure solutions.

Before getting into it, a little background. Robert Cialdini’s “Influence: The Psychology of Persuasion” is one of the most influential books I’ve ever read (yes, pun fully intended). In it, Cialdini – a noted researcher on persuasion – describes 6 “weapons” of influence that are often used. These are elements that tend to lead to higher compliance with a request. They include:
  • Authority – a request or message coming from someone of [perceived] authority yields better compliance. Think ‘people in lab coats discussing medical products on TV’.
  • Scarcity – if something is framed as being in short supply (units or time), or otherwise restricted, it will be more persuasive. “Only good for 24hrs!” kind of messaging.
  • Liking – if the person or entity requesting something is someone we “like”, we tend to comply a lot more often.
  • Social Proof – the example (real or not) of someone similar to you doing something resonates extremely well.
  • Reciprocity – should you receive a ‘gift’ from someone, your receptiveness to their requests increases significantly.
  • Consistency – finally, if someone is able to frame a request in a way that is compatible with how you perceive yourself, there’s a higher likelihood you’ll comply.
With that in mind, let’s take a stroll through the expo floor…

Elements exploiting the “Authority” principle:
  • Suits. Suits everywhere. Anyone working in a “senior” capacity in business development, sales, etc… was likely wearing a suit. Some of the smaller booths had senior people in the standard booth uniform, but to me that was meant to signal something else – that the company has enough people – so it’s understandable.
  • An interesting observation on authority. As I walked the floor, I looked at the wording and visual aspects of the various booths. Larger booths from more familiar brands had very clear messages that were just the brand itself or basic functionality about their offering (“DDoS Protection”, “Malware Analysis”, “User Behaviour Analytics”, …). Smaller booths – disproportionately housing smaller companies – however, had much more emphatic messages: “Leader” in this, “Complete Security” in that, “Best of” whatever. This, to me, is a clear appeal to authority.
    Funnily enough, though, there were at least two exceptions that I thought were noticeable:
  • A very prominent software vendor had a relatively large floor presence in the North Hall, but carried the same “look at me” style of messaging by calling themselves “the global leader in…”.
  • A very large software company comfortably situated in the Fortune 100 list had a *tiny* booth in the North Hall, alongside upstarts. It also had the same messaging as the upstarts (“Maximum security”). Frankly, if they couldn’t afford to pay for at least a mid-sized booth, what were they even doing there?
Scarcity was also easy to spot:
  • Every vendor was unique. Vendors seem to dislike being framed in the same category as others. Every one has a peculiar element that makes them unique. This is extremely useful when trying to exploit ‘scarcity’ as a trigger. “We’re the only UBA with strong crypto analytics and threat intel feeds”, or something along those lines. If you believe that vendor to be unique, how will you consider alternatives?

Liking is inherent to a trade show:

  • You’d be hard pressed to find a “sad” face on the entire expo floor. Sure, some organizations (such as government agencies or non-commercial entities such as business development offices) may have less appetite for easy banter, but mostly everyone else was “happy”.
  • Liking also extended to the vendor allowing you to do nice things, such as going ‘Office Space [slightly NSFW]’ on older equipment, shooting Nerf guns, or letting you meet a trendy actor.

Reciprocity is quite easy to pick up on as well:
  • Conference Tchotchkes/Trinkets. From Star Wars lightsabers, to USB fans, to drones, to stress balls, to pens, … one could fill volumes of luggage with all the giveaways. They are a clear appeal to reciprocity, along with the drinks/popcorn/… served throughout the expo floor. Personally, I liked the popcorn. 🙂
  • Conference Events/Parties. Not only can you enjoy the giveaways on the expo floor, you can also join your vendor for a bash afterwards.
Appeals to Social Proof and Consistency:
  • Social Proof was on display in every mention of how many thousands of people attend the conference, as well as in the consistency of the overall materials – from the Norse lanyards to the many “(ISC)2” ribbons attached to the badges. The message is clear: “you’re all part of the same community”. Not a bad message overall, of course, but also a nudge: if people are looking at a particular demo/booth, hey, you’re not so different from them and maybe you should too…
  • Consistency seems to come afterwards. After you scan your badge at the booths – either as a condition to get the aforementioned trinket or just because you’re around watching a demo – the inevitable post-RSA email arrives: “You visited our booth and had interest in our solution. How would you like to schedule a sales call/demo?” (thanks to @MeneghelAna for helping me dissect this usage).

 

Just rounding up random observations:
  • Quite a few vendors – large & small – had presence on BOTH North and South expo halls. Marketing budgets must have been plenty this year…
  • Lots of ‘Endpoint’ solutions, alongside ‘Analytics’.
  • Too many ‘pew pew’ maps, including in 3D!

 

So, in essence, a skeptical walk through the expo floor sees many examples of influence. Be aware (and beware…) of it, at RSA and elsewhere.

Lots of people (particularly in our echo chamber) have very negative opinions on the conference. I’m not one of them. I really like the opportunity to learn interesting perspectives from the sessions (sure, some may be ‘basic’, but we’re not all experts at everything, are we?) and I *love* the opportunity to catch up with people I only see at conferences.

That being said, I struggle to find value in the expo floor. Sure, it is a great arena to run into folks, but for other interactions (looking at new products/technologies, chatting up with your friendly vendor, …) there are better options, IMHO.

This is no longer the age of COMDEX.