My conference tracking system

This post is about the mechanics of my conference tracking system. I was asked about it on Twitter and finally took the time to write things down.

(updated: added new effort category and some guesses for monetary costs)

For the rationale/justification behind how I do things, I may write a follow-up post. For now, a couple of points:
  • “Your mileage may vary”. This is how I chose to do things, based on my interests and goals.
  • The list of events is wholly dependent on my interests and location (Toronto, Canada). Your list will almost certainly be different.
  • If you want to plan your process around proposing talks at events, I’ll write my version later, but for now take a look at Andrew Hay’s excellent posts on how he does it. Start here and then here for part 2.


Without further ado, how do I keep track of conferences?


Basically, I have a tracking spreadsheet and I update it both ad-hoc and periodically.
People have asked me to share my spreadsheet. Here it is, hopefully in read-only mode:


This is my version, shown here for example only. PLEASE don’t request edit access, instead, use it as a basis for your own version.

Ok, sure, a spreadsheet. What’s in it?
Here’s a screenshot: 
The spreadsheet has one row for each event instance, then several columns.


Important: this is for tracking mainly public events. Private events (‘analyst days’ with vendors or vendor-specific user events) are usually NOT tracked. Again, this is what works for me, YMMV.


Over time, the Google spreadsheet has grown to include the following columns:
  • Quarter – 1/2/3/4, just to get a sense of when in the year I should expect the conference to happen
  • Name/Link – Name of the conference as text, hyper-link to main conference site (current or last edition)
  • Begin/End dates – self-explanatory, but noting that this is for the main conference, not pre/post training, which I never attend
  • City – Current/last location for the conference
  • Region – I like to remind myself of general location – US or Canada, East, Central or West, etc…
  • CFP Date – *if* I consider submitting, a general date for when I should do something about it. This column uses custom formatting. See comments below.
  • Effort – My estimate of effort to attend. Comments below.
  • Notes – Miscellaneous notes on the conference
  • Attend – Estimated attendance
  • Type – My classification for the conference: local event, regional, niche, or global.
I also included a scoring system. These fields are completely subjective, rated on a 0-3 scale (3 is highest).
  • Insights – do I expect to acquire knowledge relevant to my current interests? Unique knowledge? Forward thinking? What I may rate as 1 someone else might say 3. YMMV!
  • Prospects – I don’t do sales in my role as industry analyst, but that doesn’t mean I don’t want to meet potential clients (enterprises or vendors) for my firm.
  • Meetings – for existing firm clients, do I expect to meet them and have briefings? For sure at events like RSA and BlackHat US, less likely at others.
  • Community – is the event related to a community I want to be part of or actively support? It may be a great local event, but if I can’t be a part of it, it may score lower.
  • Interest – this is a bit controversial, but given my own interests, for whatever reason, I may want to tilt scales a bit 😉
  • Speaking – my interest in speaking at the event.
Finally, the ‘score’ – each of these scoring fields has a customizable weight, and the weighted values are added up to produce a score for that conference.
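In code, the scoring could be sketched as a weighted sum like this (the weight values below are purely illustrative, not my actual ones; in the spreadsheet this is just a row of weights and a sum formula):

```python
# Hypothetical weighted-score calculation, mirroring the spreadsheet's
# sum of (subjective 0-3 rating) x (customizable weight).

# Weights are illustrative, not my actual values.
WEIGHTS = {
    "insights": 3,
    "prospects": 2,
    "meetings": 2,
    "community": 1,
    "interest": 1,
    "speaking": 2,
}

def conference_score(ratings):
    """Weighted sum of the 0-3 subjective ratings."""
    return sum(WEIGHTS[field] * rating for field, rating in ratings.items())

# Example: a strong local community event
ratings = {"insights": 2, "prospects": 1, "meetings": 0,
           "community": 3, "interest": 2, "speaking": 1}
print(conference_score(ratings))  # 3*2 + 2*1 + 2*0 + 1*3 + 1*2 + 2*1 = 15
```

Tweaking the weights is the whole point: raise “speaking” when you’re actively pitching talks, raise “meetings” when client time matters more.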
I also manually color code my sheet, as follows:
  • Default (no color): something I’m interested in keeping track of, may develop into something to attend or propose a session to.
  • Yellow: something I’m actively trying to attend, either by negotiating with my manager and/or making plans to attend solo/out-of-pocket.
  • Green: something I’m scheduled to attend or have very firm plans to do so.
  • Orange: conflict. Something that, this time around, means I’m not able to attend for whatever reason.
Additional comments:
  • I use custom formatting for the CFP date column: it reminds me a few weeks before that date, and whites out once the date has passed. That is the date I should do ‘something’ about it: either deliver a proposal, or at least check the website to see if anything has been posted about the CFP process. Many smaller events only open their CFP closer to the event, while other events open – and close – their CFP many months in advance.
  • My ‘effort’ column represents my estimated effort to attend, as follows:
    • 0 – local event, in the evening, minimal effort to attend. Estimated cost < $100
    • 1 – local event, during the day, meaning I’ll have to plan work schedules around it. Estimated cost < $100 plus time
    • 2 – non-local event, but driving distance from Toronto. Multi-day effort. Estimated costs < $500 plus time
    • 3 – non-local event, relatively short flight away from Toronto. Multi-day effort, starts getting expensive. Costs in the $500-$1000 range
    • 4 – non-local event, significant commitment of time and resources to attend. Costs in the $1000-$2000 range
    • 5 – outside North America, huge time/resources commitment. Expected costs > $2000.
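The CFP-date formatting described above boils down to a simple date check. A sketch (the three-week reminder window is my assumption for “a few weeks”; tune it to taste):

```python
from datetime import date, timedelta

# Sketch of the CFP-date conditional formatting: highlight within a
# reminder window, "white out" once the date has passed. The exact
# three-week window is an assumption.
def cfp_status(cfp_date, today, reminder_weeks=3):
    if today > cfp_date:
        return "passed"    # whited out in the sheet
    if today >= cfp_date - timedelta(weeks=reminder_weeks):
        return "reminder"  # highlighted: time to do 'something' about it
    return "waiting"

print(cfp_status(date(2018, 3, 1), date(2018, 2, 20)))  # reminder
```

In Google Sheets the same logic lives in two conditional formatting rules comparing the cell against TODAY().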


All this effort. Why? What do you do with this?
Well, I use it to be aware of when I need to plan for CFP or attendance, if I really want to go.
I use it to balance out conference attendance across the year.
I use it to identify obvious conflicts as I sort out by date. Highly unlikely I’ll be able to do two of DockerCon, Cyber Week, and WEIS, for example. (not sure I’ll do even one of them)


So, how do I keep it updated?
Regularly – at least on a monthly basis (I get reminders) – I do the following tasks:
  • Order the sheet by beginning date.
  • For any date that has passed, rewrite begin/end dates as TBD
  • For any existing entry with TBD date (including the ones just changed), poke around – conference site, Twitter, … – to check if dates for next editions have been announced, and update the entry.
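That monthly pass is mechanical enough to sketch in code (the row layout and field names are made up for illustration; my version is just spreadsheet sorting and manual edits):

```python
from datetime import date

# Sketch of the monthly pass: sort by begin date, then mark past events
# as TBD (None) so the next pass knows to research new dates.
# Row layout and field names are illustrative; mine is a spreadsheet.
def monthly_pass(events, today):
    events.sort(key=lambda e: e["begin"] or date.max)  # TBD rows sink to the bottom
    for e in events:
        if e["end"] and e["end"] < today:
            e["begin"] = e["end"] = None  # TBD: look up next edition's dates
    return events

events = [
    {"name": "ConA", "begin": date(2017, 5, 1), "end": date(2017, 5, 3)},
    {"name": "ConB", "begin": date(2017, 12, 1), "end": date(2017, 12, 2)},
]
monthly_pass(events, today=date(2017, 10, 1))
print(events[0]["begin"], events[1]["name"])  # None ConB
```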
On an as-needed basis (i.e. not monthly), I will update the sheet as follows:
  • Add new entries based on personal interest – usually something I got from Twitter/RSS/LinkedIn/…
  • Revise scores based on new information or changes in preferences
  • Remove entries that don’t interest me anymore – maybe I changed focus, maybe the event is no longer running, etc…
  • Check for conflicts with other commitments and mark off as needed.
  • If I feel like it, run through the date update process I mentioned above.
Just as an example, as I was writing this post, I made changes that included:
  • Added regional RSA/BlackHat events, even if only for tracking purposes.
  • Moved the Kaspersky SAS event to TBD (not sure yet when/where next year’s will take place)
  • Marked some October events as conflicts based on my work calendar


Anything else?
Yes, there’s a second sheet where I add estimates of travel costs. This is just for rough calculations, but I calculate lodging, transportation and local costs (not including conference fees) based on how long I expect to attend. This list is updated as needed, and includes estimates based on looking up mid-cost hotels and airfare on Google Flights from Toronto, at least 4 weeks out.
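The arithmetic behind that second sheet is simple. A sketch, with all rates as illustrative placeholders (I look up real hotel and airfare numbers instead):

```python
# Rough trip-cost estimate, as in the second sheet: lodging plus
# transportation plus local costs, excluding conference fees.
# All rates here are illustrative placeholders.
def trip_cost(nights, hotel_rate, airfare, daily_local=75):
    # One more day of local costs than hotel nights (travel days)
    return nights * hotel_rate + airfare + (nights + 1) * daily_local

print(trip_cost(nights=3, hotel_rate=180, airfare=400))  # 3*180 + 400 + 4*75 = 1240
```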


Would love to hear any feedback, good, bad, ugly. Too OCD? Too FOMO? Yes, guilty as charged. Can it be improved? I’m sure it can, let me know how.

Scores for a particular conference are too high or too low? These are MY scores based on current interests and understanding of that conference. You’re free to disagree. The objective for this post was to discuss the methodology, not any specific results.

Thanks for reading!

Adapt or die


I just came home from SecTor 2017, held here in Toronto. It’s Canada’s largest security event, and Brian Bourne, Bruce Cowper, and team have pulled off another fantastic event.

As I contemplate what I heard, there’s a message growing in volume in our industry: change. We heard it during the summer with Alex Stamos’ BlackHat keynote, we see it daily on Twitter with people like Jessy Irwin, Wendy Nather, and others taking a more user-friendly approach to security, and I saw it clearly across a few key sessions at SecTor.

On Monday, as part of the Canadian summit of the Cloud Security Alliance, Rich Mogull advocated for and clearly demonstrated the benefits of adopting new paradigms for security of cloud-based workloads. From using message queues for communications between Web and App tiers to the impact of immutable infrastructure principles to do version updates, it is astounding how a modern cloud-based architecture can completely bypass key security challenges such as lateral movement and patching concerns.

On Tuesday, Chris Wysopal from Veracode led a master class in understanding the role of security within software development methodologies, including Waterfall, Agile, and DevOps. He skillfully articulated the challenges facing those looking to add security to projects – from slowing down projects to competing with other business priorities, among others. Importantly, he proposed very clear improvements for Agile and DevOps scenarios, by embedding security expertise (NOT people) within Dev teams, and supporting these champions with specific guidance and tools. I tweeted about it at the time: one of the best presentations I’ve ever seen, period.

The two Wednesday keynotes were fantastic.

Bruce Schneier led us through an understanding of the broad changes we’ve seen in security and technology. Suddenly, as the ‘rest of the world’ grapples with issues that the security industry has been dealing with for a long time, our expertise is valuable, and we should put it to good use by working on these problems with government involvement. It was a broad talk, touching on key topics such as the failure of market mechanisms to address the externalities of poor security practices, the cross-jurisdictional nature of technical problems, and a fundamental clash of paradigms. On one hand, we (the IT industry) adopted a paradigm of ‘change things quickly’, and that led to the successes we see today, including the massive penetration of technology in modern society. However, much of that society operates under a different paradigm – ‘do it right the first time, and don’t touch it’ – as we build public services and infrastructure meant to last 10, 20, 50 years or more without being changed. It’s a fundamental conflict that can’t be easily resolved. He wrapped up with a clear call for us to be more involved in policy discussions, to help government craft policies that are helpful and realistic.

The second keynote of the day was Allison Miller – of whom I am an unabashed fan, not only of her ideas and experience as a multidisciplinary professional, but of her easy-going style and wicked sense of humour. She spoke about the broad reframing of security objectives, from “not losing” to “winning”. She wove together a broader outlook for security, tying together essential lessons from game theory, behavioural economics, and data science. She articulated the notion that security is not necessarily about the never-ending cycle of taming vulnerabilities and that “we cannot live by breach alone”, but that it is about the much more impactful and achievable objective of protecting our user communities, at scale. Jaw-dropping clarity.

Tying these talks together, the message is so clear it hurts: our industry needs to level up. We need to understand that the game we’re playing is not purely technical – it’s economic – and that it is constantly played across stand-up meetings, hackathons, budget discussions, courthouses, and more.

Not easy, not quick, not painless, absolutely not “just”, but essential.

Converge and BSides Detroit 2017

This past week I had the privilege of presenting at both Converge and BSides Detroit. It was great to see the energy and commitment from the local community, as well as the practical and insightful content from so many presenters.

Thanks to the organizers! It was also great to have video recordings from Irongeek. Converge videos and BSidesDetroit videos are already available.

This is just a quick post with the links to the content from the sessions I delivered:

  • Converge – The 4 Eyes of Information Security. TL;DR: An introduction to the “4 Is” framework from ClearerThinking and some example applications to defensive infosec. Slides. Video.
  • BSides – Navigating Career Choices in InfoSec. TL;DR: A description of useful career planning concepts and methods, referencing Wardley maps, PersonalMBA, Cal Newport, and more. Slides. Video.

I’d love to hear your feedback. Comment here, reach out on Twitter, LinkedIn, etc…


The “Four Eyes” in Information Security

This is not a post about:
  • Middle-school-level verbal abuse of those who wear glasses.
  • The notion that a specific transaction must be approved by at least two people.
  • Clever wordplay about the dynamics of the relationship between the “Five Eyes” nations – US, Canada, UK, Australia, New Zealand – as it relates to surveillance and any recent bad blood between leaders.

Rather, it’s about a different way of looking at problems. Two questions inspired this post:
What if we broaden the arsenal of tools/methods we use for making progress with security initiatives?

What if we have been misdiagnosing issues and thus keep applying the wrong remedies?

ClearerThinking is an initiative/site/movement/… founded by Spencer Greenberg, a mathematician and entrepreneur, with a simple mission: “help people make better decisions”. I came across his site as I researched cognitive biases in the context of behavioural economics, and have been an avid reader ever since. He’s got tonnes of material on all sorts of topics, from helping to cross the political spectrum, to evaluating how rational you are, to – one of my favourites – an analysis of just how much your time is really worth. If you have the time, you might want to take a look. If you don’t have the time, then you absolutely must…

In “Learn a simple model for tackling complex problems”, Spencer describes the “4 Is” framework that he recommends when looking at issues. His post includes a link to a short video of a presentation he gave on the topic. In essence, his message – advice to other entrepreneurs – boiled down to:

When looking at a “persistent problem” (something that is important, looks insurmountable, and has not yet been resolved), it is critical to understand where others have failed. This can apply both at a societal/world scale, as well as within organizations. The failure will usually derive from one (or a combination) of the following causes:
  • Individuals or groups were not exposed to the right incentives – positive or negative – to solve the problem.
  • There is ignorance about how to handle the problem, or information was missing that kept the process from continuing.
  • While other elements were in place, the initiative had a severe lack of resources due to limited investment into the issue.
  • Finally, while all elements might have been in place, human irrationality – through cognitive biases or a poor decision-making process – impeded action.
Hence, the “4 Is”: incentives, ignorance (or information), investment, irrationality.

Once the issue has been properly diagnosed, then there are different types of remedies for each:
  • Incentives. Well, create the right incentives: these can be positive (monetary rewards, recognition, etc…) or negative (introduce regulations/rules). There’s a famous example of how FedEx solved issues with delivery delays by changing the compensation model for its workers, rewarding them for finishing the job faster.
  • Ignorance. In this case, identify how to provide the additional information. Is it a matter of simply educating the participants about something they didn’t know or thought incorrectly? Spencer uses the example of AIDS-prevention campaigns that fail because local participants held wildly incorrect views about how contraceptives work. Or is it a matter of the information needed not existing in the first place? In that case, the answer might be basic research, or data collection outside of the organization.
  • Investment. Here the answer is, quite simply, find ways to redirect more resources to the problem. It might be an issue of justifying additional budgets, or perhaps redirecting resources from elsewhere. The example Spencer uses is poignant: depending on your values, you should care that a lot more resources are spent saving pets than non-pet animals from cruelty. Should the money and attention be redirected?
  • Irrationality. Finally, the way to address human irrationality (be it cognitive biases or flaws in decision making) can include the use of checklists (to reduce mental strain during stressful times), or proper design of system defaults. This is what behavioural economics practitioners refer to as “choice architecture”, and there are fantastic examples of its effect with organ donations and medical prescriptions.
To be clear, it might be that a particular issue results from a combination of these, but without applying this kind of clearer thinking, we’re bound to miss out on addressing the problem.

This is a great framework for looking at problems. I love it!

Applying it to Information Security

To me, the application of the “4 Is” framework to security is direct, simple, and essential. Let’s look at a few scenarios:
  • Effectiveness of Security Awareness training. Security awareness is a common component of security programs, but it is often structured heavily on the “information” side of things. Could the answer to better security behaviour be better incentives (again, positive or negative)? Or perhaps the matter is irrationality, and we need to review the choice architecture (defaults)?
  • Deficiencies in rolling out patches. Is poor patch deployment a matter of information (teams don’t know when to roll them out), investment (it’s too onerous to roll them out using current mechanisms), incentives (no one other than security cares, “don’t fix what ain’t broken” mentality), or even irrationality (patches sit too low on the checklist of things to do)?
  • Non-compliance to internal or external requirements. This is another area where a deeper look into the issue using the 4 “I”s framework can yield interesting results. In many cases, we seem to jump to conclusions and infer the cause of failure from our pre-conceived notions. Is that really the case?

The list goes on. We could cover software quality/security, risk management, technology adoption, security culture, …

Moving forward

I really like looking at other areas of knowledge for how we can apply their learnings to information security. This post was an example of that.

Hopefully, this post gives you another tool in your toolset when going about your work with security.

When looking at a security issue, think it through: how much of it is a matter of incentives, information, investments, or irrationality? The answer might not be obvious, and working through the question will likely help you …

O’Reilly Security Conference

Disclaimer: I was a speaker at the conference. As such, O’Reilly Media covered most of my travel expenses, as well as provided me with a Speaker pass. If you think such benefits, nice though they were, had a significant impact on my opinion, to me it just means we don’t know each other very well yet. Trust me when I say that they do NOT… Happy to discuss as needed…

TL;DR: The experience of being part of the inaugural O’Reilly Security Conference was amazing. The content I watched was excellent, the venue/logistics worked really well, and I really liked the “vibe” at the conference. 10/10!


Source: O’Reilly Media – click for license details.

This longish post is about my experience at the O’Reilly Security Conference. I summarize what I learned from each session I attended, as well as general opinions. I can’t think in prose, so this is mostly in list format. Without further ado:

Format and Venue

  • 4-track conference, held at the New York Hilton Midtown.
  • Pre-conference training and tutorials, an Ignite session, then 2 days with morning keynotes followed by morning and afternoon sessions.
  • Good breaks in between sessions (ranging from 15 minutes to 1 hour)
  • No idea on attendance, likely in the mid hundreds.

Tutorials and Ignite

I attended Jim Manico’s half-day tutorial on “Modern IdM” hoping to learn more about Web authentication and I was not disappointed. He covered OAuth in detail, as well as session management, and recommendations around password storage. He’s a very energetic and engaging speaker, and time flew by.

The afternoon was reserved for the Apache Drill tutorial led by Charles Givre, from Booz Allen Hamilton. Charles took us through the rationale for Apache Drill – basically a SQL-supporting unifying front-end for disparate back-end data stores – and led exercises on data manipulation. Drill can be a fantastic tool for a data scientist to easily get at disparate data sources.  I’m a SQL newbie and struggled with some of the exercises, but that is on me and not on the tutorial. He also based the exercises on a pre-configured VM that has other data science tools. This will come in very handy…

In the evening, Jerry Bell and Andrew Kalat hosted the Ignite talks (lightning fast talks with auto-advancing slides). Jerry and Andrew host the Defensive Security podcast , probably my favourite security podcast. It was a privilege to chat with them. The talks were interesting, ranging from the need to shy away from hero-focused security work, to how we can do better at training/education, to the use of existing intelligence/data sources. Great talks, easy-going format.


Then there was… karaoke… For those who are not familiar, “slide karaoke” is a fun-filled/terrifying (depending on your point of view) format where someone is presented random slides at a fixed time interval and they have to “improv” their way to a somewhat coherent talk structure. Andrew and Jerry asked for 5 volunteers…. and I was one of them….

I don’t quite remember what all my slides were, but there were references to llamas, some sort of potato-based disease, and rule breaking. 🙂  I’m just hoping I made it entertaining for the audience…

Lesson learned: Courtney Nash is a master at this: she was funny, coherent, engaging, … She’s a very tough act to follow, which just happened to be my spot in the roster… You have been warned 🙂

Seriously, though: it was great fun, and I hope others join in. It was a great environment, people were having fun, and part of being in this industry is this sense of community that we build. It was a privilege to be able to take part in that.


On day 1, following the intro from Allison Miller and Courtney Nash, Heather Adkins from Google kicked things off by showing us how some of the main classes of security incidents – be they insecure defaults, massive theft, or instability – have been happening in different forms since the 1980s. After pointing to the increased siloization (sp?) of our industry as a possible cause, she urged us to think about broader platforms, and to design with a much longer timeframe in mind.

Richard Thieme took us through a sobering view of the psychological challenges in our career. Drawing parallels to the intelligence community and the challenges faced there, Richard rightfully reminded us to stay mindful of our needs as individuals and to build adequate support networks in our lives.

Becky Bace did a great job of comparing the challenges of infosec today with the early days of the auto industry, and how we can use some of the lessons learned there to improve it. Given my interest in economics and incentives, I was silently clapping pretty much all the time.

Unfortunately I missed most of the day 2 keynotes – I look forward to watching video later. What I did catch was the latter part of Cory Doctorow‘s impassioned and cogent plea for more involvement from us as individuals into the immensely important debate about the very nature of property and democracy in modern society. There are key discussions and precedent-setting court cases taking place now, and many of the key societal instruments we take for granted are at risk.

Day 1 Sessions

Speak Security and Enter. Jesse Irwin led a great session focused on how to better engage with users when it comes to discussing security and privacy. She laid out very well defined steps for improving. If I could summarize her session in one idea, it would be: have more empathy for your user community. From using relatable examples, to framing the issue positively or negatively, and many other suggestions. Hearing her tell of the adventure of teaching security to 8-year-olds was priceless!

Notes from securing Android. Adrian Ludwig from the Google Android team took us through a data-driven journey into the Android security ecosystem. After reminding us that Android security must accommodate from $20 phones to modified units used by world leaders, Adrian focused on three aspects: active protections made by the Google ecosystem, options available for enterprise decisions (such as allowing external app stores or not), and details about the Android OS itself. He made a very compelling case that the security architecture of a modern Android-powered device such as the Google Pixel rivals what other options exist in the mobile ecosystem (iOS, WindowsPhone). This was one of the best talks I attended.

Groupthink. Laura Mather has had a very interesting career, including time at the NSA, eBay, founding SilverTail (where I had the pleasure of working for her), now leading Unitive. Her talk was not a ‘security’ talk, but rather a look into the issue of groupthink, often caused by unconscious biases. Fundamentally, the variety of challenges in modern security environment should be met by having a diverse workforce generate ideas based on diverse points of view. In order to achieve this, we need to work on the issue of lack of diversity. Laura pointed out specific ways to avoid unconscious bias in hiring, particularly being aware of, as an interviewer/hiring manager, not looking for someone “just like me”. Hiring decisions should be matched on values, not on superfluous characteristics that lead to biased outcomes.

UX of Security Software. Audrey Crane leads a design firm, and made the case for proper UX design taking into account the people who will actually use the product. Her firm conducted research into usage habits related to SOC roles, and came up with a few personas (different from the typical ‘marketing’ personas) and then showed an interface design that takes those personas into account. Her recommendations are for vendors to take this aspect of the product creation process seriously, and for buyers of software to not only demand better software from a usability perspective, but to actively try out any software being purchased with the intended audience.

Social Scientist. Andrea Limbago brought a “social scientist” perspective to the broad issues around information security. She framed the discussion in terms of Human Elements, Geopolitical trends, and Data-Driven Security. The human elements section looked at a structure-agent dynamic (top-down versus behavioural) and advocated approaches to evolving the security subculture. Very interesting, as were the comments around security still having a cold war framework, and that there is a gap in the usage of data within security conversations.

Day 2 Sessions

Are we out of the woods?. Kelly Harrington from the Google Chrome team talked about Web security issues. She covered some key issues – how updates are not universal, how older devices get attacked, and the scourge of what Google calls Unwanted Software – and delved into details about the exploit kits (Angler, Rig, and others), trends of attacks on routers, plus examples of malicious behaviour by Unwanted Software. She wrapped up by sharing a little about what Google’s Safe Browsing API does and by giving actionable advice on web security. This was a great talk to complement the one on Android security. Finally, extra points for her for the Jane Austen references… 🙂

Criminal Cost Modelling. Chris Baker – a data scientist at Dyn – took us through a whirlwind tour of some underground markets and the actual data he found there for pricing stolen goods, exploit kits, or DDOS services. It was refreshing to see someone dive beyond “oh, underground markets exist” into actual markets, prices, goods, and the possible economic issues that exist in those markets. I loved this session. If there was one session I wish could have been longer, it is this one. I’ll be watching the video when it comes out, many times over.

Economics of CyberSecurity. This session was delivered by yours truly. Happy to announce that slides are available here. I focused on how a brief understanding of economic concepts – Marginal Cost of Information Goods, Information Asymmetry, Externalities, and concepts from Behaviour Economics – can help us rethink some of the broad challenges we face. I hope the audience liked it. I was happy with my delivery and did pick up on a few things I want to improve. I really hope to have the opportunity to keep bringing this message to others.

No Single Answer. Nick Merker – now a lawyer but formerly an infosec professional – and Mark Stanislav – now a security officer with experience as security consultant – focused on cyber insurance. Their session went into the difference between first-party and third-party insurance, then delved into the details of what cyber insurance options exist, what they typically cover (or not), and how these products are currently priced and sold. They also covered some misconceptions around the role of insurance in a risk management program, how infosec should play a role when purchasing cyber insurance products, and how a well-defined and executed security program can help with insurance premiums. I learned a ton, and really liked the session.


The sponsor area was relatively small (maybe 10-15 sponsors total) but the people I spoke to were knowledgeable and the selection was varied. Not so much your typical security vendor, but more those offering solutions that fit into a more modern architecture view of security. There were options for web app security, container security, source code security, etc… I did not focus much on it, given my role as individual contributor.

The conference schedule and details were available via the O’Reilly app (iOS and Android) and things worked well. One suggestion I have is that the app could offer a toggle for ‘hide past events’ on the Full Schedule view, as that would help people choose their next sessions without having to scroll around so much…

Food options during the breaks were varied and quite nice. I loved that we had sushi available on one of the food stations.

As a Speaker

My “field report” would not be complete without a comment about my experience proposing the talk and later as a speaker.

The submission process was well defined, the guidelines for what should go in the submission were clear, and the timelines were very fair. I followed the process via the website and the questions I asked the speaker management team were answered promptly and efficiently. Major thanks to Audra Montenegro (no relation) and her team.

The organizing committee has been very transparent about what their side of the selection process was like. This is tremendously insightful and helpful for future proposals. I particularly liked the use of blind reviews. Blind reviews help us as an industry increase the quality of the content that makes it onto the stage, AND increase the chance of hearing from a more diverse pool of contributors. What’s not to like?

Prior to the event, I was able to connect with Courtney Allen and we collaborated on a short email-based interview (which you can find here). She was fantastic to work with and has a keen insight into the role that O’Reilly Media can play in the security landscape.

Bottom line is: If you have defensive-focused security content you want to present, you’re open to being evaluated on the merits of your content, and want to work with great people putting it together, O’Reilly Security should definitely be on your short list of conferences to submit to.

MSSP Blues and the Theory of Agency


I like the approach of listening to a good podcast and then using it to expand on a particular idea. This time, I listened to Brakeing Down Security’s fantastic episode where they discussed the fallout from a very rocky response to a security incident by an unnamed Managed Security Services Provider (MSSP). Bryan Brake talked to Nick Selby and Kevin Johnson, based on Nick’s original blog post. Please read the original post and listen to the podcast, but here is the summary:
  • Nick helped an unnamed customer respond to a security incident.
  • This customer had a long-standing contract with an MSSP for monitoring their network, which included having dedicated gear on-site.
  • When Nick & customer got the MSSP involved, they had a number of nasty surprises:
    • The monitoring gear on-site was not working as expected, and had actually not worked for a long time.
    • The customer-facing employees at the MSSP were not only not helpful but almost evasive. Bailing out on phone calls, not giving straight answers, …
    • The actual value the customer was getting from the MSSP was far less than what they imagined, and was not useful during the incident response.

In short, a series of horrible revelations and interactions. Bryan, Nick, and Kevin make a TON of excellent points on the podcast. Worth the listen.

This whole incident reminded me of a topic I’d been meaning to write about…


“Agents” have “Principals”, but do they have “Principles”?

How do you feel about hiring someone to do something for you? Maybe it’s an employee you bring in to your company, maybe it’s a mechanic you hire to look at your car, maybe it’s a lawyer you call on to help you with a contract negotiation.

This is a very common economic transaction. When looking at it, we often use specific terminology: those doing the hiring are ‘principals’ while those being hired are ‘agents’.

In an ideal scenario, the person/company you hire (the ‘agent’) is having their interests met with the compensation they’re receiving, and will perform their tasks in a way that meets your interests (you’re the ‘principal’). In all those cases – and pretty much any relationship like it – there’s always a potentially thorny issue: despite being compensated for their efforts, are those ‘agents’ acting in a way that is aligned with the ‘principal’s’ interests? What happens when interests don’t align? This happens all the time:
  • Is a mechanic over-estimating the effort to fix a car?
  • Is the lawyer extending the negotiation because they bill by the hour?

Say hello to the “Principal-Agent problem”, a well-known problem in economics (and political science). It is also known by other terms, such as “theory of agency” or the “agency dilemma”. Fundamentally, it is the study of the dynamics between principals and agents with distinct self-interests in a scenario where there is significant information asymmetry.

Information asymmetry, you may recall, is the situation where one of the parties in an economic transaction has much more material knowledge about it than the other. There are further nuances depending on whether the information asymmetry exists before a contract is established – the agent has superior information to the principal from the get-go – or develops post-contract – as the agent begins to work, the discrepancy emerges. These lead to slightly different solutions.
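To make the misalignment concrete, here is a toy sketch in Python – all numbers and function names here are invented purely for illustration – of why a flat fee plus unobservable effort pushes a rational agent toward low effort:

```python
# Toy sketch (illustrative numbers only) of a principal-agent payoff under
# information asymmetry: the principal pays a flat fee and cannot observe
# the agent's effort, so the agent's rational choice is low effort.

def agent_payoff(fee, effort_cost):
    """Agent keeps the fee minus whatever the effort actually costs them."""
    return fee - effort_cost

def principal_payoff(service_value, fee):
    """Principal receives the value of the service minus the fee paid."""
    return service_value - fee

FEE = 100                        # flat fee, independent of effort
HIGH, LOW = 60, 10               # cost to the agent of high vs low effort
VALUE_HIGH, VALUE_LOW = 300, 50  # value delivered to the principal

# With a flat fee, low effort dominates for the agent...
assert agent_payoff(FEE, LOW) > agent_payoff(FEE, HIGH)
# ...even though the principal would gladly pay for high effort instead.
assert principal_payoff(VALUE_HIGH, FEE) > principal_payoff(VALUE_LOW, FEE)
```

Nothing deep here – just the shape of the incentive mismatch: whoever bears the cost of effort, but not the benefit, will rationally skimp.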

[Figure: Principal-Agent relationship (source: Wikipedia)]

Another common example of Principal-Agent problems is the conflict between a company’s shareholders – who have limited information about how it is run – and the company’s management. Depending on how that management team is compensated, they may make decisions that are not in the shareholders’ interest: boosting the stock price by playing accounting tricks, for example.

Both economics and politics have identified a series of mechanisms to help address Principal-Agent issues, but they fundamentally come down to a combination of:
  • Contract design – how compensation is dispensed (deferred), fixed versus variable, profit sharing, etc…
  • Performance evaluation – both objective and subjective
  • Reducing the information asymmetry – having more information to make informed decisions
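As a rough illustration of the contract-design lever, here is a toy Python sketch – again with invented numbers, and the simplifying assumption that only high effort produces the good outcome – where tying a bonus to a verifiable outcome flips the agent’s incentive:

```python
# Sketch (made-up numbers) of the 'contract design' lever: tie part of the
# agent's compensation to an observable outcome, so high effort pays off.

def agent_payoff(base, bonus, outcome_ok, effort_cost):
    # Agent earns the base, plus a bonus only when the outcome is verified.
    return base + (bonus if outcome_ok else 0) - effort_cost

BASE, BONUS = 40, 80
HIGH_COST, LOW_COST = 60, 10

# Assume (simplistically) that only high effort produces the good outcome.
high = agent_payoff(BASE, BONUS, outcome_ok=True, effort_cost=HIGH_COST)
low = agent_payoff(BASE, BONUS, outcome_ok=False, effort_cost=LOW_COST)
assert high > low  # incentives now point toward high effort
```

The same move shows up in the other two mechanisms too: performance evaluation makes the outcome observable, and reducing information asymmetry makes the bonus condition verifiable in the first place.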


Back to the MSSP debacle…
Now that we have this notion of Principal-Agent fresh in our minds, looking into the unfortunate MSSP incident we see the clear issues caused by the agency dilemma: there’s every indication that the MSSP did not perform their tasks with the interests of the customer in mind. That is very unfortunate, and the criticism they got was well deserved…

Still, let’s look a bit deeper into the whole thing. As we do, we see there’s plenty of potential blame to go around (again, I suggest reading Nick’s blog for deeper background):
  • First of all, did the original security team at the customer that chose the MSSP do so with the organization’s best interest in mind? Were they trying to actually implement a proper monitoring solution or were they just trying to check off a ‘have you contracted with a managed security vendor for monitoring?’ item from some compliance checklist?
  • There was plenty of blame for the MSSP not following up on a poorly deployed solution, but what about on the customer side? Why was there no oversight?
  • When the new security team started at the customer, what level of diligence was done on taking on a new infrastructure?
  • Did the management team at the MSSP care that a particular customer was not deployed properly? Did the team/individuals that created the on-boarding run-books for new customers care? Was the implementation team at the MSSP side properly measured on how to do on-boardings?
  • During the initial calls, were the employees of the MSSP acting on their own self-interest of “just get this customer off my back”? Were they empowered to do something but chose not to?
  • Back to MSSP management: did they structure internal operations to empower their employees to handle the exceptions and urgent requests?
One minor point where I differ from Bryan, Nick, and Kevin in their well-deserved roasting of the MSSP is that they seem to assume that the individuals at the MSSP had lots of freedom to deviate from the established procedures. I’m not so sure: it’s one thing for senior, knowledgeable professionals to do so, but it may be radically different for others. Again, what did the MSSP empower their team to do?

I’m being overly picky here to drive home the point that there’s potential for agency issues at multiple levels of the event chain, both within each organization (customer and MSSP) and between them. There can be agency issues between employees and employers, as well as between separate commercial entities.


The broader impact

The point for this post is broader than the MSSP debacle. By the very nature of our industry, it is extremely easy for Principal-Agent issues to appear:
  • There is tremendous information asymmetry in InfoSec to begin with: There are too many details to go wrong, things change too fast, too many moving parts, etc… Those who hire us are often not aware of what we do.
  • We have tendencies to compartmentalize information about security itself (“sorry, we can’t talk about this”). This leads to further information asymmetry.
  • With “security” being a latent construct – it is difficult/expensive to observe/measure – our principals have a hard time measuring the effectiveness of security efforts.
  • With the difficulty & cost in hiring for security – be it employees, contractors, or businesses – there is less flexibility and interest in exploring details of contract design.
How do we – as an industry – get better? How do we deal with this? I think it comes down to:
  • First, we need to be aware of the issue and recognize it for what it is: a well-defined economic problem for which there are broad classes of solutions.
  • Then, we should recognize our roles within the transaction:
    • Sometimes as a buyer – hiring outsourcers, buying security solutions.
    • Sometimes as a seller – employee/contractor providing security services/expertise to someone, or selling a security solution/service.
  • Finally, within our roles, we should expand beyond the technical nuance – networks, encryption, appsec, etc… – and:
    • clearly define and deliver reporting
    • pay more attention to contract design and service level definitions
    • perform periodic evaluations of the services
    • anticipate where principal-agent issues might arise and address them early on. Maybe it is creating a better report, maybe it is having a lunch&learn on the solution, etc…
  • Lastly, we should continue to grow as a community by sharing information – blogs, podcasts, conferences, … All that helps to reduce the underlying information asymmetry.
On that final point, I salute Bryan, Nick, and Kevin for their excellent podcast episode, and all the other community participants from whom we all learn so much…

If I had to summarize things:
  • Know what you’re buying. Educate yourself as needed.
  • Know what you’re selling and help your customer understand it as well.
As with so many other things, it’s not only an InfoSec issue, it’s an economic one…

On the economics of ransomware

We blinked, and the world changed on us.

This [long] post is not meant as doom&gloom on the scourge of ransomware, but rather a look at some basic economic aspects of this type of attack, and some recommendations for the future.

So far, 2016 is definitely the year of ransomware. Every vendor is talking about it in their pitches, the media is all over it (good articles here and here), etc. This post just adds to that cacophony, though hopefully with a different perspective.

“Prior Art”: Lots of people are now talking about ransomware, and I’m sure many have in the past too. I’d be remiss if I didn’t point out that Anup Ghosh of Invincea wrote a scarily prophetic blog post on this back in July of 2014! Check it out here. Also, I liked Daniel Miessler’s piece here.

Note: as I discuss these topics, I may sound insensitive to the plight of the victims. It’s absolutely not that: I think ransomware is a scourge that should be eradicated, with the full force of law enforcement brought to bear, though I’m pessimistic that it can be done.

There are several aspects of ransomware that make it interesting from an economic angle. Let’s explore some of them.

The “Taming” of Externality

First and foremost, to me, ransomware is the first major, widescale threat that significantly reduces the inherent externality of poor security practices. What does that mean?

Up until now, poor security practices by end users resulted in relatively light consequences for the users themselves. In some cases, being used as a spam relay might not have been noticeable; at worst there was the rare circumstance where malware meant having to reformat one’s PC. Yes, annoying and potentially painful, but manageable. From a behavioral economics perspective, biases such as mental accounting made it even less painful.

The broader costs of that infection – spam being generated, systems that had to be wiped, etc… – were largely invisible to the user. In market economics terms, all these costs were externalities. This means that the agent in the transaction – the user – was not taking those costs into consideration when making their choice – in this case, the poor security practices that led to an infection.

Enter ransomware. Now, the user is faced with the painful choice of paying the ransom – actual monies being stolen – or facing the imminent destruction of their data. Worse, depending on how that strain of ransomware behaves, it may have infected network drives and potentially backups as well. This triggers several well-known behavioural quirks/biases, including:
  • The salience of paying. It’s pretty clear that there is money being lost, and it’s your money (or your organization’s).
  • The immediacy of the request. It’s not something that can be postponed. Criminals know this, and exploit it: in many cases, ransoms increase as time passes.
  • Loss aversion. From Kahneman and Tversky’s work, we know the tendency of people to be loss averse.

All of this is, naturally, horrible for the user. From an economic perspective, though, it is interesting that this, in a way, “reduces” the externality of a poor security choice. The user now knows full well that their poor choice/practice may result in a non-negligible cost. [Edit: as someone provided feedback to me, just another way of saying “the chickens come home to roost”.] They’re understandably concerned, and rightly so. I don’t see this diminishing soon.

“To Pay or Not To Pay, that is the Question”

The second interesting point is analyzing the dilemma of deciding to pay the ransom or not. Even law enforcement seems ambivalent: recent advice has included both “pay” and “don’t pay”.

There’s two things to look at:
  • First, from a societal perspective, the issue is similar to the Tragedy of the Commons, a well-known economic problem. In the traditional Tragedy of the Commons, individuals overconsume a shared resource, leading to depletion. In the case of ransomware, it’s not quite the same: to me, it is closer to the “Unscrupulous Diner’s Dilemma”, a variation of the more traditional Prisoner’s Dilemma, where a group ends up paying more even though they all would prefer otherwise. With ransomware, the individual decision to pay negatively affects the community by supplying the criminals with additional funds, rewarding them for the crime and letting them reinvest in future capabilities for their tools, thus costing everyone more in the future.
  • Individually, people and organizations should recognize that the rational economic decision is not just simply “is the cost of paying the ransom less than the loss associated with losing the data”. The decision should be based on that cost, sure, but also taking into account:
    • Is that the end of it? Will paying the ransom this one time be an exception? In most cases, hardly… As ransomware proliferates, different gangs will keep attacking.
    • Will paying the ransom even get the data back in the first place? As @selenakyle nicely pointed out recently, there’s little recourse if things go wrong…
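A back-of-the-envelope sketch of that fuller decision might look like the Python below – every probability and cost is invented, purely to show the shape of the calculation beyond the naive “ransom vs. data loss” comparison:

```python
# Back-of-the-envelope sketch (all probabilities and costs invented) of the
# 'pay or not' decision: the naive comparison ignores the chance that paying
# does not recover the data, and the chance of being hit again.

def expected_cost_of_paying(ransom, data_loss, p_recovery, p_reattack):
    # Pay the ransom; with probability (1 - p_recovery) lose the data anyway,
    # and with probability p_reattack face a similar ransom again later.
    return ransom + (1 - p_recovery) * data_loss + p_reattack * ransom

def expected_cost_of_refusing(data_loss, p_restore_from_backup):
    # Refuse; lose the data unless backups can restore it.
    return (1 - p_restore_from_backup) * data_loss

pay = expected_cost_of_paying(ransom=500, data_loss=10_000,
                              p_recovery=0.7, p_reattack=0.5)
refuse = expected_cost_of_refusing(data_loss=10_000,
                                   p_restore_from_backup=0.9)
# With these made-up numbers, refusing (backed by good backups) is cheaper.
assert refuse < pay
```

The point is not the specific numbers, but that the backup probability and the re-attack probability belong in the decision at all.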

At the end of the day, we’re back to externalities:
  • Those recommending “don’t pay” don’t bear the cost of the advice: lost data, etc…
  • Those choosing to “pay” don’t [immediately] bear the indirect cost of enabling the criminals to continue their efforts.

A more realistic approach to handling ransomware should keep these in mind.

“Thy ransomware’s not accidental, but a trade.”

There seems to be consensus that what has enabled the rise of ransomware is, among other things, the maturity of bitcoin. That was the point clearly made by @jeremiahg and @DanielMiessler (here). I agree: bitcoin seems to have tipped it, but along with other changes to the overall ecosystem that appear to have made ransomware a more viable attack.

Like legitimate businesses, criminals have explored ‘efficiencies’ in their supply chain. As the main example: bitcoin (the peer-to-peer exchange, not the currency itself) has removed significant “friction” from the system. Whereas before the cashout scheme might involve several steps – each of which incurred fees for the criminal – the ubiquity of bitcoin has made the cashout process faster and cheaper. Taking out the middlemen, if you will.

Regarding bitcoin specifically, there’s a couple of interesting points:
  • More than being anonymous “enough”, bitcoin is a reliable and fast payment system. Even though it doesn’t provide full anonymity – the transactions on the blockchain can be traced to wallets – bitcoin is sufficiently opaque that the tradeoff of limited tracking against ubiquity/speed made it the currency/payment system of choice.
  • This leaves an interesting question about the bitcoin exchanges: Can we expect the exchanges to work against their own self-interest in restricting these transactions? What sort of defensive approach can we expect the exchanges to take? The danger of people equating bitcoin with ransomware is real, and the industry is right in defending itself.

All in all, from looking at the underground ecosystem, it looks like ransomware is a ‘killer app’: profitable, easy to use, etc…

“Much Ado About Nothing”? Maybe…

Finally, ransomware seems to have exploded into our collective attention, but is it really such an epidemic? While we deal with the onslaught of news/articles/posts about ransomware (including, of course, this post …), let’s recognize that there is very little incentive to “underreport” ransomware infections. To wit:
  • InfoSec vendors can point to ransomware as the new ‘boogeyman’ that every organization should spend more resources to protect against.
  • Internally within organizations, like with “Y2K”, “SOX”, and “PCI” before it, we can now possibly start to see “ransomware” as the shibboleth that enables projects to be funded.
  • Media sites latch on to the stories, knowing the topic draws attention. As an example, a lot has been made of the incident where a Canadian university opted to pay $20,000 CAD. Would there have been such bombastic coverage if the cause of the loss was, say, a ruptured water main caused by operator error? Not likely…

I can’t help but wonder if this is not a manifestation of a couple of things:
  • one, a variation of the Principal-Agent problem discussed above: an economic transaction where an agent is expected to act on behalf of a principal, but instead acts for their own benefit. In this case, bolstering the issue of ransomware above and beyond other relevant topics.
  • two, just your garden variety ‘availability bias’ from behavioural economics, where the ease with which we recall something inflates its perceived rate of occurrence.

In either case, we can take a peek at the well-known Verizon Data Breach Report. What do we see? Verizon’s DBIR shows that ransomware, even as a subset of crimeware, is not as prevalent as other attacks. See figure 21 on page 24 of the 2016 report.

“’Advice’ once more, dear friends, advice once more”

Wrapping up, then. There is a fantastic paper by Cormac Herley, from Microsoft Research – So Long, and No Thanks for the Externalities – in which he discusses how users ignoring security advice can be the rational economic decision, when taking into account the costs of acting on some security advice. The paper is from 2009 and is still extremely relevant. I consider it mandatory reading for any security professional.

Taking that into account, how should we frame security advice about ransomware?
(One could argue that ransomware is exactly the change in costs that invalidates the paper’s conclusions. Might be an interesting avenue to pursue…)

At least to me, too much of the security advice we see about ransomware is not taking into account the aggregate cost of acting on such advice.

Ultimately, the protection methods have to be feasible to implement. With that in mind, here’s a few recommendations.

For individuals:
  • Be aware of your own limitations and biases as you interact online. To the extent that it is possible, incorporate safe practices.
  • Leverage the automated protections you have available – modern browsers have sophisticated security features, your email provider spends a ton of resources to identify malicious content, etc…
  • Devise and implement a backup system that fits your comfort level, balancing the frequency of backups with their associated hassle.
  • Periodically check and possibly reduce your exposure by moving content to off-line or read-only storage. Just like you wouldn’t walk around at night in a risky neighbourhood with your life savings in your pockets, make it a practice to limit how much data is exposed.
  • If infected, don’t panic. Keep calm and, if you choose to do so, act promptly to avoid the increases in demands.

For corporate users, similar advice applies, boiling down to “don’t base your security architecture on the presumption that users are infallible at detecting and reacting to security threats”. Back it up with technology. On a tactical level, a few extra things come to mind:

  • Verify that current perimeter- and endpoint-based scanning of executables/attachments is able to identify/catch current strains of malware (ask your vendor, then check to make sure). It might be a sandbox approach, endpoint agents, gateway scanning, whatever. Belt & suspenders is a good approach, albeit costly.

  • Consider application-level monitoring for system calls on the endpoints. This includes watching for known extensions, as well as suspicious bulk changes to files.
  • Consider monitoring data-center activity for potential events of bulk file changes such as encryption. Yes, there may be false positives.

  • Re-visit the practices that allow users to mount multiple network shares.
  • Make sure the Incident Response playbooks include ransomware as a specific scenario. Prepare for a single-machine infection, multiple machines hit, as well as a scenario where both local and networked files are encrypted. While I’m skeptical of survey data such as this, getting familiar with how bitcoin transactions work might be a worthwhile investment.
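As one possible shape for the ‘suspicious bulk changes’ monitoring idea above, here is a rough Python sketch – the extension list, window, and threshold are all hypothetical, and a real deployment would hook into actual file-system events rather than being fed paths by hand:

```python
# Rough sketch (not production code) of watching for a burst of file
# modifications involving known ransomware-style extensions within a short
# sliding window. Extension list and thresholds are illustrative only.
from collections import deque
import time

SUSPICIOUS_EXTENSIONS = (".locky", ".crypt", ".encrypted")  # hypothetical list
WINDOW_SECONDS = 60
THRESHOLD = 100  # modifications per window that warrant an alert

class BulkChangeDetector:
    def __init__(self, window=WINDOW_SECONDS, threshold=THRESHOLD):
        self.window = window
        self.threshold = threshold
        self.events = deque()  # (timestamp, path) pairs

    def record(self, path, ts=None):
        """Feed one file-modification event; return True if it trips the alert."""
        ts = time.time() if ts is None else ts
        self.events.append((ts, path))
        # Drop events that fell out of the sliding window.
        while self.events and self.events[0][0] < ts - self.window:
            self.events.popleft()
        # Alert only when the burst is big enough AND involves a known
        # suspicious extension somewhere in the window.
        suspicious = any(p.endswith(SUSPICIOUS_EXTENSIONS)
                         for _, p in self.events)
        return suspicious and len(self.events) >= self.threshold
```

Expect false positives (as noted above for data-center monitoring): a legitimate bulk rename or a backup job can look exactly like this, so an alert should trigger investigation, not automated response.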

To me, ransomware is here to stay: it leverages too many human and economic aspects to simply vanish. As with many other “security” issues, this is just another one that was never just a technology problem, but a social and economic one. InfoSec professionals should keep that in mind, remembering that the solutions are not always technical…

And to purveyors and enablers of ransomware, “a plague on your houses!”