Converge and BSides Detroit 2017

This past week I had the privilege of presenting at both Converge and BSides Detroit. It was great to see the energy and commitment from the local community, as well as the practical and insightful content from so many presenters.

Thanks to the organizers! It was also great to have video recordings from Irongeek. Converge videos and BSidesDetroit videos are already available.

This is just a quick post with the links to the content from the sessions I delivered:

  • Converge – The 4 Eyes of Information Security. TL;DR: An introduction to the 4 eyes framework from ClearerThinking.org and some example applications to defensive infosec. Slides. Video.
  • BSides – Navigating Career Choices in InfoSec. TL;DR: A description of useful career planning concepts and methods, referencing Wardley maps, PersonalMBA, Cal Newport, and more. Slides. Video.

I’d love to hear your feedback. Comment here, reach out on Twitter, LinkedIn, etc…

 


The “Four Eyes” in Information Security

This is not a post about:
  • Middle-school-level verbal abuse of those who wear glasses.
  • The notion that a specific transaction must be approved by at least two people.
  • Clever wordplay about the dynamics of the relationship between the “Five Eyes” nations – US, Canada, UK, Australia, New Zealand – as it relates to surveillance and any recent bad blood between leaders.

Rather, it’s about a different way of looking at problems. Two questions inspired this post:
 
What if we broaden the arsenal of tools/methods we use for making progress with security initiatives?

What if we have been misdiagnosing issues and thus keep applying the wrong remedies?

ClearerThinking is an initiative/site/movement/… founded by Spencer Greenberg, a mathematician and entrepreneur, with a simple mission: “help people make better decisions”. I came across his site as I researched cognitive biases in the context of behavioural economics, and have been an avid reader ever since. He’s got tonnes of material on all sorts of topics, from helping people understand views across the political spectrum, to evaluating how rational you are, to – one of my favourites – an analysis of just how much your time is really worth. If you have the time, you might want to take a look. If you don’t have the time, then you absolutely must…

In “Learn a simple model for tackling complex problems”, Spencer describes the “4 Is” framework that he advises using when looking at issues. His post includes a link to a short video of a presentation he gave on the topic. In essence, his message – advice to other entrepreneurs – boiled down to:

When looking at a “persistent problem” (something that is important, looks insurmountable, and has not yet been resolved), it is critical to understand where others have failed. This can apply both at a societal/world scale, as well as within organizations. The failure will usually derive from one (or a combination) of the following causes:
  • Individuals or groups were not exposed to the right incentives – positive or negative – to solve the problem.
  • There is ignorance about how to handle the problem, or missing information that prevented progress.
  • While other elements were in place, the initiative had a severe lack of resources due to limited investment into the issue.
  • Finally, while all elements might have been in place, human irrationality – through cognitive biases or a poor decision-making process – impeded action.
Hence, the “4 Is”: incentives, ignorance (or information), investment, irrationality.

Once the issue has been properly diagnosed, then there are different types of remedies for each:
  • Incentives. Well, create the right incentives: these can be positive (monetary rewards, recognition, etc…) or negative (introduce regulations/rules). There’s a famous example of how FedEx solved issues with delivery delays by changing the compensation model for its workers, so that it rewarded them for finishing the job faster.
  • Ignorance. In this case, identify how to provide the additional information. Is it a matter of simply educating the participants about something they didn’t know or thought incorrectly? Spencer uses the example of AIDS-prevention campaigns that fail because local participants held wildly incorrect views about how contraceptives work. Or is it a matter of the information needed not existing in the first place? In that case, the answer might be basic research, or data collection outside of the organization.
  • Investment. Here the answer is, quite simply, find ways to redirect more resources to the problem. It might be an issue of justifying additional budgets, or perhaps redirecting resources from elsewhere. The example Spencer uses is poignant: depending on your values, you should care that a lot more resources are spent saving pets than non-pet animals from cruelty. Should the money and attention be redirected?
  • Irrationality. Finally, the way to address human irrationality (be it cognitive biases or flaws in decision making) can include the use of checklists (to reduce mental strain during stressful times), or proper design of system defaults. This is what behavioural economics practitioners refer to as “choice architecture”, and there are fantastic examples of the effect of this with organ donations and medical prescriptions.
To be clear, it might be that a particular issue results from a combination of these, but without applying this kind of clearer thinking, we’re bound to miss out on addressing the problem.
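To make the diagnosis-to-remedy mapping concrete, here is a minimal sketch of using the “4 Is” as a triage checklist in code. This is my own illustration, not anything from ClearerThinking.org, and the remedy descriptions are placeholders:

```python
# Hypothetical sketch: map each diagnosed cause (one of the "4 Is")
# to its class of remedy. The remedy text is illustrative only.
REMEDIES = {
    "incentives": "Create positive rewards, or introduce rules/regulations.",
    "ignorance": "Educate participants, or gather the missing information.",
    "investment": "Justify new budget, or redirect existing resources.",
    "irrationality": "Use checklists and redesign defaults (choice architecture).",
}

def triage(diagnoses):
    """Return a remedy class for each diagnosed cause.

    A real problem may combine several causes, so this accepts a list.
    """
    unknown = set(diagnoses) - REMEDIES.keys()
    if unknown:
        raise ValueError(f"Not one of the 4 Is: {sorted(unknown)}")
    return {cause: REMEDIES[cause] for cause in diagnoses}

# Example: a failing awareness program diagnosed as a mix of two causes.
plan = triage(["incentives", "irrationality"])
```

The point of the exercise is simply to force the diagnosis step before jumping to a remedy.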

This is a great framework for looking at problems. I love it!

Applying it to Information Security

To me, the application of the “4 Is” framework to security is direct, simple, and essential. Let’s look at a few scenarios:
  • Effectiveness of Security Awareness training. Security awareness is a common component of security programs, but it is often structured heavily on the “information” side of things. Could the answer to better security behaviour be better incentives (again, positive or negative)? Or perhaps the matter is irrationality, and we need to review the choice architecture (defaults)?
  • Deficiencies in rolling out patches. Is poor patch deployment a matter of information (teams don’t know when to roll them out), investment (it’s too onerous to roll them out using current mechanisms), incentives (no one other than security cares, a “don’t fix what ain’t broken” mentality), or even irrationality (patches sit too low on the checklist of things to do)?
  • Non-compliance to internal or external requirements. This is another area where a deeper look into the issue using the 4 “I”s framework can yield interesting results. In many cases, we seem to jump to conclusions and infer the cause of failure from our pre-conceived notions. Is that really the case?

The list goes on. We could cover software quality/security, risk management, technology adoption, security culture, …

Moving forward

I really like looking at other areas of knowledge for how we can apply their learnings to information security. This post was an example of that.

Hopefully, this post gives you another tool in your toolset when going about your work with security.

When looking at a security issue, think it through: how much of it is a matter of incentives, information, investments, or irrationality? The answer might not be obvious, but working through the question will likely help you …
Note: all images on this post are from clearerthinking.org.

A question about Behavioural Economics

I just came back from a wonderful public event organized by the Behavioural Economics in Action at Rotman (BEAR) team with Richard Thaler, the famed economist who helped launch behavioural economics.

Attendees received a copy of his latest book – Misbehaving, which I’m making my way through (about 50% done) and now own in paper, digital, and audio formats… – and had the opportunity to hear him talk for about an hour on anecdotes about behavioural economics and answer a few audience questions.

I tried to ask a question, but was not picked in the audience. So, I’m documenting it here and hoping readers can help me with pointers or answers (of course, I’d be thrilled if Prof. Thaler would address it himself).

My question[s] – with some background but hopefully not annoyingly “monopolizing the mic”:

On one side of the spectrum, we know that behaviour factors play a huge part in individuals making transactions – choosing to donate organs, saving for retirement, …. On the other, we see high institutional ownership of shares and to my knowledge the significant majority of stock trades are either algorithmic or at least “professional”, which we expect to fall under the purview of efficient markets, etc…

At which point in this “spectrum of rationality” do things change? At which point, and for which kinds of problems, should we stop favouring behavioural factors over traditional ones?

This is relevant to my interests in information security as we determine which kind of program or action should be more “behavioural” or should be more “rational”. At which point should the actions of agents be modeled one way or the other?

I’m always attempting to learn more, so maybe this is just a naïve question that’ll be answered further down my studies, but would love to hear insights.

Any ideas? Comment below or reach out to me.

Thanks!

 

Personal Knowledge Management – an accidental reminiscence…

I love Twitter. Over the years, I’ve been fortunate to be able to find a style of following/retweeting/conversing/… that works well for me: lots of lurking, occasional retweets, and some actual conversation now and then. One of those conversations led to this post: David J Bianco commented about the Evernote API, and Kyle Maxwell and I chimed in…

 

As I started to write about my ‘system’ for ‘personal knowledge management’, I recalled an old (but still online) blogging attempt of mine, where I covered this subject.

As I read those posts, it’s uncanny how true to form I stayed over the past 5+ years, and how much of the same problems remain…
 

Stayed the same/minor changes:
  • I still use the same “modified GTD/InboxZero” approach. It has resisted the test of time pretty well.
  • I still keep the same type of inboxes, but with more emphasis on Twitter now. Tweetbot is my primary interface, with the occasional glance into the official Twitter apps on iOS or web interface.
  • My personal knowledge system (lots of mind-maps) keeps growing, though some maps show their age. If anything, they now serve as a jumping off point to newer information. Also, I’ve standardized on keeping those files “on the cloud” (currently Dropbox).
  • I still use “Read it Later” (now renamed Pocket), and still struggle with how to extract information from it in a meaningful way. My “reading list” is now several thousand articles long (yeah, good luck clearing that…).
  • I still use Evernote, no longer as bookmark manager, but for writing and note taking. I have 3 notebooks:
    • a local (not synced) work notebook for notes from customer meetings, etc…
    • a shared notebook with my family for local notes. Rarely used, though.
    • a personal notebook where everything else goes. This blog post started as a note there.
  • I still use Mind Manager, now on the Mac. Not as powerful as the Windows version, but good enough for me.

 

Changes:
  • MLO is a wonderful tool, but not available on Mac. I switched to Things, which is not perfect, but does a very good job. One thing I really got into is being able to use it on both desktop and mobile (iOS) platforms. This was something I didn’t care much for back then, but have grown fond of.
  • RIP “Google Reader”. Now I use Feedly, and links of interest (that didn’t show up on Twitter first), get sent to Pocket.
  • Not related to the tools directly, but I now choose to support the paid versions of these tools (Evernote, Pocket, Feedly, …) whenever I can. It’s affordable and I feel good doing a little bit to keep these tools running…
My struggles still remain. I’d love to hear how others handle it…
  • I come across LOTS of interesting content on Twitter – links to articles, specific images, … Lots of this interesting content gets saved into Pocket, but I only go back to them when using Pocket’s own search capability.
  • It is unrealistic to expect I’ll read all on my Pocket list, or that I’ll ONLY save stuff on Pocket that I’ll surely read later.
  • Oftentimes I’ll struggle to find something I just *know* I came across before. This is worse for images (memes et al.).
  • I get the nagging sense I should be able to leverage Evernote better, but not sure how.
Maybe I can pick up on what David Bianco was hinting at with his Evernote API question and find a way to automatically save things in different notebooks, but not sure yet…
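As a first stab at that idea, here is a rough sketch of just the routing decision – which notebook a captured item should land in. The notebook names and keyword rules are entirely hypothetical placeholders, and the actual save step would still need to go through the Evernote API, which I haven’t attempted here:

```python
# Hypothetical routing rules: which notebook should a captured item land in?
# Notebook names and keywords are my own placeholders, not a real setup.
ROUTES = [
    ("Work",   {"customer", "meeting", "project"}),
    ("Family", {"home", "family"}),
    # Everything else falls through to the catch-all personal notebook.
]
DEFAULT_NOTEBOOK = "Personal"

def route_item(title, tags):
    """Pick a notebook for an item based on its title words and tags."""
    words = {w.lower() for w in title.split()} | {t.lower() for t in tags}
    for notebook, keywords in ROUTES:
        if words & keywords:
            return notebook
    return DEFAULT_NOTEBOOK
```

Keeping the routing as a pure function means it can be tested on its own before wiring it up to any note-creation calls.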

 

On a related problem, I also dabble with things like paper.li, scoop.it, … as a way of possibly generating *some* value to others from what I follow, but still haven’t found the right workflow for me. When do I share something? How to optimize that process at the point I collected a link? Still looking for answers…

 

So, any comments/advice/…? Comment on this post or hit me up on Twitter

 

Thanks!

On the “shortage” of InfoSec professionals…

It was interesting that two podcasts I listen to – PVCSec and Down the Security RabbitHole – both covered the ‘shortage in InfoSec’ topic. Both discussed the nuances and uncertainty around it. This is my contribution to that debate, just looking at things from a different perspective.

Getting things out of the way: yes, I think there is a shortage, and the sooner we acknowledge it, the sooner we can work on addressing it. Unfortunately, I don’t have the hard data I wish I had to back up this claim, and will have to make semantic and anecdotal arguments. Yes, “the plural of anecdote is not data“.

Semantically, the very definition of shortage (from Oxford Dictionary) is:

{noun} A state or situation in which something needed cannot be obtained in sufficient amounts:

shortage of hard cash

food shortages

Anecdotally:

Yes, I admit there might be confirmation bias in my perspective.

I really enjoyed @catalyst‘s position here (and Rafal had a brilliant description of why we’re failing at training Tier1-type resources in the podcast), but I wanted to approach the problem from a different angle.

Let’s drill down into this a bit more and explore what this ‘shortage’ really means. To me, when people refer to ‘InfoSec professional shortage’, they really mean that:

“the current hiring process is not finding a large enough supply of professionals with a particular skillset and/or experience, for these roles within these teams at these companies, at these levels of compensation.”
So, as we analyze each of these sections, we can inquire/debate things such as:
  • current hiring process – is it broken? is the flow between HR and hiring managers appropriate? Is the screening process contributing to this? Are responsibilities and incentives for each party in this process properly allocated?
  • not finding – are they looking in the right place? Just waiting for resumes to arrive? Actively engaging with communities?
  • large enough supply – is the number of people being required a true necessity, or a reflection of inefficiencies somewhere else in the environment?
  • with these skills – are the skills being required actually relevant? How much of asking for a particular skill is not ‘playing it safe’, as opposed to understanding that some skills are easily transferable (especially products in same space: FWs, IDS, …)?
  • this type of experience – same as with skills, are the experience requirements really required, or are they a byproduct of inefficiency somewhere else?
  • for these roles – are we looking at the right roles for these people? Is it something that should be done within InfoSec or another team? Internal or outsourced? Human-driven or automated?
  • these teams at these companies – is it a matter of leadership? Are the teams structured in a way that encourages professionals to apply? Is the reputation of the team, manager, or company such that it would attract qualified candidates?
  • at these levels of compensation – finally, “rubber meets the road”. Is the overall compensation acceptable? Is the package of benefits attractive to the professionals looking into these roles?

I think @catalyst is right in that any one of these areas present an opportunity for enlightened leadership. All of them can be ‘fixed’:

  • adjust the hiring process to not “throw baby out with bathwater” – collaborate with HR to screen candidates adequately at all stages, look for candidates in different ways.
  • take a good look at the skills and experience being required.
  • reconsider the role and look for alternatives, considering the ‘whole’ picture and not just the immediate need to fill a seat.
  • take a good look in the mirror and check to see if there’s anything in the structure, culture, or leadership approach that might be driving candidates away.
  • finally, understand that there may be a case of ‘surge pricing’ and adjust expectations on compensation.

In economics, the labour market is a perfect example of information asymmetry at play. George Akerlof, Michael Spence and Joseph Stiglitz were awarded the Nobel Prize for their work on this. Their work describes two mechanisms for reducing that asymmetry – signalling and screening. Learn about them and consider how your process currently implements them (even if unconsciously).

Finally, there may be perverse incentives at play: is the hiring manager (or HR) evaluated on the short term of ‘stopping the pain’ (just get someone!) or are there broader considerations about the health of the business (Raf’s point is poignant here).

So many opportunities here for leadership, like @catalyst said… but still, IMHO, there is a shortage of information security professionals.

As I look at the breakdown of what the shortage really means, I’m reminded of that adage: “Good, Fast, and Cheap. Pick two.”

Opinions on a report (inspired by DtSR #144)

( Meta: This was going to be a very different post. This version is, IMHO, much better. It is only better because of the incredible generosity of people I deeply admire, who gave me something precious: good advice, and their time. I am deeply grateful. You know who you are.)

Before we begin, two disclaimers:

  • Disclaimer 1: When discussing my relationship with statistics, economics, and data science, I like to use the same expression I use to describe my golf and chess skills: “Don’t mistake enthusiasm for accomplishment (or competence)“. What I write here may be completely off the mark and laughed at by a first-year stats/econ student. If that is the case, let me know and I’ll gladly eat some humble pie…
  • Disclaimer 2: I am a strong supporter of (ISC)2. I have been a CISSP since December 2000 (my number is in the low 4 digits…), I dutifully keep the certification up-to-date, I vote in the elections regularly, and more. Anything I write here is as a way of supporting the broader mission of the organization by highlighting where we can improve the deliverables for our community.

With that out of the way…

I was listening to the excellent Down the Security Rabbithole podcast, episode 144. Rafal Los, James Jardine and Michael Santarcangelo had a really nice interview with David Shearer, executive director of (ISC)2. David and the others were discussing the recent workforce report released by the (ISC)2, along with several interesting perspectives on how to engage more people in security, growth options for InfoSec professionals, etc… Loved it!

OK, let’s look at the report then. Having recently attended Tony Martin-Vegue’s excellent BSidesSF talk about issues with statistics in InfoSec, I figured reading this report from that perspective would be a good exercise… It was based on a broad survey sent out last year. The original survey was quite long, so I know a lot of effort went into creating it. I wondered at the time if the length would be an issue. As I prepared to write this post and looked at the mailing list archives for CISSPforum, it seems I was not the only one: several people complained about the length.

Well… In my opinion, there are several issues with the report that are worth calling out. Listing them here was both a way for me to apply what I learned to the real world, and a way to hopefully be part of the broader conversation on how to improve future versions. Hoping to act as a good member of our community, I didn’t want to just ‘throw a grenade and walk away’: underneath each issue, I suggest what, in my view, might be a better approach.

Part 1 – Key issues

  • A major theme for me throughout the report is that many security questions are asked outside of the purview of the respondents’ expected area of responsibility (as self-reported later in the report). How can we evaluate answers about software development practices given by network security teams, or vice-versa? What visibility does your typical tier-1 security analyst have over plans for use of advanced analytics, or outsourcing to answer questions about that? To me, this affects the comments about all the security issues listed throughout the report.
    • My suggestion: Get more detailed information from the respondents and only ask questions that are relevant. If that kind of selection was already done, make it known in the report itself. “When polling about budgets, we only tallied responses from those that indicated they have budget management authority” and so on. Does it make for a more complex survey? Yes. Does it make it better? Also yes.
  • I think there’s confounding in the salary data. When looking at salary ranges (p.18) for (ISC)2 members and non-members, the salary ranges for (ISC)2 members are significantly higher, but then so is their tenure (p.19). Their roles (as reported on p.22) also seem to skew towards more senior positions. Surely one would expect that more experience in a career would result in higher wages, no?
    • My suggestion: Instead of single salaries, break them down by roles, possibly by tenure too. This makes it much more transparent what the effects are.
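To illustrate the confounding concern with a toy example (the numbers below are invented, not taken from the report): membership can appear to carry a salary premium in the aggregate even when, within each tenure band, members and non-members earn exactly the same – tenure, not membership, drives the gap.

```python
# Invented data: (member?, tenure_band, salary in $k). Within each tenure
# band members and non-members earn the same; members just skew senior.
people = [
    (True,  "senior", 120), (True,  "senior", 120), (True,  "junior", 80),
    (False, "junior", 80),  (False, "junior", 80),  (False, "senior", 120),
]

def mean(xs):
    return sum(xs) / len(xs)

member_avg     = mean([s for m, _, s in people if m])       # ~106.7
non_member_avg = mean([s for m, _, s in people if not m])   # ~93.3
# The aggregate comparison suggests a membership premium...

# ...but conditioning on tenure removes it entirely.
for band in ("junior", "senior"):
    m  = mean([s for mem, b, s in people if mem and b == band])
    nm = mean([s for mem, b, s in people if not mem and b == band])
    assert m == nm  # identical pay within each band
```

Breaking salaries down by role and tenure, as suggested above, makes exactly this kind of effect visible.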
  • The report is based on a survey and, with the methodology section being very short, I can’t be confident that the many issues associated with surveys – selection biases (selection of the sample, representativeness of the sample against the population, etc…) and response biases (non-response, voluntary response, leading questions, …) – were properly handled. There were many questions on my mind as I read the report: How was the survey distributed? What was the response rate? What were the incentives to answer? What were the exact questions? Were there leading questions? How did these factors influence the response? How representative is the overall response pool? etc… With “producing statistically significant findings” being a design goal for the report, did we achieve it? I don’t know (but guess we did not).
    • My suggestion: It would be much better if the report included a more detailed survey process and methodology section, discussing some of these topics explicitly. As an example, I really like the frankness with which Verizon approaches this on their excellent VZ DBIR (see their Appendix B).

Part 2 – Other Issues

  • I disliked the structure of the report in that it provided minimal information about the population then jumped straight into their opinions (“State of Security”), only to review the makeup of the population later on.
    • My suggestion: It would really help if the report was structured so that details about the population – job roles, tenure, etc… – were brought up front, so that we can keep them in mind when reading the sections about people’s opinions.
  • Try as I might, I have no idea how the authors decided to extrapolate the demands for the future workforce (around p.32)… How were these numbers reached? What factors were taken into consideration? What are the margins of error?
    • My suggestion: Providing a much more detailed description of the underlying assumptions, methodology and data will make this section much more defensible and, by extension, much more useful for those using it as part of their rationale for workforce planning.
  • When looking through the ‘issues’ sections, I kept asking myself: how were these options presented to the respondents? To me, some options were not mutually exclusive, as they included both vulnerabilities and threat actors. How to choose between “hackers”, “configuration mistakes” or “application vulnerabilities”? Between “internal employees” and “configuration mistakes”? How were people led to their answers?
    • My suggestion: As the survey improves with time, a more rigorous validation of the questions will help clarify these points. Also, including their definitions in the report itself will also add clarity: “by ‘Hackers’ we mean this:…
  • The choice for many charts is “3D pie”. Tony’s BSidesSF session made excellent points about how 3D pie charts are extremely deceiving. This was clearly evident here, with the visual distortions skewing interpretation throughout. In addition to Tony’s session, there’s a great paper on pie charts here.
    • My suggestion: replace the 3D pie charts with bar charts or other option.
  • I think the discussion of global trends – salaries, primarily – was very brief and did not take into account what I think are relevant factors, such as exchange rates, cost of living, and inflation, to name a few. Page 18 talks about a 2.1% increase in salary for members versus 0.9% for non-members, but is that before or after inflation? When looking at exchange rates, it might even work in favour of the argument that salaries are rising, but there is no discussion of rates anywhere. When one looks at how the US dollar behaved against other currencies (here are a couple of screenshots below), with greater than 20% variation, how does the survey-over-survey gain of 2.1% compare?
    • My suggestion: a more rigorous discussion of the financial conditions surrounding the respondents will help put proper numbers in context.

[Screenshots: US dollar exchange-rate charts, May 2015, showing >20% variation]
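As a quick worked example of the inflation point (the 2% inflation figure here is hypothetical, not from the report): converting the nominal survey-over-survey gains into real, inflation-adjusted rates can flip the story entirely.

```python
def real_growth(nominal, inflation):
    """Convert a nominal growth rate to a real (inflation-adjusted) one."""
    return (1 + nominal) / (1 + inflation) - 1

# The report's nominal figures against a hypothetical 2% inflation rate:
members     = real_growth(0.021, 0.02)   # ~ +0.1% real gain
non_members = real_growth(0.009, 0.02)   # ~ -1.1% real loss
```

Under that assumption, members roughly tread water while non-members fall behind – a very different headline than “salaries rose 2.1%”.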

  • I think the report is not clear on the distinction between job growth and self-identification on a role. As an example, p.21 claims “demand for security architects leads job growth”. Based on my reading of the report, it means more people ‘self-identified’ as security architects. Unless there was a review of job openings or detailed interviews with hiring managers about roles, I don’t think we can make these claims about job growth.
    • My suggestion: Clarify terms (goes back to my main point about better methodology description).
  • There were a few more quibbles, but nothing as critical as the main points above.
    • Page 21 presents the breakdown of respondent profiles. Well, if you add up the percentages listed for job titles, you get 70.9%. This means a full 29.1% of respondents (almost 3 times the top category – security analyst at 10.5%) did not identify themselves. The chart leads one to think that most of the respondents were security analysts.
    • The “train & retrain” section (p.35) has several options that all circle around “training”. How to choose between them?

Part 3 – Positive things and conclusion

Not all is “doom and gloom”. I also want to highlight a few themes I really liked:

  • I particularly liked the reasoning about the challenges of on-boarding employees on page 32.
  • I wholeheartedly agree with the general sense that ‘communication skills’ are important and should be improved on.
  • Towards the end, I tend to agree with the approach listed in “The Last Word”, about working together to address several issues.

Let me make it clear: I strongly believe that the (ISC)2 is working hard to make positive changes in the InfoSec industry and deserves our support. In our industry, we suffer from not having enough good, unbiased content to discuss our issues. I applaud the initiative to go out and try to compile what the profession is thinking, what challenges we see, how we can move forward, etc…

That being said, my opinion is that, as an (ISC)2 member, I’d value much more having a statistically sound report over the ‘experience’ of participating in a less stringent survey.

One of the things I learned when writing this post is that the report is not issued by the (ISC)2 membership organization itself, but rather our charitable foundation. (ISC)2 Foundation then contracted the survey out. Let’s not forget that a lot of the work in defining the survey is done by volunteers and we need to help our community where we can. For me, right now, I think I can help by writing this post. In the future, who knows?

On Plane Hacking… We’re missing a point here.

[ Target Audience: Our InfoSec industry… ]

Yes, I know we have been inundated with the discussion on Chris Roberts’ saga. Feel free to read the thousands (closing in on a million as I write this) of links about the whole situation to catch up. FBI affidavits, ISS, “sideways” (yaw?), NASA, … I particularly liked Violet Blue’s summary of the recent history of airplane security. Another very interesting post is Wendy Nather’s, here.

Yet I think a critical point is being lost in the debate of whether he was able to do what he did or not.

I don’t care whether he was actually able to interfere with avionics. Being uninformed about it, I prefer the heuristic of believing the aviation experts that have, in great numbers, called out ‘B.S.’ on the claims.

What I *do* care about is the alleged pattern of behaviour of trying this with disregard for the possible consequences to the public.

I am NOT against security research, holding those responsible to task, responsible disclosure, “sunlight is the best disinfectant”, … I think all those doing responsible research on car hacking, medical devices, avionics (read Violet Blue’s excellent summary), etc… deserve our gratitude and support.

What I AM strongly against is the apparent complete disregard for the well-being of fellow passengers. It is alleged that this was done ’15 or 20 times’ on several flights. I don’t know if the flights were mid-air or not. I don’t know if anyone noticed, or should have noticed. What I do know is that the consequences of those security tests could have affected innocent bystanders. That is NOT ok.

“Oh, but if he couldn’t really affect the plane, it’s ok, right?” NO, it is not. What if there were adverse consequences? What if the pilots noticed something and – being safety conscious – decided to divert flights?

Some might say – “ok, that is the price we must pay for security”. It was not his call to make, was it?

As an industry, we can’t carry around this sense of entitlement and be seen in good light by the broader public.

He has apparently shown poor judgement on other occasions – talking to the FBI on his own, without legal representation, is another example – but that just affects him.

That being said, I echo Bill Brenner’s sentiment on moving forward. I’ve never met Chris (hope to one day) and I wish that he is able to learn from this debacle and grow as a professional.

For the broader industry, let’s look at the mess this has created and learn a few lessons too…