SIRAcon, day 1

I was extremely fortunate to be able to attend my first SIRAcon last week: it’s not often that one of those ‘aspirational’ conferences happens at just the right time (I found a way to fit it into my schedule), not too far from home (Toronto to Detroit is not too long a drive), and at an affordable price (I’m working on a tight budget here…).

It was a fantastic experience. Many, many thanks to the hosts (Quicken Loans), sponsors (CBI, RiskLens, BT, and BitSight), organizers (David Musselwhite and team), … The venue was great, and it was wonderful to see how proud the team is of Detroit and of the turnaround that is happening.

My plan is to post a quick summary of the sessions and then, later, some more general comments. There was a decent amount of live tweeting (spread across three hashtags: #SIRAcon2015, #SIRAcon15, and #SIRAcon), but I thought a quick summary of each session would be a nice idea too.

Warning: my ‘starstruckness’ was out in full force. Totally justified 🙂

 

Keynote: Douglas Hubbard (@hdr_frm) and Richard Seiersen (@RichardSeiersen)

Doug and Richard opened up SIRAcon with a tour de force on applying quantitative methods to risk analysis. They presented interesting findings showing that an appreciation of qualitative methods seems to be correlated with less comfort/familiarity with statistics concepts. To me, this presents a fantastic opportunity to pursue better dialogue through education 🙂

I loved the message that ‘we don’t have enough data’ is not an excuse. They presented a good case for using the beta distribution as a stepping stone from a world of ‘no data’ (where the uniform distribution applies) to a scenario where data is available.
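To make that concrete, here’s a minimal sketch of the idea as I understood it (not their exact example, and the numbers are invented): a uniform prior is just Beta(1, 1), and each observation updates it through simple counting, so even a handful of data points moves you beyond ‘no data’.

```python
# A uniform prior over an unknown rate is Beta(1, 1); observations
# update it by simple counting. Numbers are hypothetical.
from scipy.stats import beta

hits, misses = 2, 10  # e.g., 2 incidents seen across 12 peers (made up)

posterior = beta(1 + hits, 1 + misses)

print(f"mean estimate: {posterior.mean():.3f}")
print(f"90% credible interval: "
      f"{posterior.ppf(0.05):.3f} to {posterior.ppf(0.95):.3f}")
```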

Oh, bonus points for Latinizing the [in]famous bear analogy as ‘Exsupero Ursus’ 🙂

 

Jay Jacobs (@jayjacobs) and Tom Montroy (@TomMontroy)

Jay presented the interesting concept of Information Security as a ‘Wicked Problem’ and introduced the Cynefin Framework as a basis for discussing how notions of good/best/current/… practice apply to our problem space.

Later, Jay and Tom presented several interesting exploratory data visualizations looking into how SSL/TLS practices correlate with botnet activity, as well as how indicators such as BitTorrent traffic appear related to botnet activity and breaches.

I think it was a perfect example of how a data-driven approach to security can lead to insights we would not otherwise have.
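As a toy illustration of that kind of exploratory check (entirely made-up numbers, and certainly not their dataset), one can look for a rank correlation between an indicator and observed botnet activity:

```python
# Toy version of an exploratory check: does an indicator correlate
# with botnet activity? All numbers invented for illustration.
import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "bittorrent_hosts": [0, 2, 5, 1, 9, 3, 7, 0],  # hypothetical per-org counts
    "botnet_sightings": [1, 3, 8, 2, 11, 2, 9, 0],
})

rho, p = spearmanr(df["bittorrent_hosts"], df["botnet_sightings"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```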

 

J. Wolfgang Goerlich (@jwgoerlich) covered the topic of culture and its relation to risk, something he’s been deeply involved in. He collaborates with Kai Roer (@kairoer) on the excellent Security Culture Framework. There were several good examples of how changing user behaviour led to successful outcomes: security awareness training, SDLC, DLP, and physical security. More than that, though, he emphasized the importance of proper feedback loops when addressing culture changes, as well as what I thought was one of the most important messages: culture changes “one conversation at a time”.

 

Barton Yadlowski (@bmorphism) is an applied mathematician at HurricaneLabs, and presented an introduction to, and the case for, machine learning in InfoSec, with examples using Splunk, scikit-learn, and Spark. He showed how tools such as Splunk can help with unstructured information and normalization, followed by exploratory data analysis. From there, he gave an interesting introduction to broad machine learning topics and how they can be used to detect anomalies in different scenarios.

It’s always nice to see the method descriptions floating around start to come together with more practical applications.
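For flavour, here is a minimal anomaly-detection sketch in scikit-learn (not Barton’s actual example; the features and data are invented, but think per-host event counts extracted from something like Splunk):

```python
# Minimal anomaly-detection sketch: flag hosts whose event-count
# profile looks unlike the rest. All data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=[50, 5], scale=[10, 2], size=(200, 2))  # typical hosts
odd = np.array([[250.0, 40.0], [5.0, 30.0]])                    # two outliers
X = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=42).fit(X)
labels = model.predict(X)  # -1 = anomaly, +1 = normal
print(X[labels == -1])     # the flagged hosts
```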

 

Karl Schimmeck (@kschimmeck) covered an effort by SIFMA (Securities Industry and Financial Markets Association, an industry association of 300+ financial services firms) to simplify the process of performing 3rd-party risk assessments. This is extremely important for reducing compliance costs for financial services firms and vendors alike, and hopefully it will be adopted by regulators and auditing organizations. The approach, which uses SharedAssessments and SOC2 as initial guidelines, then maps specific custom requirements, and later maps to NIST-CF, looks very promising.

As someone who has been on the receiving end of those questionnaires, I really(!) look forward to this effort being successful.

 

Jack Whitsitt (@sintixerr) led us down a different path. Drawing on his broad experience and recent activities well beyond typical InfoSec, he urged us all to consider the much broader environment in which InfoSec exists. There are fundamental issues at multiple levels of abstraction – from the individual all the way to the global – and, when it comes to organizations, how can we deal with (and support) InfoSec teams being thrown into the middle of geopolitical conflicts?

I loved the talk, but I would like us to examine more closely the assumption that things are getting worse: are we being affected by the availability bias from all the breaches? That’s an open question (to me, at least).

 

Thomas Lee from Vivo Security stayed consistent with the ‘quantitative’ theme of SIRAcon and looked at some interesting correlations among factors that may be related to breaches/compromise. He then made a strong case for adopting a more ‘actuarial’ approach to security programs, taking a better look at loss data as a method of selecting security controls. He presented an example of applying this methodology to a mid-sized pharmaceutical company, showing how performing an endpoint update was actually a great way to reduce the impact of phishing.

Personally, I think the approach has merit, as long as we can avoid the trap of spurious correlations. I would have liked to see more confidence intervals there too 🙂
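On that note, here is a toy sketch of the kind of interval I had in mind (all loss figures are fictional, and the control’s effect is a pure assumption): bootstrap a confidence interval on the mean loss per incident, with and without a hypothetical control.

```python
# Bootstrap a confidence interval on mean loss per incident.
# Loss figures and the control's effect are entirely made up.
import numpy as np

rng = np.random.default_rng(7)
losses = np.array([12_000, 3_500, 80_000, 9_000, 22_000, 5_000])  # fictional $
reduction = 0.6  # assumed impact reduction from the control (hypothetical)

def bootstrap_ci(data, n=10_000, level=0.90):
    means = [rng.choice(data, size=len(data), replace=True).mean()
             for _ in range(n)]
    tail = (1 - level) / 2 * 100
    return np.percentile(means, [tail, 100 - tail])

print("mean loss/incident, 90% CI:", bootstrap_ci(losses))
print("with the control,   90% CI:", bootstrap_ci(losses * (1 - reduction)))
```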

 

Michael Roytman (@mroytman) needs no introduction. His talk brought together concepts that have been around for a while, coming from the likes of Schneier, Geer, Hutton, Ed Bellis, and others, in a discussion of the interplay between metrics, data, and automation. He clearly demonstrated how attackers are able to leverage automation in attacks much better than defenders can for defense. He also gave a great example of how better datasets can fundamentally change a whole ecosystem: Uber. By having better data about passenger demand (along with other things, of course), Uber has become the market-changing force we all know.

We all throw around ideas about ‘what is a good metric’ and ‘how can we better automate’. This talk helped a lot.

 

Allison Miller (@selenakyle) closed off the first day with a topic that is very near and dear to me: drawing concepts from economics into InfoSec and risk. I’m a huge fan of her work, and this was no exception. Following a quick look into how microeconomic topics such as utility maximization and utility curves work, she clearly demonstrated how, for a given expected value (mean), a posture of risk aversion manifests itself as the desire for smaller expected variance. She then explored possible linkages between InfoSec/risk and macroeconomic topics, including a great tie-in to the Lucas critique. She has previously mentioned the possible use of a ‘Security CPI’, but now called out the possibility of defining ‘security econometrics’. Very thought-provoking indeed.
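A quick worked toy example of the risk-aversion point (my own numbers, not hers): with a concave utility function, two sets of outcomes with the same expected value are not equivalent, and the lower-variance one yields higher expected utility.

```python
# Risk aversion via concave utility: same mean, different variance,
# different expected utility. Numbers are invented for illustration.
import numpy as np

u = np.log1p  # a simple concave utility curve; any concave shape works

steady = np.array([90, 100, 110])  # hypothetical outcomes, mean 100, low spread
swingy = np.array([10, 100, 190])  # same mean, much higher spread

for name, outcomes in [("steady", steady), ("swingy", swingy)]:
    print(f"{name}: mean={outcomes.mean():.0f}, "
          f"variance={outcomes.var():.0f}, "
          f"expected utility={u(outcomes).mean():.3f}")
```

A risk-averse decision-maker picks ‘steady’ even though the means are identical, which is exactly the smaller-variance preference Allison described.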

 

Day 2 post coming up soon…

 

NOTE: If this summary is at all interesting, know that SIRA recorded the event and that, if I understood it right, video will be made available to members (hint, hint, …) soon.
