O’Reilly Security Conference

Disclaimer: I was a speaker at the conference. As such, O’Reilly Media covered most of my travel expenses, as well as provided me with a Speaker pass. If you think such benefits, nice though they were, had a significant impact on my opinion, to me it just means we don’t know each other very well yet. Trust me when I say that they do NOT… Happy to discuss as needed…

TL;DR: The experience of being part of the inaugural O’Reilly Security Conference was amazing. The content I watched was excellent, the venue/logistics worked really well, and I really liked the “vibe” at the conference. 10/10!

[Conference photo. Source: O’Reilly Media – click for license details.]

This longish post is about my experience at the O’Reilly Security Conference. I summarize what I learned from each session I attended, as well as general opinions. I can’t think in prose, so this is mostly in list format. Without further ado:

Format and Venue

  • 4-track conference, held at the New York Hilton Midtown.
  • Pre-conference training and tutorials, an Ignite session, then 2 days with morning keynotes followed by morning and afternoon sessions.
  • Good breaks in between sessions (ranging from 15 minutes to 1 hour).
  • No idea on attendance, likely in the mid hundreds.

Tutorials and Ignite

I attended Jim Manico’s half-day tutorial on “Modern IdM” hoping to learn more about Web authentication and I was not disappointed. He covered OAuth in detail, as well as session management, and recommendations around password storage. He’s a very energetic and engaging speaker, and time flew by.

The afternoon was reserved for the Apache Drill tutorial led by Charles Givre, from Booz Allen Hamilton. Charles took us through the rationale for Apache Drill – basically a SQL-supporting unifying front-end for disparate back-end data stores – and led exercises on data manipulation. Drill can be a fantastic tool for a data scientist to easily get at disparate data sources.  I’m a SQL newbie and struggled with some of the exercises, but that is on me and not on the tutorial. He also based the exercises on a pre-configured VM that has other data science tools. This will come in very handy…

In the evening, Jerry Bell and Andrew Kalat hosted the Ignite talks (lightning fast talks with auto-advancing slides). Jerry and Andrew host the Defensive Security podcast, probably my favourite security podcast. It was a privilege to chat with them. The talks were interesting, ranging from the need to shy away from hero-focused security work, to how we can do better at training/education, to the use of existing intelligence/data sources. Great talks, easy-going format.

Karaoke…

Then there was… karaoke… For those that are not familiar, “slide karaoke” is a fun-filled/terrifying (depending on your point of view) format where someone is presented with random slides at a fixed time interval and they have to “improv” their way to a somewhat coherent talk. Andrew and Jerry asked for 5 volunteers… and I was one of them…

I don’t quite remember what all my slides were, but there were references to llamas, some sort of potato-based disease, and rule breaking. 🙂  I’m just hoping I made it entertaining for the audience…

Lesson learned: Courtney Nash is a master at this: she was funny, coherent, engaging, … She’s a very tough act to follow, which just happened to be my spot in the roster… You have been warned 🙂

Seriously, though: it was great fun, and I hope others join in. It was a great environment, people were having fun, and part of being in this industry is this sense of community that we build. It was a privilege to be able to take part in that.

Keynotes

On day 1, following the intro from Allison Miller and Courtney Nash, Heather Adkins from Google kicked things off by showing us how some of the main classes of security incidents – be they insecure defaults, massive theft, or instability – have been happening in different forms since the 1980s. After pointing to the increasing siloization of our industry as a possible cause, she urged us to think about broader platforms, and to design with a much longer timeframe in mind.

Richard Thieme took us through a sobering view of the psychological challenges in our career. Drawing parallels to the intelligence community and the challenges faced there, Richard rightfully reminded us to stay mindful of our needs as individuals and to build adequate support networks in our lives.

Becky Bace did a great job of comparing the challenges of infosec today with the early days of the auto industry, and how we can use some of the lessons learned there to improve it. Given my interest in economics and incentives, I was silently clapping pretty much all the time.

Unfortunately I missed most of the day 2 keynotes – I look forward to watching the videos later. What I did catch was the latter part of Cory Doctorow‘s impassioned and cogent plea for more involvement from us as individuals in the immensely important debate about the very nature of property and democracy in modern society. There are key discussions and precedent-setting court cases taking place now, and many of the key societal instruments we take for granted are at risk.

Day 1 Sessions

Speak Security and Enter. Jesse Irwin led a great session focused on how to better engage with users when it comes to discussing security and privacy. She laid out very well-defined steps for improving. If I could summarize her session in one idea, it would be: have more empathy for your user community – from using relatable examples, to framing the issue positively or negatively, and many other suggestions. Hearing her tell of the adventure of teaching security to 8-year-olds was priceless!

Notes from securing Android. Adrian Ludwig from the Google Android team took us through a data-driven journey into the Android security ecosystem. After reminding us that Android security must accommodate everything from $20 phones to modified units used by world leaders, Adrian focused on three aspects: active protections provided by the Google ecosystem, options available for enterprise decisions (such as whether to allow external app stores), and details about the Android OS itself. He made a very compelling case that the security architecture of a modern Android-powered device such as the Google Pixel rivals the other options in the mobile ecosystem (iOS, Windows Phone). This was one of the best talks I attended.

Groupthink. Laura Mather has had a very interesting career, including time at the NSA, eBay, founding SilverTail (where I had the pleasure of working for her), now leading Unitive. Her talk was not a ‘security’ talk, but rather a look into the issue of groupthink, often caused by unconscious biases. Fundamentally, the variety of challenges in the modern security environment should be met by having a diverse workforce generate ideas based on diverse points of view. To achieve this, we need to work on the lack of diversity. Laura pointed out specific ways to avoid unconscious bias in hiring, particularly being aware, as an interviewer/hiring manager, of the tendency to look for someone “just like me”. Hiring decisions should be matched on values, not on superfluous characteristics that lead to biased outcomes.

UX of Security Software. Audrey Crane leads a design firm, and made the case for proper UX design taking into account the people who will actually use the product. Her firm conducted research into usage habits related to SOC roles, and came up with a few personas (different from the typical ‘marketing’ personas) and then showed an interface design that takes those personas into account. Her recommendations are for vendors to take this aspect of the product creation process seriously, and for buyers of software to not only demand better software from a usability perspective, but to actively try out any software being purchased with the intended audience.

Social Scientist. Andrea Limbago brought a “social scientist” perspective to the broad issues around information security. She framed the discussion in terms of human elements, geopolitical trends, and data-driven security. The human elements section looked at a structure-agent dynamic (top-down versus behavioural) and advocated approaches to evolving the security subculture. Very interesting, as were the comments around security still operating within a cold-war framework, and the gap in the use of data within security conversations.

Day 2 Sessions

Are we out of the woods?. Kelly Harrington from the Google Chrome team talked about Web security issues. She covered some key issues – how updates are not universal, how older devices get attacked, and the scourge of what Google calls Unwanted Software – and delved into details about the exploit kits (Angler, Rig, and others), trends of attacks on routers, plus examples of malicious behaviour by Unwanted Software. She wrapped up by sharing a little about what Google’s Safe Browsing API does and by giving actionable advice on web security. This was a great talk to complement the one on Android security. Finally, extra points for her for the Jane Austen references… 🙂

Criminal Cost Modelling. Chris Baker – a data scientist at Dyn – took us through a whirlwind tour of some underground markets and the actual data he found there for pricing stolen goods, exploit kits, or DDoS services. It was refreshing to see someone dive beyond “oh, underground markets exist” into actual markets, prices, goods, and the possible economic issues that exist in those markets. I loved this session. If there was one session I wish could have been longer, it is this one. I’ll be watching the video when it comes out, many times over.

Economics of CyberSecurity. This session was delivered by yours truly. Happy to announce that slides are available here. I focused on how a brief understanding of economic concepts – Marginal Cost of Information Goods, Information Asymmetry, Externalities, and concepts from Behavioural Economics – can help us rethink some of the broad challenges we face. I hope the audience liked it. I was happy with my delivery and did pick up on a few things I want to improve. I really hope to have the opportunity to keep bringing this message to others.

No Single Answer. Nick Merker – now a lawyer but formerly an infosec professional – and Mark Stanislav – now a security officer with experience as security consultant – focused on cyber insurance. Their session went into the difference between first-party and third-party insurance, then delved into the details of what cyber insurance options exist, what they typically cover (or not), and how these products are currently priced and sold. They also covered some misconceptions around the role of insurance in a risk management program, how infosec should play a role when purchasing cyber insurance products, and how a well-defined and executed security program can help with insurance premiums. I learned a ton, and really liked the session.

Sponsors/Logistics/Others

The sponsor area was relatively small (maybe 10-15 sponsors total) but the people I spoke to were knowledgeable and the selection was varied. Not so much your typical security vendor, but more those offering solutions that fit into a more modern architecture view of security. There were options for web app security, container security, source code security, etc… I did not focus much on it, given my role as individual contributor.

The conference schedule and details were available via the O’Reilly app (iOS and Android) and things worked well. One suggestion I have is that the app could offer a toggle for ‘hide past events’ on the Full Schedule view, as that would help people choose their next sessions without having to scroll around so much…

Food options during the breaks were varied and quite nice. I loved that we had sushi available on one of the food stations.

As a Speaker

My “field report” would not be complete without a comment about my experience proposing the talk and later as a speaker.

The submission process was well defined, the guidelines for what should go in the submission were clear, and the timelines were very fair. I followed the process via the website and the questions I asked the speaker management team were answered promptly and efficiently. Major thanks to Audra Montenegro (no relation) and her team.

The organizing committee has been very transparent about what their side of the selection process was like. This is tremendously insightful and helpful for future proposals. I particularly liked the use of blind reviews. Blind reviews help us as an industry increase the quality of the content that makes it onto the stage, AND increase the chance of hearing from a more diverse pool of contributors. What’s not to like?

Prior to the event, I was able to connect with Courtney Allen and we collaborated on a short email-based interview (which you can find here). She was fantastic to work with and has a keen insight into the role that O’Reilly Media can play in the security landscape.

Bottom line is: If you have defensive-focused security content you want to present, you’re open to being evaluated on the merits of your content, and want to work with great people putting it together, O’Reilly Security should definitely be on your short list of conferences to submit to.


MSSP Blues and the Theory of Agency

Introduction

I like the approach of listening to a good podcast and then using it to expand on a particular idea. This time, I listened to Brakeing Down Security’s fantastic episode where they discussed the fallout from a very rocky response to a security incident by an unnamed Managed Security Services Provider (MSSP). Bryan Brake talked to Nick Selby and Kevin Johnson, based on Nick’s original blog post. Please read the original post and listen to the podcast, but here is the summary:
  • Nick helped an unnamed customer respond to a security incident.
  • This customer had a long-standing contract with an MSSP for monitoring their network, which included having dedicated gear on-site.
  • When Nick & customer got the MSSP involved, they had a number of nasty surprises:
    • The monitoring gear on-site was not working as expected, and had actually not worked for a long time.
    • The customer-facing employees at the MSSP were not only not helpful but almost evasive. Bailing out on phone calls, not giving straight answers, …
    • The actual value the customer was getting from the MSSP was far less than what they imagined, and was not useful during the incident response.

In short, a series of horrible findings and interactions. Bryan, Nick, and Kevin make a TON of excellent points on the podcast. Worth the listen.

This whole incident reminded me of a topic I’d been meaning to write about…

 

“Agents” have “Principals”, but do they have “Principles”?

How do you feel about hiring someone to do something for you? Maybe it’s an employee you bring in to your company, maybe it’s a mechanic you hire to look at your car, maybe it’s a lawyer you call on to help you with a contract negotiation.

This is a very common economic transaction. When looking at it, we often use specific terminology: those doing the hiring are ‘principals’ while those being hired are ‘agents’.

In an ideal scenario, the person or company you hire (the ‘agent’) has their interests met by the compensation they receive, and performs their tasks in a way that meets your interests (you’re the ‘principal’). In all those cases – and pretty much any relationship like them – there’s always a potentially thorny issue: despite being compensated for their efforts, are those ‘agents’ acting in a way that is aligned with the ‘principal’s’ interests? What happens when interests don’t align? This happens all the time:
  • Is a mechanic over-estimating the effort to fix a car?
  • Is the lawyer extending the negotiation because they bill by the hour?

Say hello to the “Principal-Agent problem”, a well-known problem in economics (and political science). It is also known by other names, such as the “theory of agency” or the “agency dilemma”. Fundamentally, it is the study of the dynamics between principals and agents with distinct self-interests in scenarios where there is significant information asymmetry.

Information asymmetry, you may recall, is the situation in which one of the parties in an economic transaction has much more material knowledge about it than the other. There are further nuances depending on whether the information asymmetry exists before a contract is established – the agent has superior information from the get-go, often called adverse selection – or develops post-contract – as the agent begins to work, the discrepancy emerges, often called moral hazard. These lead to slightly different solutions.

[Principal-agent relationship diagram. Source: Wikipedia]

Another common example of Principal-Agent problems is the conflict between a company’s shareholders – who have limited information about how it is run – and the company’s management. Depending on how that management team is compensated, they may make decisions that are not in the shareholders’ interest: boosting the stock price with accounting tricks, for example.

Both economics and politics have identified a series of mechanisms to help address Principal-Agent issues, but they fundamentally come down to a combination of:
  • Contract design – how compensation is dispensed (deferred), fixed versus variable, profit sharing, etc…
  • Performance evaluation – both objective and subjective
  • Reducing the information asymmetry – having more information to make informed decisions
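The contract-design mechanism can be sketched with a toy model (entirely illustrative: the numbers, the effort-to-success function, and the contract terms below are all hypothetical). The idea is that when the principal cannot observe effort, only a noisy outcome, a success-linked bonus can make the agent’s self-interested choice line up with the principal’s:

```python
# Toy principal-agent model (illustrative only). The agent picks an effort
# level; the principal cannot observe effort, only whether the work succeeds.
# We compare a flat-fee contract with a performance-linked one.

EFFORT_LEVELS = [0.0, 0.5, 1.0]   # hypothetical effort choices
EFFORT_COST = 40                   # agent's cost per unit of effort

def success_probability(effort):
    """More effort -> higher chance of a good outcome (toy assumption)."""
    return 0.2 + 0.7 * effort

def agent_payoff(effort, fixed_pay, bonus):
    """Agent's expected payoff: base pay, plus a success-contingent bonus,
    minus the cost of the effort expended."""
    return fixed_pay + success_probability(effort) * bonus - EFFORT_COST * effort

def best_effort(fixed_pay, bonus):
    """The effort level a purely self-interested agent would choose."""
    return max(EFFORT_LEVELS, key=lambda e: agent_payoff(e, fixed_pay, bonus))

# Flat fee of 60, no bonus: effort only costs the agent, so shirking wins.
print(best_effort(fixed_pay=60, bonus=0))    # -> 0.0

# Lower base of 20, bonus of 80 on success: effort now pays for itself.
print(best_effort(fixed_pay=20, bonus=80))   # -> 1.0
```

Under the flat fee the agent’s payoff falls with every unit of effort, so zero effort is rational; shifting part of the compensation into a success-linked bonus changes which effort level maximizes the agent’s own payoff, without the principal ever having to observe effort directly.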

 

Back to the MSSP debacle…

Now that we have the notion of the Principal-Agent problem fresh in our minds, looking at the unfortunate MSSP incident we see the clear issues caused by the agency dilemma: there are indications that the MSSP did not perform its tasks with the customer’s interests in mind. That is very unfortunate, and well deserving of the criticism it got…

Still, let’s look a bit deeper into the whole thing. As we do, we see there’s plenty of potential blame to go around (again, I suggest reading Nick’s blog for deeper background):
  • First of all, did the original security team at the customer that chose the MSSP do so with the organization’s best interest in mind? Were they trying to actually implement a proper monitoring solution or were they just trying to check off a ‘have you contracted with a managed security vendor for monitoring?’ item from some compliance checklist?
  • There was plenty of blame for the MSSP for not following up on a poorly deployed solution, but what about the customer side? Why was there no oversight?
  • When the new security team started at the customer, what level of diligence was done on taking on a new infrastructure?
  • Did the management team at the MSSP care that a particular customer was not deployed properly? Did the team/individuals that created the on-boarding run-books for new customers care? Was the implementation team on the MSSP side properly measured on how it handled on-boardings?
  • During the initial calls, were the employees of the MSSP acting in their own self-interest of “just get this customer off my back”? Were they empowered to do something but chose not to?
  • Back to MSSP management: did they structure internal operations to empower their employees to handle the exceptions and urgent requests?
One minor point on which I differ from Bryan, Nick, and Kevin in their well-deserved roasting of the MSSP is that they seem to assume that the individuals at the MSSP had lots of freedom to deviate from the established procedures. I’m not so sure: it’s one thing for senior, knowledgeable professionals to do so, but it may be radically different for others. Again, what did the MSSP empower their team to do?

I’m being overly picky here to drive home the point that there’s potential for agency issues at multiple levels of the event chain, both within each organization (customer and MSSP) and between them. There can be agency issues between employees and employers, as well as between separate commercial entities.

 

The broader impact

The point of this post is broader than the MSSP debacle. By the very nature of our industry, it is extremely easy for Principal-Agent issues to appear:
  • There is tremendous information asymmetry in InfoSec to begin with: there are too many details that can go wrong, things change too fast, there are too many moving parts, etc… Those who hire us are often not aware of what we do.
  • We have tendencies to compartmentalize information about security itself (“sorry, we can’t talk about this”). This leads to further information asymmetry.
  • With “security” being a latent construct – it is difficult/expensive to observe/measure – our principals have a hard time measuring the effectiveness of security efforts.
  • With the difficulty & cost in hiring for security – be it employees, contractors, or businesses – there is less flexibility and interest in exploring details of contract design.
How do we – as an industry – get better? How do we deal with this? I think it comes down to:
  • First, we need to be aware of the issue and recognize it for what it is: a well-defined economic problem for which there are broad classes of solutions.
  • Then, we should recognize our roles within the transaction:
    • Sometimes as a buyer – hiring outsourcers, buying security solutions.
    • Sometimes as a seller – employee/contractor providing security services/expertise to someone, or selling a security solution/service.
  • Finally, within our roles, we should expand beyond the technical nuance – networks, encryption, appsec, etc… – and delve into:
    • clearly defining and delivering reporting
    • paying more attention to contract design and service-level definitions
    • performing periodic evaluations of the services
    • anticipating where principal-agent issues might arise and addressing them early on. Maybe it is creating a better report, maybe it is having a lunch & learn on the solution, etc…
  • Lastly, we should continue to grow as a community by sharing information – blogs, podcasts, conferences, … All of that helps reduce the underlying information asymmetry.
On that final point, I salute Bryan, Nick, and Kevin for their excellent podcast episode, and all the other community participants from whom we all learn so much…

If I had to summarize things:
  • Know what you’re buying. Educate yourself as needed.
  • Know what you’re selling and help your customer understand it as well.
As with so many other things, it’s not only an InfoSec issue, it’s an economic one…

On Plane Hacking… We’re missing a point here.

[ Target Audience: Our InfoSec industry… ]

Yes, I know we have been inundated with the discussion on Chris Roberts’ saga. Feel free to read the thousands (closing in on a million as I write this) of links about the whole situation to catch up. FBI affidavits, ISS, “sideways” (yaw?), NASA, … I particularly liked Violet Blue’s summary of the recent history of airplane security. Another very interesting post is Wendy Nather’s, here.

Yet I think a critical point is being lost in the debate of whether he was able to do what he did or not.

I don’t care whether he was actually able to interfere with avionics. Being uninformed about it, I prefer the heuristic of believing the aviation experts that have, in great numbers, called out ‘B.S.’ on the claims.

What I *do* care about is the alleged pattern of behaviour of trying this with disregard for the possible consequences to the public.

I am NOT against security research, holding those responsible to task, responsible disclosure, “sunlight is the best disinfectant”, … I think all those doing responsible research on car hacking, medical devices, avionics (read Violet Blue’s excellent summary), etc… deserve our gratitude and support.

What I AM strongly against is the apparent complete disregard for the well-being of fellow passengers. It is alleged that this was done ’15 or 20 times’ on several flights. I don’t know if the flights were mid-air or not. I don’t know if anyone noticed, or should have noticed. What I do know is that the consequences of those security tests could have affected innocent bystanders. That is NOT ok.

“Oh, but if he couldn’t really affect the plane, it’s ok, right?” NO, it is not. What if there were adverse consequences? What if the pilots noticed something and – being safety conscious – decided to divert flights?

Some might say – “ok, that is the price we must pay for security”. It was not his call to make, was it?

As an industry, we can’t carry around this sense of entitlement and expect to be seen in a good light by the broader public.

He has apparently shown poor judgement on other occasions – talking to the FBI on his own without legal representation is another example – but that just affects him.

That being said, I echo Bill Brenner’s sentiment on moving forward. I’ve never met Chris (hope to one day) and I wish that he is able to learn from this debacle and grow as a professional.

For the broader industry, let’s look at the mess this has created and learn a few lessons too…