MSSP Blues and the Theory of Agency


I like the approach of listening to a good podcast and then using it to expand on a particular idea. This time, I listened to Brakeing Down Security’s fantastic episode where they discussed the fallout from a very rocky response to a security incident by an unnamed Managed Security Services Provider (MSSP). Bryan Brake talked to Nick Selby and Kevin Johnson, based on Nick’s original blog post. Please read the original post and listen to the podcast, but here is the summary:
  • Nick helped an unnamed customer respond to a security incident.
  • This customer had a long-standing contract with an MSSP for monitoring their network, which included having dedicated gear on-site.
  • When Nick & customer got the MSSP involved, they had a number of nasty surprises:
    • The monitoring gear on-site was not working as expected, and had actually not worked for a long time.
    • The customer-facing employees at the MSSP were not only unhelpful but almost evasive: bailing out on phone calls, not giving straight answers, …
    • The actual value the customer was getting from the MSSP was far less than what they imagined, and was not useful during the incident response.

In short, a string of horrible news and interactions. Bryan, Nick, and Kevin make a TON of excellent points on the podcast. Worth the listen.

This whole incident reminded me of a topic I’d been meaning to write about…


“Agents” have “Principals”, but do they have “Principles”?

How do you feel about hiring someone to do something for you? Maybe it’s an employee you bring into your company, maybe it’s a mechanic you hire to look at your car, maybe it’s a lawyer you call on to help you with a contract negotiation.

This is a very common economic transaction. When looking at it, we often use specific terminology: those doing the hiring are ‘principals’ while those being hired are ‘agents’.

In an ideal scenario, the person/company you hire (the ‘agent’) has their interests met by the compensation they’re receiving, and will perform their tasks in a way that meets your interests (you’re the ‘principal’). In all those cases – and pretty much any relationship like them – there’s always a potentially thorny issue: despite being compensated for their efforts, are those ‘agents’ acting in a way that is aligned with the ‘principal’s’ interests? What happens when interests don’t align? This happens all the time:
  • Is a mechanic over-estimating the effort to fix a car?
  • Is the lawyer extending the negotiation because they bill by the hour?

Say hello to the “Principal-Agent problem”, a well-known problem in economics (and political science). It is also known by other terms, such as the “theory of agency” or the “agency dilemma”. Fundamentally, it is the study of the dynamics between principals and agents with distinct self-interests in a scenario where there is significant information asymmetry.

Information asymmetry, you may recall, is the situation in which one of the parties in an economic transaction has much more material knowledge about it than the other. There are further nuances depending on whether the information asymmetry exists before a contract is established – the agent has superior information to the principal from the get-go – or develops post-contract – as the agent begins to work, they realize the discrepancy. These lead to slightly different solutions.

[Figure: the principal-agent relationship (source: Wikipedia)]

Another common example of Principal-Agent problems is the conflict between a company’s shareholders – who have limited information about how it is run – and the company management. Depending on how that management team is compensated, they may make decisions that are not in the shareholders’ interest: boosting the stock price by playing accounting tricks, for example.

Both economics and politics have identified a series of mechanisms to help address Principal-Agent issues, but they fundamentally come down to a combination of:
  • Contract design – how compensation is structured: deferred versus immediate, fixed versus variable, profit sharing, etc…
  • Performance evaluation – both objective and subjective
  • Reducing the information asymmetry – having more information to make informed decisions


Back to the MSSP debacle…
Now that we have this notion of Principal-Agent fresh in our minds, looking into the unfortunate MSSP incident we see the clear issues caused by the agency dilemma: there’s every indication that the MSSP did not perform their tasks with the interests of the customer in mind. That is very unfortunate, and the criticism they received is well deserved…

Still, let’s look a bit deeper into the whole thing. As we do, we see there’s plenty of potential blame to go around (again, I suggest reading Nick’s blog for deeper background):
  • First of all, did the original security team at the customer that chose the MSSP do so with the organization’s best interest in mind? Were they trying to actually implement a proper monitoring solution or were they just trying to check off a ‘have you contracted with a managed security vendor for monitoring?’ item from some compliance checklist?
  • There was plenty of blame for the MSSP not following up on a poorly deployed solution, but what about on the customer side? Why was there no oversight?
  • When the new security team started at the customer, what level of diligence was done on taking on a new infrastructure?
  • Did the management team at the MSSP care that a particular customer was not deployed properly? Did the team/individuals that created the on-boarding run-books for new customers care? Was the implementation team on the MSSP side properly measured on how it handled on-boarding?
  • During the initial calls, were the employees of the MSSP acting in their own self-interest of “just get this customer off my back”? Were they empowered to do something but chose not to?
  • Back to MSSP management: did they structure internal operations to empower their employees to handle the exceptions and urgent requests?
One minor point where I differ from Bryan, Nick, and Kevin in their well-deserved roasting of the MSSP is that they seem to assume that the individuals at the MSSP had lots of freedom to deviate from the established procedures. I’m not so sure: it’s one thing for senior, knowledgeable professionals to do so, but it may be radically different for others. Again, what did the MSSP empower their team to do?

I’m being overly picky here to drive home the point that there’s potential for agency issues at multiple levels of the event chain, both within each organization (customer and MSSP) and between them. There can be agency issues between employees and employers, as well as between separate commercial entities.


The broader impact

The point for this post is broader than the MSSP debacle. By the very nature of our industry, it is extremely easy for Principal-Agent issues to appear:
  • There is tremendous information asymmetry in InfoSec to begin with: there are too many details that can go wrong, things change too fast, there are too many moving parts, etc… Those who hire us are often not aware of what we do.
  • We have tendencies to compartmentalize information about security itself (“sorry, we can’t talk about this”). This leads to further information asymmetry.
  • With “security” being a latent construct – it is difficult/expensive to observe/measure – our principals have a hard time measuring the effectiveness of security efforts.
  • With the difficulty & cost in hiring for security – be it employees, contractors, or businesses – there is less flexibility and interest in exploring details of contract design.
How do we – as an industry – get better? How do we deal with this? I think it comes down to:
  • First, we need to be aware of the issue and recognize it for what it is: a well-defined economic problem for which there are broad classes of solutions.
  • Then, we should recognize our roles within the transaction:
    • Sometimes as a buyer – hiring outsourcers, buying security solutions.
    • Sometimes as a seller – employee/contractor providing security services/expertise to someone, or selling a security solution/service.
  • Finally, within our roles, we should expand beyond the technical nuance – networks, encryption, appsec, etc… – and delve into:
    • clearly defining and delivering reporting
    • paying more attention to contract design and service level definitions
    • performing periodic evaluations of the services
    • anticipating where principal-agent issues might arise and addressing them early on. Maybe it is creating a better report, maybe it is having a lunch&learn on the solution, etc…
  • Lastly, we should continue to grow as a community by sharing information – blogs, podcasts, conferences, … All that helps to reduce the underlying information asymmetry.
On that final point, I salute Bryan, Nick, and Kevin for their excellent podcast episode, and all the other community participants from whom we all learn so much…

If I had to summarize things:
  • Know what you’re buying. Educate yourself as needed.
  • Know what you’re selling and help your customer understand it as well.
As with so many other things, it’s not only an InfoSec issue, it’s an economic one…

On the economics of ransomware

We blinked, and the world changed on us.

This [long] post is not meant as doom&gloom on the scourge of ransomware, but rather a look at some basic economic aspects of this type of attack, and some recommendations for the future.

So far, 2016 is definitely the year of ransomware. Every vendor is talking about it in their pitches, the media is all over it (good articles here and here), etc. This blog just adds to that cacophony, though hopefully adding a different perspective.

“Prior Art”: Lots of people are now talking about ransomware, and I’m sure many have in the past too. I’d be remiss if I didn’t point out that Anup Ghosh of Invincea wrote a scarily prophetic blog post on this back in July of 2014! Check it out here. Also, I liked Daniel Miessler’s piece here.

Note: as I discuss these topics, I may sound insensitive to the plight of the victims. It’s absolutely not that: I think ransomware is a scourge that should be eradicated, and that we should bring the full force of law enforcement to bear, but I’m pessimistic that it can be done.

There are several aspects of ransomware that make it interesting from an economic angle. Let’s explore some of them.

The “Taming” of Externality

First and foremost, to me, ransomware is the first major, widescale threat that significantly reduces the inherent externality of poor security practices. What does that mean?

Up until now, poor security practices by end users resulted in relatively light consequences for the users themselves. In some cases, being used as a spam relay might not even have been noticeable; at worst, there was the rare circumstance where malware resulted in having to reformat one’s PC. Yes, annoying and potentially painful, but manageable. From a behavioral economics perspective, biases such as mental accounting made it even less painful.

The broader costs of that infection – spam being generated, systems that had to be wiped, etc… – were largely invisible to the user. In market economics terms, all these costs were externalities. This means that the agent in the transaction – the user – was not taking those costs into consideration when making their choice – in this case, the poor security practices that led to an infection.

Enter ransomware. Now, the user is faced with the painful choice of paying the ransom – actual monies being stolen – or facing the imminent destruction of their data. Worse, depending on how that strain of ransomware behaves, it may have infected network drives and potentially backups as well. This triggers several well-known behavioural quirks/biases, including:
  • The salience of paying. It’s pretty clear that there is money being lost, and it’s your money (or your organization’s).
  • The immediacy of the request. It’s not something that can be postponed. Criminals know this, and exploit it: in many cases, ransoms increase as time passes.
  • Loss aversion. From Kahneman and Tversky’s work, we know the tendency of people to be loss averse.

All of this is, naturally, horrible for the user. From an economic perspective, though, it is interesting that this, in a way, “reduces” the externality of a poor security choice. The user now knows full well that their poor choice/practice may result in a non-negligible cost. [Edit: as someone put it in feedback to me, just another way of saying “the chickens come home to roost”.] They’re understandably concerned, and rightly so. I don’t see this diminishing soon.

“To Pay or Not To Pay, that is the Question”

The second interesting point is analyzing the dilemma of deciding to pay the ransom or not. Even law enforcement seems ambivalent: recent advice has included both “pay” and “don’t pay”.

There’s two things to look at:
  • First, from a societal perspective, the issue is similar to the Tragedy of the Commons, a well-known economic problem. In the traditional Tragedy of the Commons, individuals overconsume a shared resource, leading to depletion. Ransomware is not quite the same: to me, it is closer to the “Unscrupulous Diner’s Dilemma”, a variation of the more traditional Prisoner’s Dilemma in which a group ends up paying more even though every member would prefer otherwise. In the case of ransomware, the individual decision to pay negatively affects the community by supplying the criminals with additional funds: it rewards them for the crime and lets them reinvest in future capabilities for their tools, thus costing everyone more in the future.
  • Individually, people and organizations should recognize that the rational economic decision is not simply “is the cost of paying the ransom less than the loss associated with losing the data?”. The decision should be based on that cost, sure, but must also take into account:
    • Is that the end of it? Will paying the ransom this one time be an exception? In most cases, hardly… As ransomware proliferates, different gangs will keep attacking.
    • Will paying the ransom even get the data back in the first place? As @selenakyle nicely pointed out recently, there’s little recourse if things go wrong…
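To make that point concrete, the “pay or not pay” calculus can be sketched as a toy expected-cost comparison. This is purely an illustration: every function name, probability, and dollar amount below is a made-up assumption, not data from any real incident.

```python
# Toy expected-cost sketch of the "pay or not pay" decision.
# All names, probabilities, and amounts are illustrative assumptions.

def expected_cost_of_paying(ransom, p_recovery, data_value,
                            p_repeat, future_ransom):
    """Paying costs the ransom itself, plus the residual risk that the
    data still isn't recovered, plus the chance of being hit again."""
    return ransom + (1 - p_recovery) * data_value + p_repeat * future_ransom

def expected_cost_of_not_paying(data_value, p_restore):
    """Refusing costs whatever fraction of the data backups can't restore."""
    return (1 - p_restore) * data_value

# Illustrative numbers only: a $500 ransom, 70% chance the criminals
# actually decrypt, 30% chance of a repeat attack, 90% backup coverage.
pay = expected_cost_of_paying(500, 0.7, 10_000, 0.3, 1_000)
no_pay = expected_cost_of_not_paying(10_000, 0.9)
print(f"expected cost if paying: {pay:.0f}, if not paying: {no_pay:.0f}")
```

Under these made-up numbers, refusing is cheaper precisely because of good backups; drop the restore probability to 0.5 and the comparison flips, which is the externality point again: individual backup hygiene changes the incentive to fund the criminals.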

At the end of the day, we’re back to externalities:
  • Those recommending “don’t pay” don’t bear the cost of the advice: lost data, etc…
  • Those choosing to “pay” don’t [immediately] bear the indirect cost of enabling the criminals to continue their efforts.

A more realistic approach to handling of ransomware should keep these in mind.

“Thy ransomware’s not accidental, but a trade.”

There seems to be consensus that what has enabled the rise of ransomware is, among other things, the maturity of bitcoin. That was the point clearly made by @jeremiahg and @DanielMiessler (here). I agree: bitcoin seems to have tipped it, along with other changes to the overall ecosystem that appear to have made ransomware a more viable attack.

Like legitimate businesses, criminals have explored ‘efficiencies’ in their supply chain. As the main example: bitcoin (the peer-to-peer payment network, not the currency itself) has removed significant “friction” from the system. Whereas before the cashout scheme might involve several steps – all of which incurred fees for the criminal – the ubiquity of bitcoin has made the cashout process faster and cheaper. Taking out the middlemen, if you will.

Regarding bitcoin specifically, there’s a couple of interesting points:
  • More than being anonymous “enough”, bitcoin is a reliable and fast payment system. Even though it doesn’t provide full anonymity – the transactions on the blockchain can be traced to wallets – bitcoin is sufficiently opaque that the tradeoff of limited traceability for ubiquity/speed made it the currency/payment system of choice.
  • This leaves an interesting question about the bitcoin exchanges: can we expect the exchanges to work against their own self-interest by restricting these transactions? What sort of defensive approach can we expect the exchanges to take? The danger of people equating bitcoin with ransomware is real, and the industry is right to defend itself.

All in all, from looking at the underground ecosystem, it looks like ransomware is a ‘killer app’: profitable, easy to use, etc…

“Much Ado About Nothing”? Maybe…

Finally, ransomware seems to have exploded into our collective attention, but is it really such an epidemic? While we deal with the onslaught of news/articles/posts about ransomware (including, of course, this post…), let’s recognize that there is very little incentive to “underreport” ransomware infections. To wit:
  • InfoSec vendors can point to ransomware as the new ‘boogeyman’ that every organization should spend more resources to protect against.
  • Internally within organizations, like with “Y2K”, “SOX”, and “PCI” before it, we can now possibly start to see “ransomware” as the shibboleth that enables projects to be funded.
  • Media sites latch on to the stories, knowing the topic draws attention. As an example, a lot has been made of the incident where a Canadian university opted to pay $20,000 CAD. Would there have been such bombastic coverage if the cause of the loss was, say, a ruptured water main caused by operator error? Not likely…

I can’t help but wonder if this is not a manifestation of a couple of things:
  • one, a variation of the Principal-Agent problem: an economic transaction where there is an expectation that an agent will act on behalf of a principal, but the agent instead acts for their own benefit. In this case, bolstering the issue of ransomware above and beyond other relevant topics.
  • two, just your garden-variety ‘availability bias’ from behavioural economics, where the ease with which we recall something inflates its perceived rate of occurrence.

In either case, we can take a peek at the well-known Verizon Data Breach Investigations Report (DBIR). What do we see? Verizon’s DBIR shows that ransomware, even as a subset of crimeware, is not as prevalent as other attacks. See figure 21 on page 24 of the 2016 report.

“’Advice’ once more, dear friends, advice once more”

Wrapping up, then. There is a fantastic paper by Cormac Herley, from Microsoft Research – So Long, and No Thanks for the Externalities – in which he discusses how users ignoring security advice can be the rational economic decision, when taking into account the costs of acting on some security advice. The paper is from 2009 and is still extremely relevant. I consider it mandatory reading for any security professional.

Taking that into account, how should we frame security advice about ransomware?
(One could argue that ransomware is exactly the kind of change in cost that invalidates the paper’s conclusions. Might be an interesting avenue to pursue…)

At least to me, too much of the security advice we see about ransomware is not taking into account the aggregate cost of acting on such advice.

Ultimately, the protection methods have to be feasible to be implemented. With that in mind, here’s a few recommendations.

For individuals:
  • Be aware of your own limitations and biases as you interact online. To the extent that it is possible, incorporate safe practices.
  • Leverage the automated protections you have available – modern browsers have sophisticated security features, your email provider spends a ton of resources to identify malicious content, etc…
  • Devise and implement a backup system that fits your comfort level, balancing the frequency of backups with their associated hassle.
  • Periodically check and possibly reduce your exposure by moving content to off-line or read-only storage. Just like you wouldn’t walk around at night in a risky neighbourhood with your life savings in your pockets, make it a practice of limiting how much data is exposed.
  • If infected, don’t panic. Keep calm and, if you choose to do so, act promptly to avoid the increases in demands.

For corporate users, similar advice applies, boiling down to “don’t base your security architecture on the presumption that users are infallible at detecting and reacting to security threats”. Back it up with technology. On a tactical level, a few extra things come to mind:

  • Verify that current perimeter- and endpoint-based scanning of executables/attachments is able to identify/catch current strains of malware (ask your vendor, then check to make sure). It might be a sandbox approach, endpoint agents, gateway scanning, whatever. Belt & suspenders is a good approach, albeit costly.

  • Consider application-level monitoring for system calls on the endpoints. This includes watching for known extensions, as well as suspicious bulk changes to files.
  • Consider monitoring data-center activity for potential events of bulk file changes such as encryption. Yes, there may be false positives.

  • Revisit the practices that allow users to mount multiple network shares.
  • Make sure the Incident Response playbooks include ransomware as a specific scenario. Prepare for a single-machine infection, multiple machines hit, as well as a scenario where both local and networked files are encrypted. While I’m skeptical of survey data such as this, getting familiar with how bitcoin transactions work might be a worthwhile investment.
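To illustrate the “watch for known extensions and suspicious bulk changes” idea above, here is a minimal sketch of a periodic directory scan. The extension list and threshold are made-up assumptions; real endpoint monitoring would hook file-system events via vendor tooling rather than walking directories like this.

```python
import os

# Illustrative extension list and threshold - assumptions, not a real IOC feed.
SUSPICIOUS_EXTENSIONS = {".locky", ".crypt", ".encrypted", ".locked"}
BULK_THRESHOLD = 10  # how many suspicious files in one scan counts as "bulk"

def scan_for_suspicious_files(root):
    """Walk a directory tree and collect files whose extensions match
    known ransomware-style patterns."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            if ext in SUSPICIOUS_EXTENSIONS:
                hits.append(os.path.join(dirpath, name))
    return hits

def is_bulk_change(hits, threshold=BULK_THRESHOLD):
    """Crude 'bulk change' signal: many suspicious files seen in a single pass."""
    return len(hits) >= threshold
```

A production version would also diff successive scans to catch rename storms and alert on the rate of change, not just totals; this sketch only shows the shape of the check.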

To me, ransomware is here to stay: it leverages too many human and economic aspects to simply vanish. As with many other “security” issues, this is just another one that was never just a technology problem, but a social and economic one. InfoSec professionals should keep that in mind, remembering that the solutions are not always technical…

And to purveyors and enablers of ransomware, “a plague on your houses!”

Professional Certifications & Information Asymmetry

This is a topic I’ve been meaning to write about for a while. I’d love to receive feedback on it: please, let me know your thoughts… (It got a little long, so bear with me.)

One of the most debated topics on the professional gatherings I attend, be they physical (conferences, meetups, …) or virtual (Twitter, LinkedIn, …), is professional certifications. CISSP, SANS, CCIE, CCxP, Microsoft, VMware, … you name it, there’s discussions about it. Do any of these sound familiar?
  • “Should I get <insert cert name>?”
  • “Is <insert cert name> a good cert to have?”
  • “Why does HR insist on having <insert cert name> as requirement even though I know WAY more than that?”
  • “Wondering if I should keep my <insert cert name> or let it lapse”
  • “What do I need to do to pass <insert cert name>? Any brain dumps? ;-)”

Please note: For many of the points below, one can almost replace “certification” with “degree”. The discussion of whether or not to get a degree – College, Bachelors, Post-Graduate, … – is, in my opinion, deeper than the certification one, with much more significant implications. Let’s treat that one separately, shall we? Baby steps…

I think looking at this issue from the perspective of information economics helps us tremendously, particularly the notion of Information Asymmetry.

In any economic transaction, information asymmetry is the notion that the parties in a transaction have different information given their roles, and that each will alter their behavior to maximize their own utility. As a buyer, you may not know as much about the quality of the product you’re buying as the seller does. However, as a seller, you don’t know how much the buyer is willing to pay for the goods you’re selling, or even whether they can actually pay for them.

This is no judgement on either party, but an inherent characteristic of the economic transaction itself: only you know how badly you want a particular car, just as the previous owner of the car knows how well it’s been taken care of over the years.

There are two key mechanisms – signaling and screening – that can be used to reduce information asymmetry:
  • The ‘over-informed’ party can SIGNAL to the under-informed party by presenting information that attempts to resolve the asymmetry. Examples: “this is a ‘certified pre-owned’ car” or “here is my latest pay stub to show that I’m good for credit”.
  • The ‘under-informed’ party can SCREEN the over-informed party by asking for information or offering choices that force the other to reveal that information. Examples: “give me three references from your career”, “show me your insurance policy against errors & omissions”.

It is also important to recognize that there is a cost associated with both signaling and screening, and that this cost can be a signal in its own right. A signal that is expensive to generate might be interpreted as a stronger signal of commitment, and a complicated screening process might indicate the importance of the decision, and therefore the value of whatever is being bought.

The study of information asymmetry has been worthy of Nobel prizes – George Akerlof, Joseph Stiglitz, and Michael Spence shared the 2001 Economics prize on this topic. At the risk of sounding geeky, I think this is truly fascinating stuff…

With this in mind, we can shift the discussion on professional certifications, treating them as a potential means of resolving information asymmetry. They can be used both as SIGNAL and SCREEN mechanisms:
  • “Here is my <insert cert name>” signals that you [possibly] have the skills/knowledge/experience associated with that cert.
  • “This position requires <insert cert name>” is a screening mechanism meant to easily (from the point of view of the recruiter) winnow out candidates that have a low likelihood of having the necessary skills/knowledge/experience. It forces candidates to demonstrate at least some commitment to that area.
This is by no means a perfect solution; several flaws can appear if one relies on certifications alone:
  • the content of a certification may not be relevant to the true skills/knowledge/experience required, but may still be considered adequate or even required.
  • the certification process may be broken and allow those without the skills/knowledge/experience required to still obtain the certification.
  • the cost of obtaining the certification may become an impediment and artificially screen out candidates that would otherwise be suitable.
  • and so on…

Nevertheless, they are useful heuristics to be applied to the true problem at hand: reducing information asymmetry. If we focus on that, we can provide better advice. Let’s try to put that to practical use…

“Should I get <insert cert name>?”
This is the most common question, and one that has to be unpacked. “WHY” do you want to have the certification? It’ll likely boil down to one of these reasons:
  • The certification is part of a formal gate in a process: be it a promotion, formal tender, partner requirements, etc… In this case it’s pretty simple: if you [often] find yourself in that formal process and you want to continue, get the certification.
  • The certification is to be used as an informal roadmap for learning. I do this often (see disclosure below). In that case, ask yourself: how high is the marginal cost of actually obtaining the certification after your studying is done? If you look at the cert as a roadmap, study a lot, and then just need a simple exam afterwards, it may be worth actually getting it. If, on the other hand, the preparation for the actual certification is arduous and/or the exam is expensive (CCIE/CCDE, VCIX/VCDX, SANS GSE come to mind), then you may choose to skip it.
  • The certification “will help in getting something (job, position)” but is not formally required. This is where the “information asymmetry” shows up and you can reframe the question as “can I resolve the information asymmetry in another way?”. If you’re a professional hoping to break into a new field (regardless of this being your first job or just a career transition), a certification may help. If, on the other hand, you have a meaningful alternative – maybe recommendations, a portfolio, blog posts, professional reputation, … – then that certification may not be necessary.
I think this last point is key. Too often we see two problems:
  • Those that think the certification is “necessary & sufficient” for a role, when in fact recruiters look at the cert as “just a signal”. Unfortunately, those candidates are often vulnerable to aggressive and potentially misleading advertising from those offering certifications or prep courses.
  • Those unceremoniously dismissing the certification as “useless”. I think they often do it because they themselves have – consciously or not – enough experience/reputation to resolve the information asymmetry, but fail to see how someone breaking into the field might not be as fortunate.
“Is <insert cert name> a good cert?”
I see this as a variation of the first question. Here, the question is focused on the cert itself, rather than on your intended use for it. As before, the answer follows similar options:
  • Is the cert widely used in the industry as a gating process, or generally respected in something you take part in often? Might be a good cert to have.
  • Does the cert provide a good roadmap of self-learning?  Might be worth pursuing. Here I mention that while I never got my CCIE, I used the blueprints as a reference of topics to brush up on in network security.
  • Finally, for “having the cert just in case”, it is helpful to think about it in terms of “how well does this certification resolve the underlying information asymmetry?” If you’re trying to signal broad understanding of an area, a specialized certification may not be as helpful. The reverse is true, of course: a generalized cert is useless if your signal is meant to be about a specific area. Also, keep in mind the value that the industry/market places on the cert as a signal mechanism. Things change over time…
“Why does HR insist on having <insert cert name> as requirement even though I know WAY more than that?”

HR does this because that certification has been, in their opinion, a useful heuristic to screen candidates. It may not be accurate from your perspective, but HR is making the rational decision that the cost of screening candidates via their certification signal is a good trade-off for the value they are getting. It’s not personal, it’s not stupid, it’s basic economics.

Whether this is a big issue for a candidate depends on how much flexibility they have with the hiring process. If you’re being formally evaluated within a broad pool of possible candidates, you may have little choice but to go for it. If, on the other hand, you have both another way of resolving the asymmetry implied by requiring the cert AND the flexibility in the process (maybe you know the hiring manager and can bypass that requirement), go ahead and try that.

“I’m wondering if I should keep my <insert cert name> or let it lapse”

In this case, reframe it as “do the benefits of choosing to send this signal outweigh my own individual cost?”. The cost may be clearly monetary, or primarily the time needed.

Also, if you’re a more experienced professional, thinking of “can I resolve the information asymmetry in another way?” also helps. Maybe you lapse your professional certification, but you have a portfolio of blog posts, community participation, public code, … that are alternatives for showing what the certification was meant to show. It may be OK to let go of your introductory-level certification in a field where you can show expertise differently…
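That “benefits versus individual signaling cost” reframing is just arithmetic, and a quick back-of-the-envelope makes it concrete. Every number below is an assumption you would replace with your own figures:

```python
# Toy "keep the cert or let it lapse" calculation - all figures are assumptions.
annual_fee = 125         # hypothetical maintenance fee, in dollars
cpe_hours = 40           # hypothetical continuing-education hours per year
hourly_value = 50        # what you value an hour of your own time at

signal_cost = annual_fee + cpe_hours * hourly_value   # cost of keeping the signal

expected_benefit = 3000  # e.g., the salary/contract premium you attribute to it

keep_the_cert = expected_benefit > signal_cost
print(f"signal cost: {signal_cost}, keep it: {keep_the_cert}")
```

If you can resolve the asymmetry another way (portfolio, reputation, public code), that effectively lowers the benefit you attribute to the cert, which is exactly the “let it lapse” case described above.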

“What do I need to do to pass <insert cert name>? Any brain dumps? ;-)”
I wanted to comment on brain dumps. Personally, I think brain dumps are against the spirit of certifications, if not the letter. From an economic perspective, though, consider this: if a certification is easily obtainable by those resorting to brain dumps, expect the following to happen:
  • the value of having that particular cert as a valid signal may diminish.
  • the screening effort will increase, both from the certification provider and from potential employers. We see this happening with more stringent testing requirements and perhaps more obscure questions (in both testing and interviews), all of which raise the cost of the screening itself. Expect that cost increase to manifest itself in more expensive exam fees, or even more stressful hiring processes…
Wrapping up
I think bringing a mindset of “looking at the economics of it” brings a different perspective to the debate about certifications:
  • Understanding certifications as both signal and screen mechanisms.
  • Considering the “transaction costs” and “opportunity costs” of both obtaining the certification OR using it as a screening mechanism.

Hoping this contributes a bit as one considers which certs to embark on, or which certs to list in those job descriptions…


For my own career, I’ve let many certs lapse, not because they were good or bad, but because I evaluated that my personalized “signaling” cost (i.e. keeping the certification) was too expensive given the expected benefit. Others I plan to keep, since either the signaling cost is low enough, or they offer other benefits (tangible or not) that I value…

For the record, I cherish my CISSP designation. It means a lot to me, not so much for the technical knowledge itself (it was over 15 years ago…) or the inherent signal (many have it, and it has many supporters & detractors), but for reminding me of the never-ending quest to bring excellence to the InfoSec profession.

Finally, as a lifelong learner, I like to look at certifications as a rough guide to the common knowledge of a particular area. I may choose to just review the blueprint/requirements and guide my own studies along those lines. In some cases, I may go further and consider acquiring the certification as a personal goal or as a ‘sanity check’ that I do indeed have the minimum knowledge. After all, I’m always aware of the dangers of the Dunning-Kruger effect, though not always able to avoid it…