At this week’s CES, Jen Easterly, Director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), gave an interview about security and the technology ecosystem. Her comments were made in the context of the consumer electronics industry, but they’re relevant to business-oriented software and technology as well. (BTW, I really like the energy and visibility that Director Easterly has brought to CISA, not to mention some really cool fashion!)
I think we’d all agree with the thrust of the interview as it applies to the consumer product sector: “We’ve accepted the fact that cyber safety is my job and your job and the job of my mom and my kid, but we’ve put the burden on consumers, not on the companies who are best equipped to be able to do something about it.”
Even if these companies manage to make their products and websites easier for non-technical consumers to use securely, their back ends are still opaque to us, yet we rely on them to protect our private data. Some companies do this well; some do it exceedingly poorly.
She also touches on information security for businesses. Here’s the relevant quote: “Cyber is a social good…It’s about societal resilience. And my last message is that we need to fundamentally change the relationship between government and industry.”
This is an interesting comment. I do think we need a better way for software and technology companies to have their offerings validated, but I don’t think a government mandate or regulation is necessarily the right way to do it. There’s an opportunity to strike the right balance here with a voluntary set of measurement or validation guidelines. At the same time, we need to make sure we don’t put too onerous a burden on smaller firms that are innovating and delivering real advances but may not have the resources to obtain a heavyweight industry certification. For example, FedRAMP is a useful program that gives government buyers a level of confidence in the security and reliability of cloud-based services. It’s effective, but it is expensive and time-consuming for an organization to achieve, with estimates of $250K–$750K, according to StackArmor.
On the other hand, it is important to prevent insecure software and services from being brought to market. Shifting back to the consumer analogy, I think we’d all agree that even though we want to encourage innovation in (for example) kitchen and cooking technology, we need to prevent these devices from electrocuting their users, even if that means a barrier to entry requiring government or private-sector certification, which imposes a time and cost burden on the manufacturer. In some cases these barriers can be significant: bringing a new automobile to market justifiably involves significant time and money spent on safety testing. I think we’re all good with this level of regulation.
Taking that as an inarguable starting point, we do have to recognize that it’s an imperfect analogy for software. Consumer products have much more constrained use cases, so the “surface area” for testing is much smaller. And yet we can’t give up just because security testing or certification is hard. At the same time, we want to make sure we’re not getting in the way of innovation or efficiency. We wouldn’t want, for example, to be unable to ship an update to a SaaS service because we’re waiting on an external certifier to sign off.
I don’t have an answer here, but I can imagine that we’ll find ways to objectively measure and test the security and resiliency of a given product and architecture, and that we’ll strike the right balance between innovation, speed, reliability, and security while avoiding negligent behavior on the part of technology providers. And I realize how complex this is – trust me, I work for a security software vendor.
Your thoughts? Is the idea of a “UL Labs for software” reasonable from a security perspective? Can we rely on for-profit firms to self-regulate without any external and objective oversight?