Security Mindset and Ordinary Paranoia

As a software engineer, I'm aware that everything I create has the potential to become a tool for bad actors to exploit. Security Mindset and Ordinary Paranoia is an engaging background read on the philosophy of designing software that is secure to the core.

This is one of my favourite computer security articles, and one I come back to time and time again. I would recommend reading it to anyone involved in creating software, from the engineers implementing it, to the operations teams keeping it running, right up to product designers and managers. In my opinion, it is possible to think about security as an architectural feature, and to design products that are more secure because of what they are, not simply because security was considered in their implementation.

Here are some choice quotes from the article…


AMBER, a philanthropist interested in a more reliable Internet, and CORAL, a computer security professional, are at a conference hotel together discussing what Coral insists is a difficult and important issue: the difficulty of building “secure” software.

Setting the scene… This article is in the form of a dialogue.


CORAL: There are systematic, nonrandom forces strongly selecting for particular outcomes, causing pieces of the system to go down weird execution paths and occupy unexpected states. If your system literally has no misbehavior modes at all, it doesn’t matter if you have IQ 140 and the enemy has IQ 160—it’s not an arm-wrestling contest. It’s just very much harder to build a system that doesn’t enter weird states when the weird states are being selected-for in a correlated way, rather than happening only by accident. The weirdness-selecting forces can search through parts of the larger state space that you yourself failed to imagine. Beating that does indeed require new skills and a different mode of thinking, what Bruce Schneier called “security mindset”.

I love this paragraph; it sums up the scope of the problem at hand very neatly.
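
To make the "weird states being selected-for" point concrete for myself (this sketch is mine, not the article's, and the paths and names are made up): random or well-meaning inputs only ever sample the states you expected, while an attacker searches for the state you failed to imagine.

```python
import os

BASE_DIR = "/srv/uploads"  # hypothetical upload directory

def read_upload(name: str) -> bytes:
    """Naively 'secure' file read: it blocks the one attack I imagined."""
    if ".." in name:  # the only weird input I thought to check for
        raise ValueError("suspicious filename")
    with open(os.path.join(BASE_DIR, name), "rb") as f:
        return f.read()

# Ordinary inputs never leave the expected state space:
#   read_upload("report.pdf")  -> reads /srv/uploads/report.pdf
#
# An adversary selects for the state I failed to imagine: os.path.join
# silently discards BASE_DIR when the second argument is absolute, so
#   read_upload("/etc/passwd") -> reads /etc/passwd
```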


CORAL: Imagining attacks, including weird or clever attacks, and parrying them with measures you imagine will stop the attack; that is ordinary paranoia.

Ordinary paranoia pits wits against wits.
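
A small contrast that helps me here (my own sketch, not from the article): the ordinary-paranoid move is to imagine an attack and parry it; the deeper move is to restructure things so the whole class of attack has nothing to grab onto.

```python
import sqlite3

def find_user_paranoid(conn: sqlite3.Connection, name: str):
    # Ordinary paranoia: I imagined quote-injection, so I escape quotes
    # and hope that parry also covers the attacks I didn't imagine.
    escaped = name.replace("'", "''")
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{escaped}'"
    ).fetchall()

def find_user_structural(conn: sqlite3.Connection, name: str):
    # Structural alternative: a parameterized query never mixes data
    # into the query language, so there is no injection state to
    # defend against in the first place.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()
```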


AMBER: Well, okay, but if we’re guarding against an AI system discovering cosmic powers in a millisecond, that does seem to me like an unreasonable thing to worry about. I guess that marks me as a merely ordinary paranoid.

CORAL: One of the hallmarks of security professionals is that they spend a lot of time worrying about edge cases that would fail to alarm an ordinary paranoid because the edge case doesn’t sound like something an adversary is likely to do.

Extreme thinking or reasonable concern?


CORAL: People with security mindset sometimes fail to build secure systems. People without security mindset always fail at security if the system is at all complex. What this way of thinking buys you is a chance that your system takes longer than 24 hours to break.


CORAL: This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail…

(Quoting Bruce Schneier.)
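
As a small exercise in that direction (my example, not Schneier's): making a port parser "work" means testing the inputs you expect; making it "fail" means hunting for the inputs it quietly accepts.

```python
def parse_port(s: str) -> int:
    """Parse a TCP port number from user input."""
    value = int(s)
    if not 0 < value < 65536:
        raise ValueError(f"port out of range: {value}")
    return value

# Thinking about how it works: the inputs an engineer naturally tests.
assert parse_port("443") == 443

# Thinking about how it fails: inputs that look like they should be
# rejected but are quietly accepted, because int() strips whitespace,
# allows underscores between digits, and parses non-ASCII decimal digits.
for surprising in [" 443\n", "4_43", "٤٤٣"]:
    print(repr(surprising), "->", parse_port(surprising))
```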


CORAL: You don’t want to assume that parts of the system “succeed” or “fail”—that’s not language that should appear in [your written design]. You want the elements of the story to be strictly factual, not… value-laden, goal-laden?


[(aside):] …the real reason the author is listing out this methodology is that he’s currently trying to do something similar on the problem of aligning Artificial General Intelligence, and he would like to move past “I believe my AGI won’t want to kill anyone”…

🤞


…the author [hopes] that practicing this way of thinking can help lead people into building more solid stories about robust systems, if they already have good ordinary paranoia…

It’s certainly something to try to live up to.
