Secure design principles


Secure design, new to the OWASP Top 10, is in the spotlight again. Let’s take a look at some long-standing principles.

When talking about secure design principles, most security experts immediately mention the paper “The Protection of Information in Computer Systems”. The paper – written by Jerome H. Saltzer and Michael D. Schroeder in 1975 – explores the mechanisms of protecting computer-stored information from unauthorized use or modification. Yes, you read it correctly, that was almost half a century ago. Yet, the secure design principles laid down there are still valid!

Saltzer and Schroeder described eight main principles and two additional ones which they thought “unfortunately, apply only imperfectly to computer systems”. But, as time has shown, these two are nowadays just as valid in computer security as the first eight.

The main secure design principles are the following:

a) Economy of mechanism: Keep the design as simple and small as possible.

b) Fail-safe defaults: Base access decisions on permission rather than exclusion.

c) Complete mediation: Every access to every object must be checked for authority (there and then).

d) Open design: The design (and the code) should not be considered secret. The secret is always data, like a password or a cryptographic key.

e) Separation of privilege: It’s always safer if it takes two parties to agree on a decision than if one can do it alone.

f) Least privilege: Operate with the minimal set of powers needed to get the job done.

g) Least common mechanism: Minimize subsystems shared between or relied upon by mutually distrusting users.

h) Psychological acceptability: Design security systems for ease of use for humans.

The two additional secure design principles are:

i) Work factor: Compare the cost of circumventing the mechanism with the resources of a potential attacker.

j) Compromise recording: Record that a compromise of information has occurred.

The eight main principles

Let us first discuss the main secure design principles in detail.

Economy of mechanism

“Keep the design as simple and small as possible. … design and implementation errors that result in unwanted access paths will not be noticed during normal use (since normal use usually does not include attempts to exercise improper access paths). As a result, techniques such as line-by-line inspection of software and physical examination of hardware that implements protection mechanisms are necessary. For such techniques to be successful, a small and simple design is essential.”

The smaller and simpler your code is, the smaller the attack surface: there are fewer opportunities for an attacker to exploit a bug. It’s also easier to verify the correctness of code that is small and simple.

But keep in mind that “small” does not automatically imply “simple”. For example, look at the following two examples in C:

if (a = b) // … first example: assignment inside the condition

a = b;
if (a != 0) // … second example: assignment and test separated

When looking at the first example, it’s possible that the developer meant “==” instead of “=”, so somebody might just “fix” the issue – and since most compilers emit a warning for an assignment used as a condition, such a “fix” is even more likely. In the second example, the developer’s intention is clear.

Or let’s look at these two pieces of equivalent code:

f() && g(); // first example

if (f())    // second example
  g();

Here the first example relies on the short-circuit evaluation of &&: the function g() on the right side is only executed when f() evaluates to a non-zero value. The second example makes this intent explicit: g() should only be executed if the result of f() is non-zero.

Even though both code examples achieve the same functionality, and in the end compile to the same machine code, these are clear cases where shorter is not better. Making the intent unambiguous is important not only for secure design, but also for code maintenance and sometimes even code stability! Of course, most of us could write a quick hack that actually works but that nobody else can understand – and that “nobody” may well be you yourself a year from now. Keep your code clean and adhere to coding standards and best practices – for example, here is one for C/C++.

Fail-safe defaults

“Base access decisions on permission rather than exclusion. … A design or implementation mistake in a mechanism that gives explicit permission tends to fail by refusing permission, a safe situation, since it will be quickly detected. On the other hand, a design or implementation mistake in a mechanism that explicitly excludes access tends to fail by allowing access, a failure which may go unnoticed in normal use…”

When authorizing, start by denying all access. Then allow only what has been explicitly permitted. This will most likely lead to false negatives: somebody not gaining access to information they need. But such cases will be reported quickly. If it were the other way round – somebody being able to access information they are not authorized to see – it would hardly ever be reported. After all, it is very unlikely for people to queue up at the admin’s door just to report that they have permission to access some piece of information they don’t need and therefore shouldn’t have.

In other words: follow the secure design approach of using allowlists over denylists. An allowlist (the recently accepted terminology for what was formerly called a whitelist) is a set of users authorized to enter or inputs allowed to be processed; everybody and everything else is forbidden. A denylist (similarly, formerly known as a blacklist) is exactly the opposite: everything that is not listed is allowed, and only the listed items are blocked. Of course, strictly speaking, these are not necessarily implemented as lists, but rather as a set of rules, such as “emails from mycompany.com are let through, while all other emails are first scanned for spam and malware”.
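To make this concrete, here is a minimal sketch in C of a default-deny check. The user names and the is_allowed() helper are made up for illustration; the point is that anything not explicitly on the allowlist is rejected.

#include <stdbool.h>
#include <string.h>

/* Hypothetical allowlist: only these users may access the resource. */
static const char *allowlist[] = { "alice", "bob" };

bool is_allowed(const char *user)
{
    /* Fail-safe default: start from "deny everything"... */
    for (size_t i = 0; i < sizeof(allowlist) / sizeof(allowlist[0]); i++) {
        if (strcmp(user, allowlist[i]) == 0)
            return true;    /* ...and grant only on an explicit match. */
    }
    return false;           /* Anything not explicitly permitted is denied. */
}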

Complete mediation

“Every access to every object must be checked for authority. … It forces a system-wide view of access control, which in addition to normal operation includes initialization, recovery, shutdown, and maintenance. It implies that a foolproof method of identifying the source of every request must be devised. … If a change in authority occurs, such remembered results must be systematically updated.”

This secure design principle promotes the concept of defense in depth, in which multiple layers of security complement each other to increase the overall security. Every access to every object must be checked for authority. No relying on previous checks, no assuming the checks are still valid. Check it each time! There and then! Period! If you fail to apply this principle, attackers may target you with, for example, a TOCTTOU (Time Of Check To Time Of Use) attack.

To explain it, just imagine the following situation. You have a bank account with two ATM cards, and go to an ATM to withdraw all your money. You enter the amount, and the ATM checks your funds. There is enough money in your account, but it asks you whether you are sure you want to withdraw all your money. While the ATM waits patiently for your reply, you use a neighboring ATM with the other card and withdraw all the money. After that, you confirm on the first ATM that you really want all that money. If the ATM relied on the previous check of your funds, you could withdraw your money twice. Fortunately, the developers of the ATM software know about complete mediation, so the ATM always checks the funds again before dispensing the banknotes; this was just an easily comprehensible example.
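In code, failing at complete mediation typically shows up as a check-then-use race on the file system. Here is a simplified sketch for a POSIX system, contrasting the vulnerable pattern with a safer one:

#include <fcntl.h>
#include <unistd.h>

/* Vulnerable pattern: the check and the use are separate steps, and an
   attacker may swap the file (e.g. for a symlink) in between. */
void read_config_vulnerable(const char *path)
{
    if (access(path, R_OK) == 0) {      /* time of check */
        int fd = open(path, O_RDONLY);  /* time of use   */
        if (fd >= 0) {
            /* ... read ... */
            close(fd);
        }
    }
}

/* Better: let the operation itself perform the check, so the decision
   and the use refer to the same object - there and then. */
void read_config_mediated(const char *path)
{
    int fd = open(path, O_RDONLY | O_NOFOLLOW);
    if (fd < 0)
        return;
    /* ... read ... */
    close(fd);
}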

Open design

“The design should not be secret. The mechanisms should not depend on the ignorance of potential attackers, but rather on the possession of specific, more easily protected, keys or passwords. … it is simply not realistic to attempt to maintain secrecy for any system which receives wide distribution.”

The open design principle has its roots in Kerckhoffs’ principle, which states that the security of a cryptographic system must depend only on the secrecy of its keys. Everything else, including the algorithm itself, should be considered public knowledge.

The opposite of the open design principle is security by obscurity: a security solution that relies on nobody else knowing what you are doing. This is a very bad idea. Attackers can obtain design documents or source code, or simply reverse-engineer the product to learn everything about its implementation. In addition, trying to keep the implementation secret makes security audits and reviews very hard, if not impossible. Secure design should never rely on the secrecy of the implementation!

Separation of privilege

“Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key….”

It’s a fundamental concept of secure design to have several layers of protection, not just one. The more checks you have, the harder it is to attack your system. But make sure that these checks use different mechanisms. For example, when implementing multi-factor authentication, combine knowledge-based authentication (something you know) with either possession-based authentication (something you have) or biometrics (something you are). An example of this can be a token on your phone (possession) combined with your fingerprint (biometrics) or a PIN (knowledge). To make two-factor authentication even more secure, you can throw in location information as an additional factor. For example, if your credit card was used in London and then 5 minutes later a usage in Moscow is reported, your bank’s fraud detection system will most likely stop the second transaction. But note that location data cannot replace any of the primary factors!
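As a rough sketch of the idea – the verify_* helpers below are hypothetical placeholders for a password check and a one-time-code check – access is granted only when two independent factors both succeed:

#include <stdbool.h>

/* Hypothetical helpers: a knowledge factor and a possession factor. */
bool verify_password(const char *user, const char *password);
bool verify_totp_code(const char *user, const char *code);

bool authenticate(const char *user, const char *password, const char *code)
{
    /* Separation of privilege: two independent "keys" are required;
       compromising either one alone is not enough. */
    return verify_password(user, password) && verify_totp_code(user, code);
}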

Least privilege

“Every program and every user of the system should operate using the least set of privileges necessary to complete the job. … The military security rule of need-to-know is an example of this principle.”

Figure out what capabilities a program requires to run and grant exactly those, and nothing more. This will significantly limit the consequences of a successful attack.

For example, an image viewer program shouldn’t need network access, nor should a bus timetable app need access to your phone call history or contacts. Of course, this is not easy; the best way to achieve this secure design goal is to grant no rights at all by default and add privileges one by one as needed. Just think about the military’s “need to know” rule – it is the same concept.
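On POSIX systems, a common way to apply this is to do any privileged setup first and then permanently drop privileges before the regular work starts. A minimal sketch, with placeholder user and group IDs:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Privileged setup would happen here, e.g. binding to port 80. */

    /* Then drop to an unprivileged group and user (placeholder IDs);
       the group must be dropped before the user. */
    if (setgid(1000) != 0 || setuid(1000) != 0) {
        perror("dropping privileges failed");
        return EXIT_FAILURE;
    }

    /* From here on, the process holds only the privileges it needs. */
    return EXIT_SUCCESS;
}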

Least common mechanism

“Minimize the amount of mechanism common to more than one user and depended on by all users. Every shared mechanism (especially one involving shared variables) represents a potential information path between users and must be designed with great care to be sure it does not unintentionally compromise security. …”

Any dependence between components means that the consequences of a successful attack in one component may spread through the system like dominoes falling over. This is something that secure design wants to minimize; keep the dominoes apart!

Be careful with any shared code, since an original assumption may no longer be valid once the module starts interacting with a different environment. Take, for example, the Ariane 5 rocket disaster. Parts of the code developed for the Ariane 4 were reused, but the new rocket flew a different profile with a much higher horizontal velocity, producing values larger than the reused code had ever had to handle. A 64-bit floating-point value was converted, unchecked, into a 16-bit signed integer; the resulting overflow cost around $370m as the rocket blew up in a huge fireball. The same applies to shared data: it creates an opportunity for one process to influence another. For example, if two processes write and read temp files in a shared temporary directory and one of them is compromised, it can compromise the other through those temp files.
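The original Ariane software was written in Ada, but this class of bug is easy to illustrate in any language. Here is a simplified C sketch of an unchecked narrowing conversion (the value is made up):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Value produced by the newer environment, wider than before. */
    int64_t horizontal_velocity = 40000;

    /* Reused code still assumes a narrow type; the unchecked conversion
       silently overflows, since INT16_MAX is only 32767. */
    int16_t legacy_value = (int16_t)horizontal_velocity;

    printf("%d\n", legacy_value);   /* No longer 40000. */
    return 0;
}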

Psychological acceptability

“It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly. Also, to the extent that the user’s mental image of his protection goals matches the mechanisms he must use, mistakes will be minimized. If he must translate his image of his protection needs into a radically different specification language, he will make errors.”

In secure design it is important to keep in mind that your users are human beings. Simply put, if you push security too hard, it will break at some point. If the authentication process is counter-intuitive, your users will hate it. And “hating” here means that they will try to break it, avoid it, go around it or even stop using your product entirely. Unfortunately, security and usability are mostly at odds with each other: when you increase one, the other unavoidably decreases. You must find a working compromise where the security is there, but the product is still usable. And in most cases that is a hard task.


The two additional principles

Work factor

“Compare the cost of circumventing the mechanism with the resources of a potential attacker. The cost of circumventing, commonly known as the “work factor”, in some cases can be easily calculated. … The trouble with the work factor principle is that many computer protection mechanisms are not susceptible to direct work factor calculation, since defeating them by systematic attack may be logically impossible. …”

This secure design principle acknowledges that we can’t always estimate risk accurately, because we can’t always calculate how much work an attacker needs to invest to break a mechanism. There should be a balance between cost and adequate security: always weigh the cost of a security measure against the potential loss and against the attacker’s expected gain and costs. Remember, when securing your car, it is generally enough if it is harder to steal than your neighbor’s car. Of course, if your car is somehow special to the thief, you may need much better than average security. And it goes without saying that you must consider how much your property is worth to the criminals, not how much it is worth to you.
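When a direct calculation is possible, it often boils down to a simple keyspace estimate. A back-of-the-envelope sketch, assuming an attacker who can try ten billion passwords per second (an illustrative figure, not a measurement):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double guesses_per_second = 1e10;   /* assumed attacker capability */

    double weak   = pow(26, 8);    /* 8 lowercase letters                */
    double strong = pow(62, 12);   /* 12 letters (both cases) and digits */

    printf("8 lowercase letters:     ~%.0f seconds to exhaust\n",
           weak / guesses_per_second);
    printf("12 mixed-case + digits:  ~%.0f years to exhaust\n",
           strong / guesses_per_second / 86400 / 365);
    return 0;
}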

Compromise recording

“It is sometimes suggested that mechanisms that reliably record that a compromise of information has occurred can be used in place of more elaborate mechanisms that completely prevent loss. …”

This secure design principle stresses the importance of logging and evidence collection. An attack is much more dangerous if it goes unnoticed; detecting it as soon as possible minimizes the damage and is a critical aspect of incident response.
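On POSIX systems, even a single syslog call in the right place goes a long way. A minimal sketch of recording a failed login attempt (the program name and message format are just examples):

#include <syslog.h>

void record_failed_login(const char *user, const char *source_ip)
{
    /* Compromise recording: make sure the suspicious event leaves a
       trace that can be correlated later during incident response. */
    openlog("myservice", LOG_PID, LOG_AUTH);
    syslog(LOG_WARNING, "failed login for user '%s' from %s", user, source_ip);
    closelog();
}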

Conclusion

To quote Saltzer and Schroeder one last time: “As is apparent, these principles do not represent absolute rules–they serve best as warnings. If some part of a design violates a principle, the violation is a symptom of potential trouble, and the design should be carefully reviewed to be sure that the trouble has been accounted for or is unimportant.”

And remember: even the best-designed system can be vulnerable if it contains a single exploitable bug introduced during implementation. Secure design and secure implementation should go hand in hand. Security is a holistic discipline, after all!

Generally speaking, about half of exploitable weaknesses are introduced during design and half during implementation. Attackers, however, don’t have a preference for one type of weakness over the other; they just want to break the security of the system any way they can.

To give design aspects more emphasis, the latest OWASP Top 10 list now includes Insecure Design as its fourth item, and we couldn’t agree more. Aligned with this, our courses discuss all of the above secure design principles in more detail, with concrete examples of how ignoring them can lead to exploitable weaknesses.

Check out all the courses in our catalog and pick the one most appropriate for your development group!