Trust in Cyber Immune systems

“It takes time to gain trust, but a mere instant to lose it”
Alexander Vinyavsky
Technology Evangelist

It’s funny that this phrase from the 147th episode of an endless soap opera so accurately reflects the essence of an important issue in creating cybersecure systems.

To trust a solution, you need to prove that it deserves to be trusted. This is what software assurance processes are for: static and dynamic code analysis, fuzz testing, formal verification, penetration testing, and so on. There are numerous methods and tools available, ranging from the simple and well-known to complex, rarely used options that require special expertise.

While these methods are mature and well-instrumented, there’s still an important problem: it’s not clear how to intelligently divide code into what needs to be checked “cheaply” and what needs to be checked “expensively.” So to prove the trustworthiness of a system, you have to thoroughly check almost all of its code, which is all but impossible in practice.

This problem can be solved by minimizing the trusted code base. The point here is that there should be as little security-critical code as possible. That means you don’t have to use expensive analysis methods for all the code, just a small part of it. For everything else, basic verification methods will be enough. This can considerably reduce the cost of code analysis.

At the application level, minimizing the trusted code base implies:

– There should be as few trusted components directly impacting the system’s security goals as possible.

– The trusted components themselves must be relatively simple in terms of functionality and have a small attack surface.

At the system level:

– As little code as possible concentrated in the operating system security kernel, including the OS kernel itself and the modules that run in its context.

Minimizing the trusted code base lies at the heart of the Cyber Immune approach. I talk more about this principle in the two-minute video here:
