TheosOS is down

I ran into serious problems while building the theory behind TheosOS, which led me to give it up. Here are some explanations of what I was trying to achieve, and why I think the goal can't be reached.

Because I think the theory itself was starting to be a nice one, I will explain it here anyway, in the hope that someone may find it useful in some other domain, or, in the worst case, that it may serve as a distracting read :-)

Dynamic Process Firewall based on Potential Knowledge Flow

So my idea was to create a "theoretically secure" operating system, one able to track potential knowledge transfers and thereby, by blocking some carefully chosen communications, to guarantee, whenever the user requests it, that knowledge doesn't flow from one given point to another.

System's view

In my design, I decided that the only thing the system was allowed to know about a process is what input and what output the process performs, i.e. the processing itself was to be hidden from the system.

The Explicit Input and Output of a process is any actual transfer of data between two processes (through a shared pipe, shared memory, a shared semaphore, etc.) or between one process and some physical device (a file on a hard disk, a network connection, the screen, the keyboard, etc.). I will explain later why "explicit" is needed, and what an implicit transfer is.

Process Behavioural Equivalence

I defined above what the system knows about processes. We say that two process settings (a setting being a run of the "life" of the system, i.e. the sequence of actions and states of all processes) are equivalent if the ordered sequence of data transfers is the same in the two settings. (The actual data transferred is irrelevant, and so are the times of the transfers, as long as the order matches.)

I will say that, in some setting A, something can potentially happen if there is some setting B, equivalent to A, in which that thing actually happens.
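
To make these definitions concrete, here is a little sketch in Python (all the names are mine, invented for illustration; a real system obviously can't enumerate all possible settings):

    # A "setting" is reduced, from the system's point of view, to its
    # trace: the ordered sequence of (source, target) endpoint pairs,
    # one per data transfer. Payloads and timestamps are dropped.

    def trace(setting):
        return [(event.source, event.target) for event in setting.events]

    def equivalent(setting_a, setting_b):
        # Equivalent iff the ordered transfer sequences match; the data
        # transferred and the times of the transfers are irrelevant.
        return trace(setting_a) == trace(setting_b)

    def potentially(happened, setting, possible_settings):
        # Something can potentially happen in `setting` if it actually
        # happens in some equivalent setting.
        return any(happened(other) for other in possible_settings
                   if equivalent(setting, other))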

Knowledge

Knowledge, in this theory, is some sort of "meta-information", and is defined by a set of "atomic knowledges". In an operating system there could exist two sorts of knowledge.
One is the fact that a particular user injected some information into the system (this sort of knowledge is important to make sure that a user doesn't overwrite some precious information).
The other sort of knowledge is more natural: it states that some secret information (not to be read by some particular user) is present in some process.

We say that a potential knowledge is present in a process if the knowledge is potentially present in that process. Okay, that sentence might look stupid, but it just uses the above definition of "potentially", which wouldn't apply directly to "potential knowledge" ;-)

Soundness Theorem

I will explain the theorem I was aiming at.

The system maintains, with the help of the user, the set of base knowledge present in the system ("base knowledge" meaning that this is where knowledge is created), and some constraints that have to be kept true during the life of the system.
As said above, this knowledge either carries the fact that a user injected information into the system, or the fact that some particular information must not be read by some user.
The constraints express that some knowledge must not reach some point in the system. For instance, knowledge stating that user X may not read some information must not reach an interface on which X is listening; or some particular file must not be touched by knowledge carrying information given by user X (read: X may not write to that file).
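
As a concrete (entirely invented) example of what this user-maintained state could look like, in the same Python sketch style:

    # Base knowledge: the points where atomic knowledges are created.
    base_knowledge = {
        "/home/alice/diary": {"secret_from_bob"},  # not to be read by Bob
        "keyboard_bob":      {"written_by_bob"},   # everything Bob types
    }

    # Constraints: pairs (atom, endpoint) that must never be realised.
    constraints = [
        ("secret_from_bob", "terminal_bob"),   # Bob may not read the diary
        ("written_by_bob", "/etc/passwd"),     # Bob may not write that file
    ]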

The theorem basically states that if no potential knowledge has transited from a point P to a point Q after some time t, then any information I reaching P after time t has no influence on the behaviour of Q. "No influence" means that the probability distribution of the behaviour of Q stays constant when I changes, or, equivalently, that the mutual information between I and the behaviour of Q, in the information-theoretic sense, is zero.
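
Stated in information-theoretic notation (my formulation of the intended claim, not a proof), writing B_Q for the behaviour of Q after time t:

    \forall i, b : \Pr[B_Q = b \mid I = i] = \Pr[B_Q = b]
    \quad \text{or, equivalently,} \quad I(I ; B_Q) = 0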

A Dynamic Firewall based on Potential Knowledge

With a working theory of potential knowledge in hand, it would be easy (in theory, at least :-) to design a process firewall, i.e. to let the system decide whether to allow a data transfer to occur: if the transfer would result in the violation of a knowledge constraint, it is forbidden; otherwise it is allowed.
I then added concepts of trusted processes, permanent connections, trusted permanent connections, etc. These are only refinements of the above theory: they prevent some sorts of denial of service, allow efficient modelling of shared memory (which amounts to sharing potential knowledge), and make sure the system doesn't get too paranoid.
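
In the same sketch style, the decision rule would look like this, assuming some function knowledge_after that predicts the potential knowledge resulting from a transfer (a naive candidate follows in the next section):

    def allowed(transfer, knowledge, constraints):
        # Simulate the transfer and allow it only if no (atom, endpoint)
        # constraint is violated in the predicted outcome.
        predicted = knowledge_after(transfer, knowledge)
        return all(atom not in predicted[endpoint]
                   for (atom, endpoint) in constraints)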

Knowledge Dynamics: The Implicit Transfer Problem

An easy and naive definition of potential knowledge dynamics would be as follows:

Let K1 be the potential knowledge in process P and K2 the potential knowledge in process Q. After an explicit data transfer from P to Q, the potential knowledge in process P will remain unchanged and the potential knowledge in process Q will be equal to K1 U K2.
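
In sketch form, with knowledge mapping each process to its set of atomic knowledges:

    import copy

    def knowledge_after(transfer, knowledge):
        # Naive dynamics: after an explicit transfer from P to Q,
        # P keeps K1 and Q ends up with K1 U K2.
        P, Q = transfer
        result = copy.deepcopy(knowledge)
        result[Q] = knowledge[P] | knowledge[Q]
        return result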

But this is not enough :-(

Implicit Transfer: Sending information without transmitting data

Consider the following protocol, which would be quite easy to implement, and which transfers a secret number from one process to another without the above dynamics transferring any potential knowledge:

We have four processes: Clock, Left, Middle and Right. Suppose that Left has read the secret number. The system will then attach the corresponding knowledge to it. The other three processes have empty knowledge.

The behaviour of Clock is to send a message with the number "zero" to the other three processes, to wait a bit (the waiting can be achieved with busy waiting, i.e. no access to the system clock is required; busy waiting is seen by the system as internal processing, so it counts as no action), then to send the number "one" to all processes, wait, send "two", and so on, ad infinitum. Because Clock has no knowledge, these actions, according to the system, don't change the knowledge of the other processes.

The behaviour of Left is to wait until it has received from Clock a number equal to the secret number, and then immediately notify Middle. Note that the system will (correctly) expand the potential knowledge of Middle once this happens, but only once it happens. We see the beginning of the problem here: if Middle has received "ten" from Clock, it knows that the number can't be smaller than ten, even though its potential knowledge is empty.

The behaviour of Middle takes advantage of this knowledge, of which the system is unaware, to send it to Right: every time Middle receives a number from Clock that is not followed by a reaction from Left, it (Middle) sends a message to Right saying the secret is not that number. As soon as Middle receives a message from Left, it stops sending messages to Right. Recall that Middle had no potential knowledge before getting a message from Left, so the messages it sends to Right don't extend the latter's potential knowledge. When Middle does receive a message from Left, its potential knowledge increases, but because it sends no more messages to Right, the latter's potential knowledge remains empty.

Finally, the behaviour of Right is to wait for the messages from Middle to stop, to deduce from that the value of the secret number, and to act depending on that value.
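
To see the failure concretely, here is a small simulation of the protocol (invented names again; the "system" applies exactly the naive union dynamics above):

    SECRET = 7                         # initially known only to Left
    knowledge = {p: set() for p in ("Clock", "Left", "Middle", "Right")}
    knowledge["Left"] = {"secret"}     # Left has read the secret number

    def send(src, dst):
        # Explicit transfer: the system applies the naive union dynamics.
        knowledge[dst] |= knowledge[src]

    last_denial = -1                   # last "it is not that number" seen
    tick = 0
    while True:
        for p in ("Left", "Middle", "Right"):
            send("Clock", p)           # Clock broadcasts the current tick
        if tick == SECRET:
            send("Left", "Middle")     # Left reacts; Middle's knowledge grows,
            break                      # but Middle now stays silent
        send("Middle", "Right")        # "not this number" -- Middle knows
        last_denial = tick             # nothing yet, so nothing is tracked
        tick += 1

    deduced = last_denial + 1
    print(deduced == SECRET)           # True: Right deduced the secret,
    print(knowledge["Right"])          # set() -- yet its knowledge is empty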

So we see that the above knowledge dynamics doesn't match the soundness theorem: even though Right doesn't have the potential knowledge associated with the secret number, its future behaviour depends on that number (that's a complicated way of saying that it "knows" that number). We will call this type of information transfer an implicit transfer.

Taking Implicit Transfers into Account: Information Radiation

Examining all the cases in which this sort of implicit transfer can occur, I realised that for each explicit transfer from a process P to a process Q, there is a "virtual" information transfer going from Q to P (and therefore there is also a virtual knowledge transfer). In the above case, the messages sent from Clock make Left's knowledge "virtually" leak to Clock. Then, the next time Clock sends a message to one of the other two processes, this knowledge will "potentially" reach those two processes.

This gets a little clearer if you change the above protocol so that Middle and Clock are the same process. The fact that Middle "knows" that the number can't be less than ten after counting to ten is matched by the "virtual" data transfer from Left to it.

This means that data transfer is "virtually" symmetric!
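
In sketch form, the corrected dynamics makes the merge symmetric:

    def knowledge_after(transfer, knowledge):
        # An explicit transfer from P to Q also carries a "virtual"
        # transfer from Q to P, so both end up with K1 U K2.
        P, Q = transfer
        result = dict(knowledge)
        result[P] = result[Q] = knowledge[P] | knowledge[Q]
        return result

Re-running the simulation above with a symmetric send, Clock acquires the secret knowledge the moment it sends a tick to Left, and its following sends spread that knowledge to Middle and Right: Right's potential knowledge is no longer empty when the messages stop.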

Making Things Real Bad: CPU Knowledge Radiation

Consider the following example, where Clock isn't really a process but is the stream of "impulses" given by the CPU clock to the currently running process (these "impulses" can be "felt" by the processes, since they make them execute instructions). It is a slight variation of the strict counting example because, depending on the process scheduler, the clock will randomly send messages carrying no information to the three processes. But, statistically, each process will receive the same number of impulses in the same time span (at least it should, under any "normal" scheduler; we assume all three processes have the same priority, etc.).

This can be implemented by having all three processes busy-wait, counting their loops and incrementing a counter every one thousand loops or so. All three processes will have their counters at approximately the same value, provided they are started simultaneously, or that they "explicitly" synchronize before any of them accesses the secret number. The rest of the algorithm is the same.
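
A sketch of such a "felt" clock (the constant is arbitrary; any loop will do):

    def felt_ticks(body):
        # No system call, no explicit I/O: to the system this is pure
        # internal processing. Yet equally scheduled processes counting
        # this way advance their tick counters at roughly the same rate.
        tick = 0
        while True:
            for _ in range(1000):
                pass               # busy waiting
            tick += 1
            body(tick)             # run one step of the protocol above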

So we have to consider the impulses from the CPU as "explicit messages" to all running processes. By symmetry, all running processes send "virtual messages" back to the CPU, therefore "virtually" transmitting their knowledge to the CPU, which will then transfer its newly acquired knowledge to all running processes!

What does this mean?
It means that if you have two processes running somewhere, between which there exists a possibility of exchanging information (the possibility exists but isn't necessarily used), then their knowledge will be "virtually" merged within two CPU ticks! Who said anything about information security??

Why This is Not the End of the World

Ok, don't panic, there are basically three things that can ensure information security:

Trusted and Well-Written Programs
Data transfer, no matter whether it is explicit, implicit, virtual or whatever, requires the cooperation of both the source and the target processes.
If a program has no bugs, you fully trust its author, and a bad guy can't take control of it (OK, I know there is no such thing in the real world, but I am doing theory anyway :-), then you know it is not going to run elaborate algorithms for transmitting your information, so you can give that program access to your information. That's the way things work today, or at least try to work.
"Internal Processing"
I assumed the system knows nothing about the behaviour of the programs, i.e. that as long as they don't do input or output, the system doesn't care what they do. Maybe a solution for a theoretically secure system would be to have the system analyse the programs before running them. Of course, this would probably require the programs to be written in a simpler language than assembly (which is how the system sees them), and we'd still have to work with "potential actions", because the system can't predict everything a program could do, if we want programs to be able to do anything interesting.
Static Firewalls
I have not studied SELinux and similar systems, but I guess this must be the solution they use. Instead of having the system dynamically decide which data transfers to allow and which to refuse, the user gives a set of rules that apply immediately to the programs. If there is no way an explicit data transfer can take place between two processes, then there can't be a virtual data transfer either (as said above, we can only have a virtual data transfer where an explicit data transfer could have taken place but didn't).

Maxime Gamboni - Tue Aug 31 16:55:42 IST 2004