
An Axiomatic Approach to Information
Flow in Programs
GREGORY R. ANDREWS
University of Arizona
and
RICHARD P. REITMAN
Syracuse University
A new approach to information flow in sequential and parallel programs is presented. Flow proof
rules that capture the information flow semantics of a variety of statements are given and used to
construct program flow proofs. The method is illustrated by examples. The applications of flow proofs
to certifying information flow policies and to solving the confinement problem are considered. It is
also shown that flow rules and correctness rules can be combined to form an even more powerful
proof system.
Key Words and Phrases: information flow, information security, security certification, parallel programs, axiomatic logic, proof rules
CR Categories: 4.32, 4.35, 5.21, 5.24
1. INTRODUCTION
A computer system is secure if the information it contains can only be manipulated in ways authorized by its security policies. There are two types of security policies: access policies and information flow policies. An access policy specifies the rights that subjects, such as users, have to access objects that contain information, such as files. An information flow policy specifies the classes of information that can be contained in objects and the allowed relations between object classes. For example, an access policy might specify that user 1 can read from file 1 and write to file 2; a flow policy might specify that the information in file 1 be at most confidential and always less than the class of information in file 2. In order to validate the security of a system, one must verify both that system programs accurately implement the given policies and that the underlying hardware and software protection mechanisms are correct. This paper focuses on the
Permission to copy without fee all or part of this material is granted provided that the copies are not
made or distributed for direct commercial advantage, the ACM copyright notice and the title of the
publication and its date appear, and notice is given that copying is by permission of the Association
for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific
permission.
This research was supported in part by NSF Grants MCS 77-07554 and MCS 79-05003.
A preliminary report on a portion of this research was presented at the 6th Annual ACM Symposium
on Principles of Programming Languages, San Antonio, Tex., Jan. 1979.
Authors' addresses: G.R. Andrews, Department of Computer Science, University of Arizona, Tucson,
AZ 85721; R.P. Reitman, School of Computer and Information Science, 313 Link Hall, Syracuse
University, Syracuse, NY 13210.
© 1980 ACM 0164-0925/80/0100-0056 $00.75
ACM Transactions on Programming Languages and Systems, Vol. 2, No. 1, January 1980, Pages 56-76.
problem of validating flow policies. The interested reader is referred to [14, 17]
for discussions of validating access policies and to [15, 19] for discussions of
validating protection kernels.
We view a system as a large, parallel program in which variables are the objects
that contain information, statements define actions, and processes are the subjects
that take actions. Given a program for a specific system, a flow policy specifies
acceptable classifications of the variables in the program. A flow policy is
validated by first solving the information flow problem, which consists of determining the ways in which program execution can cause the information in each
variable to depend on the input values and the information in other variables,
and then by certifying that no execution path can cause information flows that
result in variable classifications that violate the policy.
The information flow problem has been examined previously by several researchers ([3] contains a survey). Some have advocated monitoring program
execution at run time [6, 7, 15], while others have defined the flow semantics of
program statements in order to validate programs prior to execution [2-4, 10, 13,
16]. Four papers in particular have made important contributions: Denning [3]
presents the basic model of information flow and characterizes the various
approaches; Denning and Denning [4] define flow semantics for sequential
language statements and develop a compile-time certification procedure (which
assumes that variable classifications never change); Jones and Lipton [10] analyze
flows in a flowchart language and show how to add checks to the program to
validate a given policy; and Cohen [2] examines the information flow problem
from an information-theoretic point of view. The above research has increased
both our understanding of information flow and our tools for addressing the
problem. It has, however, only dealt with information flow in sequential programs.
Information flow is also of concern in parallel programs, such as operating or
database systems.
This paper presents a new approach to information flow that utilizes a program
flow proof constructed by applying flow axioms and rules of inference. There are
four major benefits of this approach. First, it is powerful enough to certify the
flows in parallel as well as in sequential programs; in fact, flow proof rules for all
of Concurrent Pascal [1] have been developed [21]. Second, once a flow proof for
a program has been constructed, the proof can be used to validate a variety of
flow policies. Third, a flow proof is valid even if a program may not terminate (a
problem also considered in [10]); this is important since many processes in
operating and database systems are not intended to terminate. Fourth, flow
proofs combine in a straightforward way with correctness proofs; the result is a
proof system more powerful than one that only considers the classifications and
not the values of variables. Subsequently, each of these points will be explained
in detail.
The paper is organized as follows. Section 2 discusses basic concepts about
information flow in programs and presents the concepts and notation used in flow
proofs. Section 3 presents proof rules for sequential program statements, in
particular assignment, alternation (if), iteration (while), and procedures. Section
4 presents proof rules for parallel execution (cobegin) and synchronization
(semaphores). Section 5 discusses the use of flow proofs for certifying flow policies
and solving the confinement problem. Section 6 discusses the relation between
flow proofs and correctness proofs and shows how they can be combined. Finally, Section 7 summarizes the approach.
2. BASIC CONCEPTS
With respect to information flow, a program has three components:
(1) variables, which contain information;
(2) an information state, which consists of the current classification (i.e., degree of sensitivity) of each variable;
(3) statements, which modify variables (and hence alter the state) or control the order of execution.
This section discusses information classification schemes, describes the ways in which statements alter variable classifications, and presents the concept of a flow proof.
2.1 Classification of Information
To characterize the sensitivity of information, each variable has a classification, called its "class" for short. Following the model in [3, 4], we assume that the set of security classes is finite, is partially ordered (≤), and has a least upper bound operator (⊕). The notation _x denotes the class of variable x. The ordering ≤ indicates the relationship between classes. For example, _x ≤ _y means that the class of variable y is at least as great as that of variable x. The least upper bound operator ⊕ is the means by which classes are combined. For example, if E is the expression x * y, then the class of E, denoted _E, is by definition _x ⊕ _y since E contains information from both x and y. We will use low to represent the lowest class and high to represent the highest.
Two specific classification schemes are used in practice. The first assumes that classes are linearly ordered from low to high; it corresponds to the military classification scheme where low = unclassified and high = top secret. The second assumes that a distinct class is associated with each variable, that constants have class low, and that the other classes consist of the powerset of the variable classes and low. In this scheme, ⊕ is set union, ≤ is set inclusion, and high is the union of all the classes.
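The second scheme is concrete enough to sketch directly. The following is a minimal model, assuming (purely as an illustration, not the paper's notation) that a class is represented as a Python frozenset of variable names:

```python
# Sketch of the powerset classification scheme: a class is a frozenset of
# variable names, (+) is set union, <= is set inclusion, low is the empty set.
from functools import reduce

low = frozenset()                       # the lowest class

def lub(*classes):                      # the (+) operator: least upper bound
    return reduce(frozenset.union, classes, low)

def leq(a, b):                          # the <= ordering: set inclusion
    return a <= b

x_cls = frozenset({"x"})
y_cls = frozenset({"y"})

# The class of the expression x * y is x (+) y:
E_cls = lub(x_cls, y_cls)
assert E_cls == frozenset({"x", "y"})
assert leq(x_cls, E_cls) and leq(y_cls, E_cls)

high = lub(x_cls, y_cls)                # high: union of all variable classes
assert leq(low, high)
```

Under this modeling, every pair of classes has a least upper bound, so the lattice operations assumed by the proof rules hold by construction.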
2.2 Flows of Information
T h e classifications of variables change as a result of executing s t a t e m e n t s . As
described in [3, 4], there are two t y p e s of information flow t h a t can occur in
sequential programs: 1
(1) direct flows resulting f r o m a s s i g n m e n t and IO s t a t e m e n t s ;
(2) indirect flows resulting f r o m conditional s t a t e m e n t s , such as i f and w h i l e .
In parallel programs, there is an additional source of indirect flow resulting f r o m
process synchronization.
To illustrate these flows, consider the following statements:
(1) assignment: x := E
(2) alternation: if B then S1 else S2
(3) iteration: while B do S
(4) synchronization: wait(sem) and signal(sem)
1 Covert flows [11] are being disregarded for now because they cannot be represented within a program; they are discussed and related to this approach in Section 5.2.
The assignment statement causes a direct flow of information from E to x; hence it causes the class of x to change. The new class of x is the least upper bound of the class of E and the class of any indirect flow that affects execution of the assignment. There are two subtypes of indirect flows: local flows within statements and global flows between statements. Alternation and iteration both cause a local flow of information from the condition B to the body of the statement. For example, in the statement
if x = 0 then y := 0 else y := 1
y is set to zero if and only if x is zero; hence y depends on x. In general, statements S1 and S2 both obtain information about B. The same type of flow occurs in iterative statements; the body S of a while loop receives information about the condition B. When alternative or iterative statements are nested, every nested statement obtains information from every condition controlling the nesting.
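The local flow in the if example above can be traced mechanically. The following sketch uses the powerset scheme of Section 2.1 with classes as frozensets (a modeling assumption made for illustration) and shows y acquiring the class of x:

```python
# Tracing the indirect flow in: if x = 0 then y := 0 else y := 1
# (sketch: a class is a frozenset of the variables it carries information about).
low = frozenset()
cls = {"x": frozenset({"x"}), "y": low}

local = low
# Entering the if statement, local picks up the class of the condition x = 0:
local = local | cls["x"]
# Either branch assigns a constant (class low) to y, so y's new class is the
# constant's class joined with local and global (global is low in this fragment):
cls["y"] = low | local | low
assert cls["y"] == frozenset({"x"})    # y now carries information about x
```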
Iteration can also cause a global flow from the condition to all subsequent statements. For example, the statement sequence
x := 1;
while y = 0 do "skip";
x := 0
causes x to be set to zero if and only if y is zero. This is because x := 0 is executed conditionally, depending on whether the loop terminates.2 In general, the information in the expression y = 0 flows to all statements that can be executed after the loop. Unless one assumes or can prove that all loops terminate, this type of flow must be accommodated in proofs.
The other source of global flow is process synchronization. If one process executes
if x = 0 then signal(sem)
and another executes
y := 1;
wait(sem);
y := 0
where sem is a semaphore with initial value zero, the final value of y indicates whether or not x is zero. This results from the fact that when a process blocks, it depends on another process to be awakened. All statements executed after the wait in the blocked process potentially receive information from the signaler. Although the above program fragment permanently blocks the second process if x is not zero, a global flow due to process synchronization can occur even if all processes are known to terminate, as will be shown.
2 It can be argued with some justification that nontermination causes a covert rather than an indirect flow since one needs to know something about "reasonable" execution time to conclude that the program is in a perpetual loop [4]. However, we consider nontermination of a loop to cause an indirect flow because it leads to two possible execution sequences in which the values, and hence the classes, of the program variables may differ.
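The synchronization example can be replayed concretely. The sketch below collapses the two processes into one function (an assumption made purely to expose the dependence; it is not a faithful scheduler) and shows that y's final value reveals whether x is zero:

```python
# Sketch: the final value of y reveals whether x is zero, even though
# y is never assigned an expression involving x.
def run(x):
    # process 1:  if x = 0 then signal(sem)
    signaled = (x == 0)
    # process 2:  y := 1; wait(sem); y := 0
    y = 1
    if signaled:          # wait(sem) completes only if process 1 signaled;
        y = 0             # otherwise process 2 remains blocked with y = 1
    return y

assert run(0) == 0        # x = 0: process 2 proceeds past the wait
assert run(7) == 1        # x != 0: process 2 blocks; y stays 1
```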
2.3 Flow Proofs
For each variable x in a given program, let _x be a new variable whose value is the class of x. In addition, let local and global be two special certification variables. Local contains the class of the information available as a result of indirect flow within a statement; global contains the class of the information that may flow between statements. The information state of the given program consists of all the _x, local, and global.3 To represent restrictions on the set of possible states, assertions are used to indicate relationships between the state variables and classes. For example, the assertion {_x ≤ _y} represents the set of states where the classification of x is no more sensitive than that of y.
The information state of a program changes as a result of executing statements. Let P and Q be assertions about the information state and let S be a statement. The notation
{P} S {Q}
means that if P is true before execution of S, then Q is true after execution of S, provided that S terminates. This is the same notation used in correctness proofs; the difference is that P and Q refer to classes rather than values. To develop a flow proof of {P} S {Q}, a deductive logic that describes the information flow semantics of statements is needed. The remainder of the paper presents and analyzes a deductive system for information flow that is similar to Hoare's deductive system for functional correctness [8].
3. PROOF RULES FOR SEQUENTIAL PROGRAMS
The information flow semantics of sequential programs have been considered previously in [3, 4, 10, 13, 16]. This section presents an alternative, axiomatic definition. In particular, proof rules for sequential programs containing assignment, alternation, iteration, composition, and procedure call statements are described, and their use is illustrated by an example. Throughout, P and Q are assertions about the information state, and P[x ← y] is the assertion P with every free occurrence of x syntactically replaced by y. The notation P ⊢ Q means that Q can be derived from P; the notation
A1, ..., An
-----------
B
means that if logical statements A1, ..., An are true, then so is B. Occasionally, assertions of the form {V, L, G} are used where the comma means conjunction (and), V is an assertion about the classes of program variables, L is an assertion about the class of auxiliary variable local, G is an assertion about the class of auxiliary variable global, and local and global only appear in L and G, respectively.
3 The components of the information state are similar to what Jones and Lipton called surveillance variables [10]. The chief difference is that the information state contains two extra certification variables, local and global, rather than just one for the program counter; this distinguishes between the two different types of indirect flow. The advantage of this approach is that it allows the value of local to be decreased when conditional statements terminate.
3.1 Assignment
The assignment statement is the only statement being considered that can change the classification of a variable (the effect of IO statements is similar and is discussed in [21]). In the simple assignment statement x := E, x receives information from three sources: E, local, and global. (The ways in which local and global change are discussed shortly.) Let P be an assertion that is to be true after executing x := E. The proof rule for assignment then is
{P[_x ← _E ⊕ local ⊕ global]} x := E {P}
Array element assignment of the form A[i] := E is similar to simple assignment but has two additional complications. First, the class of an array A must be at least as great as that of any of its elements since assigning to A[i] cannot lower the classification of A. Second, information from i also flows into A as a result of assigning to A[i]; for example, if A is initially all zeros and A[i] := 1 is executed, the value of i can be determined by searching A. Given these considerations, the proof rule for array assignment is
{P[_A ← _A ⊕ _i ⊕ _E ⊕ local ⊕ global]} A[i] := E {P}
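Both assignment rules can also be read forward as updates to a class-valued state. The sketch below assumes the powerset scheme; the helper names (`cls_of`, `assign`, `assign_elem`) are invented for illustration:

```python
# Forward reading of the two assignment flow rules (a sketch, not the
# paper's backward-substitution form). Classes are frozensets, (+) is union.
low = frozenset()

def cls_of(cls, expr_vars):                 # class of an expression: lub of parts
    return frozenset().union(*[cls[v] for v in expr_vars])

def assign(cls, x, expr_vars, local=low, glob=low):
    # x := E  gives  x's class = E (+) local (+) global
    cls[x] = cls_of(cls, expr_vars) | local | glob

def assign_elem(cls, a, i, expr_vars, local=low, glob=low):
    # A[i] := E  gives  A's class = A (+) i (+) E (+) local (+) global
    cls[a] = cls[a] | cls[i] | cls_of(cls, expr_vars) | local | glob

cls = {"x": low, "y": frozenset({"y"}), "A": low, "i": frozenset({"i"})}
assign(cls, "x", ["y"])                     # x := y
assert cls["x"] == frozenset({"y"})
assign_elem(cls, "A", "i", [])              # A[i] := 0: the index i leaks into A
assert cls["A"] == frozenset({"i"})
```

Note that `assign` may lower a variable's class (assignment overwrites), while `assign_elem` only joins onto the array's class, matching the observation that assigning to A[i] cannot lower the classification of A.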
3.2 Alternation
The effect of the statement
if B then S1 else S2
is to make the information in B available to S1 and S2. In correctness logics, one has a rule of inference of the form:
{P ∧ B} S1 {Q}, {P ∧ ¬B} S2 {Q}
--------------------------------
{P} if B then S1 else S2 {Q}
which shows that B is included in the preconditions of both hypotheses of the rule. Analogously, a flow logic must ensure that the class of B is included in the preconditions of the hypotheses of the rule for alternation. The variable local contains this class; consequently, within the alternative statement, _B must be "added" (using ⊕) to local. Let V and V' be assertions about the classes of program variables; let L and L' be assertions about local; and let G and G' be assertions about global. The flow rule for alternation then is
{V, L', G} S1 {V', L', G'},  {V, L', G} S2 {V', L', G'},
V, L, G ⊢ L'[local ← local ⊕ _B]
--------------------------------------------------------
{V, L, G} if B then S1 else S2 {V', L, G'}
If the local flow initially satisfies L, and if L' (after substituting local ⊕ _B for local) can be derived from the initial state, then L' can be used in the proofs of the hypotheses. After the alternation statement, the local flow once again satisfies L although V may have changed to V' and G may have changed to G'.
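Read forward, the rule raises local by _B inside the branches, joins the branch outcomes, and restores local afterward. A sketch under the powerset scheme (the function names are illustrative assumptions, not the paper's):

```python
# Forward sketch of the alternation flow rule.
low = frozenset()

def if_flow(cls, b_vars, then_stmts, else_stmts, local=low, glob=low):
    b_cls = frozenset().union(*[cls[v] for v in b_vars])
    inner = local | b_cls                  # local <- local (+) B inside branches
    t_cls, e_cls = dict(cls), dict(cls)
    for s in then_stmts: s(t_cls, inner, glob)
    for s in else_stmts: s(e_cls, inner, glob)
    for v in cls:                          # join: either branch may have run
        cls[v] = t_cls[v] | e_cls[v]
    # after the if statement, local reverts to its value before the statement

# if x = 0 then y := 0 else y := 1   (both branches assign a constant)
def y_const(c, local, glob):
    c["y"] = low | local | glob            # assignment rule for y := constant
cls = {"x": frozenset({"x"}), "y": low}
if_flow(cls, ["x"], [y_const], [y_const])
assert cls["y"] == frozenset({"x"})        # y records the indirect flow from x
```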
3.3 Iteration
Consider the following statement:
while B do S
To motivate the flow rule for while, consider its correctness rule:
{P ∧ B} S {P}
-------------------------
{P} while B do S {P ∧ ¬B}
This rule states that P must be invariant over execution of S, that B can be used in the proof {P ∧ B} S {P}, and that ¬B is true after the loop terminates (if it does).
The flow rule for while loops has three analogous parts. First, an assertion must be invariant over the execution of S. Second, the flow proof of the body S must take account of a potential increase in the local flow due to B. Third, execution of statements after the loop is contingent on termination of the loop; hence the class of B flows into global. Let V be an assertion about the classes of program variables; let L and L' be assertions about local; and let G and G' be assertions about global. The flow rule for while then is
{V, L', G} S {V, L', G},
V, L, G ⊢ L'[local ← local ⊕ _B],
V, L, G ⊢ G'[global ← global ⊕ local ⊕ _B]
-------------------------------------------
{V, L, G} while B do S {V, L, G'}
Any flow caused by statement S is captured by the loop invariant {V, L', G}. Within S, the local flow is L' (derived from the initial state in the same way as for alternation) and the global flow is G (which must be at least as great as the global flow resulting from S). After the while statement the local flow is again L, but the global flow is changed to G'. Note that the global flow after the loop is at least as great as that before the loop and that it is affected by both B and local; local affects global because the while loop may itself have been nested within a conditional statement.
It should be noted that the above rule assumes that while may not terminate. If, however, while is assumed to terminate, is proven to terminate (see Section 6), or is replaced by an iteration statement that always terminates, then global will not increase.
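Read forward, the while rule can be sketched as a fixpoint computation: the body's flow effect is iterated until the classes stop changing (the sketch simply assumes this happens), with local raised by _B inside the body, and B and local folded into global afterward. The names below are illustrative assumptions:

```python
# Forward sketch of the while flow rule under the powerset scheme.
low = frozenset()

def while_flow(cls, b_vars, body, local=low, glob=low):
    while True:
        b_cls = frozenset().union(*[cls[v] for v in b_vars])
        before = dict(cls)
        body(cls, local | b_cls, glob)   # body runs with local raised by B
        if cls == before:
            break                        # classes stable: an invariant V holds
    b_cls = frozenset().union(*[cls[v] for v in b_vars])
    return glob | local | b_cls          # global <- global (+) local (+) B

# x := 1; while y = 0 do "skip"; x := 0
cls = {"x": low, "y": frozenset({"y"})}
cls["x"] = low                           # x := 1 (a constant, class low)
glob = while_flow(cls, ["y"], lambda c, l, g: None)   # empty loop body
cls["x"] = low | low | glob              # x := 0 executed after the loop
assert cls["x"] == frozenset({"y"})      # x records whether the loop terminated
```

This reproduces the global flow of Section 2.2: x := 0 picks up the class of y even though the loop body is empty.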
3.4 Composition and Consequence
In most programs, compound statements of the form
begin S1; ...; SN end
are also needed. The flow rule for composition is identical to the corresponding correctness rule in which the postcondition of one statement is the precondition of the next. The rule is as follows:
{Pi} Si {Pi+1},  1 ≤ i ≤ N
---------------------------------
{P1} begin S1; ...; SN end {PN+1}
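Read forward, composition is simply function composition of the statements' effects on the class state. A brief sketch (helper names are illustrative):

```python
# Forward sketch of the composition rule: each statement's post-state is the
# next statement's pre-state.
low = frozenset()

def seq(*stmts):
    def composed(cls):
        for s in stmts:
            s(cls)
        return cls
    return composed

s1 = lambda c: c.update(x=c["y"])             # x := y
s2 = lambda c: c.update(z=c["x"] | c["y"])    # z := x * y
cls = seq(s1, s2)({"x": low, "y": frozenset({"y"}), "z": low})
assert cls["z"] == frozenset({"y"})           # z's class reflects the flow via x
```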
To develop flow proofs, especially for alternative and iterative statements, a rule of consequence is also needed to allow new proofs to be derived from existing ones. The rule of consequence for flow proofs is also identical to that for correctness proofs:
{P'} S {Q'},  P ⊢ P',  Q' ⊢ Q
------------------------------
{P} S {Q}
3.5 Procedures
As the final sequential programming construct, consider procedures of the form:
procedure pname(x; var y);
  <variable declarations>;
  S
end pname
where x is a sequence of value parameters, y is a sequence of value/result parameters, S is a statement, and no variable appears in both x and y.4 The information flow rule for a call of the form call pname(a; b) is again similar to the procedure call rule for correctness: there is a substitution of actual parameters for formal parameters, and, given a flow proof {P} S {Q} of the body of the procedure, Q with appropriate substitutions is true after the call provided that P with appropriate substitutions is true before the call. In addition, there are potential flows from local and global into the procedure, and a potential increase in the global flow in the calling program resulting from the fact that the procedure may not terminate.
A flow proof rule for a procedure must indicate its use of parameters and local variables and its contribution, if any, to global flow.5 Initially, each local procedure variable has classification low since it contains no information. For each value/result parameter yi, let _di be a new, unique variable used to indicate an initial bound on _yi. In addition, let _l and _g be new, unique variables used as placeholders for bounds on the values of local and global present at the point of a call. Then a procedure with body S has a flow proof of the form
{P, _yi ≤ _di, local ≤ _l, global ≤ _g} S {Q}
where P states other assumptions, for example, about variables local to the procedure, and Q is the consequence of executing S. (Q should not refer to any of the variables local to the procedure since they do not exist once the procedure returns.)
In order to develop a proof of a specific call, appropriate class expressions for the _di, _l, and _g must be found. In addition, P must be true at the point of the call. The effect of the call is to produce a new state Q with appropriate substitutions for the classes of the parameters and indirect flow variables. Accordingly, the call
4 It is assumed that there is no aliasing or recursion, that there are no own variables, that all value/result parameter names are unique, and that no value parameters are modified by S.
5 The information flow semantics of procedures have been previously described in [4, 10, 13, 16]. The proof rules presented here are an alternative definition of the semantics. As will be shown, the rules are powerful enough to handle the two important classes of procedures described in [4]: those that accept arbitrary classes of input, such as library procedures, and those that produce output of low classification from input of higher classification.
begin
  first := 1;
  while first < n do
    begin
      maxindex := first; i := first + 1;
      while i <= n do
        begin
          if A[i] > A[maxindex] then maxindex := i;
          i := i + 1
        end;
      temp := A[first]; A[first] := A[maxindex];
      A[maxindex] := temp;
      first := first + 1
    end;
  temp := 0; maxindex := 0; i := 0
end
Fig. 1. Successive maxima sorting program.
statement has the proof rule:
{P, _yi ≤ _di, local ≤ _l, global ≤ _g} S {Q}
----------------------------------------------
{P[x ← a, y ← b], _bi ≤ ci, local ≤ l, global ≤ g, R}
call pname(a; b)
{Q[x ← a, y ← b, _di ← ci, _l ← l, _g ← g], R}
In this rule ci, l, and g are actual bounds on the classes of the parameters and indirect flows at the point of the call; it is assumed that neither they nor R refer to program variables that can be changed by S, the body of the procedure.
The above formulation of the rule for procedures uses _di, _l, and _g as placeholders in the proof of a procedure body. This allows the proof of the body to be developed once and then used as the hypothesis for any application of the proof rule for call. For example, given
procedure zero(var f: integer);
  f := 0
end zero
the following proof can be developed:
{_f ≤ _d, local ≤ _l, global ≤ _g}
f := 0
{_f ≤ _l ⊕ _g, local ≤ _l, global ≤ _g}
This allows any call of the form call zero(a) to be proved to result in _a ≤ l ⊕ g where local ≤ l and global ≤ g at the point of the call (e.g., if l = g = low, then _a ≤ low after the call independent of the value of a prior to the call).
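The zero example shows why the placeholders pay off: the body's proof is a function of _l and _g, so each call merely instantiates them. A sketch under the powerset scheme (the caller context for the second call is invented for illustration):

```python
# Sketch: instantiating the parameterized proof of procedure zero.
low = frozenset()

def zero_post(l_bound, g_bound):
    # body proof: {f <= d, local <= l, global <= g} f := 0 {f <= l (+) g};
    # the placeholders l and g are all a call site needs to supply
    return l_bound | g_bound

a_cls = zero_post(low, low)          # call zero(a) with local = global = low
assert a_cls == low                  # a is forced down to low, whatever it was

secret = frozenset({"secret"})       # hypothetical caller context
b_cls = zero_post(secret, low)       # call zero(b) inside "if secret = 0 then ..."
assert b_cls == secret               # b retains the indirect flow from secret
```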
3.6 Example
The use of the above proof rules is now illustrated by a series of steps that lead to a proof of the program shown in Figure 1, which sorts an array A[1:n] using the successive maxima sorting algorithm. The program successively finds the
largest element of A (A[maxindex]) and swaps it with the first of the remaining, unsorted elements (A[first]). Variable i is used as a counter and temp is used for swapping.
Step 3.1. If local ≤ low and global ≤ low and constants are of class low, the following flow proof can be derived using the assignment rule and the rule of consequence:
{local ≤ low, global ≤ low}
first := 1
{_first ≤ low, local ≤ low, global ≤ low}
Proof outlines for other assignments are derived in the same way.
Step 3.2. Consider the statement
if A[i] > A[maxindex] then maxindex := i
Within the body of the if statement, the local flow increases by the class of the expression A[i] > A[maxindex], namely, _A ⊕ _i ⊕ _maxindex. Therefore the assignment of i to maxindex causes _maxindex to be potentially increased by _A ⊕ _i ⊕ _maxindex, which means that _maxindex cannot decrease. If c is an upper bound on _A, it can be proved that
{_A ≤ c, _i ≤ low, _maxindex ≤ low, local ≤ low, global ≤ low}
if A[i] > A[maxindex]
  {_A ≤ c, _i ≤ low, _maxindex ≤ low, local ≤ c, global ≤ low}
  then maxindex := i
  {_A ≤ c, _i ≤ low, _maxindex ≤ c, local ≤ c, global ≤ low}
{_A ≤ c, _i ≤ low, _maxindex ≤ c, local ≤ low, global ≤ low}
using the rules for assignment, alternation, and consequence.
Step 3.3. Now consider the portion of the sort program that finds the largest element of subarray A[i:n]:
while i <= n do
  begin if A[i] > A[maxindex] then maxindex := i;
    i := i + 1
  end
Let V = {_A ≤ c ⊕ _n, _i ≤ _n, _maxindex ≤ c ⊕ _n}. The following can then be derived using the previous example and the rules for assignment, iteration, composition, and consequence:
{V, local ≤ low, global ≤ low}
while i <= n do
  {V, local ≤ _n, global ≤ low}
  begin if A[i] > A[maxindex] then maxindex := i;
    i := i + 1 end
  {V, local ≤ _n, global ≤ low}
{V, local ≤ low, global ≤ _n}
Within the loop, local increases by _i ⊕ _n, which implies local ≤ _n. V is invariant over both statements in the loop. It is invariant over the if statement as can be seen by appropriate changes to Step 3.2; it is also invariant over the assignment since _i ⊕ local ⊕ global ≤ _n. In the assertion after the loop, local returns to low but global is increased by local ⊕ _i ⊕ _n, which implies global ≤ _n.
In the sort program, the above loop is embedded within another loop. Consequently, the local flow present at the start of the inner loop is actually _first ⊕ _n since first < n is the condition of the outer loop. In addition, the global flow present at the start of the loop is actually _n since the global flow produced by the inner loop must be invariant throughout the body of the outer loop. Therefore, in a proof of the sort program, the proof outline for the inner loop is
{V, _first ≤ _n, local ≤ _n, global ≤ _n}
while i <= n do
  begin if A[i] > A[maxindex] then maxindex := i;
    i := i + 1 end
{V, _first ≤ _n, local ≤ _n, global ≤ _n}
Step 3.4. By applying the techniques and results of the previous examples, the following proof for the sort program can be developed:
{_A ≤ c, local ≤ low, global ≤ low}
"Sort Program of Figure 1"
{_temp ≤ _n, _maxindex ≤ _n, _i ≤ _n, _first ≤ _n,
 _A ≤ c ⊕ _n, local ≤ low, global ≤ _n}
In the sort program, there is a direct flow of information from A to temp, maxindex, and i within the bodies of the while loops. However, this flow is negated by unconditionally assigning zero to these variables after the array is sorted. These variables may contain information about n, though, since termination of each loop depends on the value of n. Stated differently, this program "remembers" information about n but not about A. Had for loops been used instead of while loops, the program could have been proved memoryless [11], meaning that not even information about n could be retained in the local variables.
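The final classes claimed in Step 3.4 can be checked with a forward evaluation of Figure 1 under the powerset scheme. The sketch below is a hand-rolled simulation, not the paper's backward-substitution proof system; the fixed three-pass "fixpoint" loop and all helper names are assumptions of the sketch:

```python
# Forward flow evaluation of the sort program of Figure 1.
low = frozenset()
N, A0 = frozenset({"n"}), frozenset({"A"})
cls = {"A": A0, "n": N, "first": low, "i": low, "maxindex": low, "temp": low}

def asg(x, e_cls, local, glob):
    cls[x] = e_cls | local | glob               # x := E rule (may lower a class)

def arr_asg(e_cls, i_cls, local, glob):
    cls["A"] |= i_cls | e_cls | local | glob    # A[i] := E rule (only grows)

glob = low
for _ in range(3):                              # iterate outer body to stability
    local = cls["first"] | N                    # outer condition first < n
    asg("maxindex", cls["first"], local, glob)
    asg("i", cls["first"], local, glob)         # i := first + 1
    inner_local = local | cls["i"] | N          # inner condition i <= n
    cond = cls["A"] | cls["i"] | cls["maxindex"]   # class of A[i] > A[maxindex]
    asg("maxindex", cls["i"], inner_local | cond, glob)
    asg("i", cls["i"], inner_local, glob)       # i := i + 1
    glob |= inner_local                         # inner while may not terminate
    asg("temp", cls["A"] | cls["first"], local, glob)
    arr_asg(cls["A"] | cls["maxindex"], cls["first"], local, glob)
    arr_asg(cls["temp"], cls["maxindex"], local, glob)
    asg("first", cls["first"], local, glob)     # first := first + 1
glob |= cls["first"] | N                        # outer while may not terminate
for v in ("temp", "maxindex", "i"):             # the final unconditional zeroing
    asg(v, low, low, glob)

assert all(cls[v] <= N for v in ("temp", "maxindex", "i", "first"))
assert cls["A"] == A0 | N                       # A ends with class c (+) n
```

The two final assertions match Step 3.4: the zeroed locals retain only information about n (via global), while A gains only the class of n.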
Step 3.5. As listed in Figure 1, the sort program is a distinct program. Suppose instead that it were a procedure of the form:
procedure sort(n: integer; var A: array 1..n of T);
  <declarations>
  <body as in Figure 1>
end sort
where T is some type.
The proof outlined in the previous step can be expanded to show:
{P, _A ≤ _d, local ≤ _l, global ≤ _g} S {Q}
where P = {_l ≤ _n, _g ≤ _n}, and Q = {_A ≤ _d ⊕ _n, local ≤ _l, global ≤ _n}. (Recall that Q does not refer to variables local to sort.) For a specific call, sort(n'; A'), suppose initially _A' ≤ c, local ≤ l, global ≤ g (where c, l, and g do not refer to any variables modified by S), and l ⊕ g ≤ _n'. Using the call rule, the following proof can be produced:
{_A' ≤ c, local ≤ l, global ≤ g}
call sort(n'; A')
{_A' ≤ c ⊕ _n', local ≤ l, global ≤ g ⊕ _n'}
Note that the above proof of the body of sort only assumes that _l ≤ _n and _g ≤ _n. It can therefore be used in the proof of any call statement having a local or global flow at the time of the call of at most _n.
4. PROOF RULES FOR PARALLEL PROGRAMS
In sequential programs, information is transmitted either directly by assignments
or indirectly by conditional execution. The same types of flow arise in parallel
programs. Processes can transmit information either directly by shared variables
and message passing or indirectly by synchronization. Flow proof rules for a basic
set of parallel programming constructs are now presented.
4.1 Concurrent Execution
Consider the cobegin statement [5], which has the form:

cobegin
  S1 ‖ ... ‖ SN
coend

and means that each of the statements S1 through SN is executed in parallel.
Executing processes concurrently has the same effect as executing them sequentially provided their (correctness) proofs are interference free. The same notion
applies to information flow; as long as the flow proofs of two or more processes
are interference free, executing the processes concurrently results in the same
flows as executing them sequentially. However, the definition of interference free
must be changed slightly from that used for correctness proofs [18] because the
certification variables local and global exist separately for each process (the fact
that one process is executing an if statement or while loop has no effect on flows
in another process).
Definition. Given a flow proof {V, L, G} S {V', L, G'} and a statement T with
precondition pre(T), T does not interfere with the proof provided that

(1) {V', pre(T)} T {V'};
(2) for any statement S' in S with precondition {U, L, G}, {U, pre(T)} T {U}.

This says that executing T does not invalidate either the result of executing S or
any precondition used in the proof {V, L, G} S {V', L, G'}.
Definition. Let T be an assignment statement within process Si. The flow
proofs {P1} S1 {Q1}, ..., {PN} SN {QN} are interference free if for all j, j ≠ i, T does
not interfere with {Pj} Sj {Qj}.
The proof rule for the cobegin statement then is

{Vi, L, G} Si {Vi', L, G'}, 1 ≤ i ≤ N, are interference free
------------------------------------------------------------
{V1, ..., VN, L, G}
cobegin S1 ‖ ... ‖ SN coend
{V1', ..., VN', L, G'}
Since the local flow only changes within a statement, the final local flow is the
same as the initial one. However, the global flow can change; after the cobegin
statement it is at least as great as the global flow produced by any process.
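This effect on the certification variables can be sketched with a small set-union lattice (union plays the role of ⊕; the function name is ours):

```python
def join(*classes):
    """Least upper bound (the paper's ⊕) over set-valued classes."""
    return frozenset().union(*classes)

def cobegin_flow(local_before, globals_after_each):
    """Flow effect of cobegin S1 || ... || SN coend: the local flow is
    unchanged (it only varies inside a statement), while the final
    global flow is at least the join of the global flows produced by
    the individual processes."""
    return local_before, join(*globals_after_each)

low, C = frozenset(), frozenset({"C"})
# Three processes finish with globals low, C, and low respectively:
local_after, global_after = cobegin_flow(low, [low, C, low])
```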
4.2 Semaphores

For process synchronization, assume that semaphores [5] are available. Abstractly, a semaphore, sem, is a nonnegative integer that is manipulated by two
indivisible operations, wait and signal, which can be defined as follows:*

wait(sem):   while sem = 0 do skip; sem := sem - 1
signal(sem): sem := sem + 1
Both wait and signal alter the value of sem; therefore they both cause a flow
like that due to assignment statements. Although the signal operation completes
immediately, wait can cause a process to delay, as shown by the presence of the
while loop in its definition. Because completion of wait is dependent on another
process executing a signal, information can be transmitted by means of the
semaphore from the signaling to the waiting process. All subsequently altered
variables in the waiting process are affected, so the effect of wait is to increase
the global flow in the process. Given these considerations, the proof rules for
wait and signal are
{P[sem ← sem ⊕ local ⊕ global,
   global ← sem ⊕ local ⊕ global]}
wait(sem)
{P}

{P[sem ← sem ⊕ local ⊕ global]}
signal(sem)
{P}
In flow proofs, the class of a semaphore must be at least as great as the least
upper bound of all the local and global flows that affect its value. Otherwise, the
processes that use it will not have interference-free proofs.
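Read forward, the two rules update classes as in this hedged Python sketch (the function names are ours; set union plays the role of ⊕):

```python
def join(*classes):
    return frozenset().union(*classes)

def wait_flow(sem, local, glob):
    """wait(sem): completion depends on another process's signal, so
    both sem's class and the waiting process's global flow rise to
    sem ⊕ local ⊕ global."""
    s = join(sem, local, glob)
    return s, local, s          # (sem', local, global')

def signal_flow(sem, local, glob):
    """signal(sem): alters sem like an assignment; sem's class rises to
    sem ⊕ local ⊕ global, but the global flow is unchanged."""
    return join(sem, local, glob), local, glob

low, C = frozenset(), frozenset({"C"})
# A signal executed under a conditional flow C taints the semaphore:
sem, _, _ = signal_flow(low, C, low)
# A later wait on that semaphore raises the waiter's global flow:
sem, _, glob = wait_flow(sem, low, low)
```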
As an alternative to semaphores, Reed and Kanodia have recently proposed
the use of eventcounts [20]. An eventcount is a nonnegative integer manipulated
by read, advance, and await operations. In terms of information flow, read is
like an assignment; advance is like a semaphore signal; and await is like a
semaphore wait, with one important difference. Whereas the wait statement
causes information to flow into as well as from a semaphore, an await statement
only causes information to flow from an eventcount. In terms of a proof rule, this
* Other definitions for semaphore, wait, and signal have the same effect upon information flow.
means that await(E) does not alter the class of eventcount E. Unlike semaphores,
eventcounts cannot be used to transmit information from one process that only
executes await or read to another.
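Under the same set-union lattice, the contrast between a semaphore wait and an eventcount await can be sketched as follows (the function names are ours):

```python
def join(*classes):
    return frozenset().union(*classes)

def advance_flow(ev, local, glob):
    """advance(E) acts like a semaphore signal: the eventcount's
    class absorbs the local and global flows."""
    return join(ev, local, glob), glob   # (ev', global')

def await_flow(ev, local, glob):
    """await(E) only reads E: the waiter's global flow rises, but
    E's class is unchanged, so a process that only awaits cannot
    transmit information through E."""
    return ev, join(glob, ev, local)     # (ev', global')

low, C = frozenset(), frozenset({"C"})
ev, _ = advance_flow(low, C, low)       # conditional advance taints E
ev2, glob = await_flow(ev, low, low)    # waiter's global rises; E unchanged
```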
4.3 Other Synchronization Mechanisms
Information flow rules for other synchronization mechanisms can also be developed using the techniques described above. For example, the information flow
resulting from monitors [1, 9] is a combination of the effects of procedures,
synchronization (which is like that due to semaphores), and the use of permanent
monitor variables (which requires an invariant assertion). Proof rules for monitors, message passing, and critical regions are described in [21], which also
develops rules for classes and case statements and contains a flow proof of a fairly
large Concurrent Pascal [1] program. The main concepts needed to produce every
rule are the information classification scheme and the certification variables
local and global. In fact, using local and global to represent indirect flows is a
key aspect of the approach presented in this paper.
4.4 Example
As an illustration of the information flow in parallel programs that can result
from synchronization, consider the program shown in Figure 2. The program
contains four processes, which interact by using semaphores and one global
variable. Even though the sender process does not access the global variable, it
transmits information derived from its local variable x to the receiver process as
a result of the collusion of the helpers. Sender does so by using semaphores as
timing signals. If x is positive, sender signals pos then nonpos; otherwise sender
signals nonpos then pos. Sender therefore orders the helpers' assignments to
value, which in turn controls the result read by the receiver process.
This program is one example of the subtle ways in which synchronization
mechanisms can be used to transmit information; it can in fact be expanded to
transmit the exact value of x from sender to receiver (for example, by encoding
x as a stream of bits). Note also that the program terminates. Global flows due to
synchronization occur independent of deadlock or infinite loops.
To prove formally that the program in Figure 2 transmits information from x
to result, flow proofs for each process are needed and the proofs must be shown
to be interference free.
Flow proofs for each process are developed by using the techniques of Section
3 as well as the proof rules for semaphores. Because the sender process signals
the pos and nonpos semaphores conditional on the value of x, information from
x flows into both semaphores. Assuming that x ≤ C for some class C, the classes
of pos and nonpos also become as great as C. Therefore, the global flow in each
helper process increases by C since it waits for pos or nonpos. These considerations lead to the following flow proof outline for helper 1 (helper 2 is similar):
{pos ≤ C, take ≤ C, value ≤ C, synch ≤ C, local ≤ low, global ≤ low}
wait(pos);
value := 1;
signal(take);
signal(synch);
{pos ≤ C, take ≤ C, value ≤ C, synch ≤ C, local ≤ low, global ≤ C}
begin
  var pos, nonpos, synch:
        semaphore initial (0, 0, 0);
      take: semaphore initial (0);
      value: 0 .. 1;
cobegin
  sender: begin var x: integer;
      read data into x;
      if x > 0
        then begin
          signal(pos); wait(synch);
          signal(nonpos) end
        else begin
          signal(nonpos); wait(synch);
          signal(pos) end;
      wait(synch)
    end
‖
  helper 1: begin wait(pos);
      value := 1; signal(take);
      signal(synch) end
‖
  helper 2: begin wait(nonpos);
      value := 0; signal(take);
      signal(synch) end
‖
  receiver: begin var result: 0 .. 1;
      wait(take); wait(take);
      result := value;
      if result = 1
        then print ("nonpositive")
        else print ("positive")
    end
coend
end

Fig. 2. Program with flows due to synchronization.
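Under the stated semaphore semantics, the figure's covert channel can be reproduced with ordinary counting semaphores. This Python sketch mirrors the figure; `run` is our wrapper, and it returns the string the receiver would print rather than printing it.

```python
import threading

def run(x):
    """Simulate Figure 2: the sign of x reaches `result` purely through
    the order of semaphore signals; sender never writes to value."""
    pos, nonpos, synch, take = (threading.Semaphore(0) for _ in range(4))
    state = {"value": 0, "result": None}

    def sender():
        if x > 0:
            pos.release(); synch.acquire(); nonpos.release()
        else:
            nonpos.release(); synch.acquire(); pos.release()
        synch.acquire()

    def helper1():
        pos.acquire(); state["value"] = 1
        take.release(); synch.release()

    def helper2():
        nonpos.acquire(); state["value"] = 0
        take.release(); synch.release()

    def receiver():
        take.acquire(); take.acquire()
        state["result"] = state["value"]

    threads = [threading.Thread(target=t)
               for t in (sender, helper1, helper2, receiver)]
    for t in threads: t.start()
    for t in threads: t.join()
    return "nonpositive" if state["result"] == 1 else "positive"
```

The synch handshake makes the two writes to value strictly ordered, so the helper scheduled last by sender determines the result deterministically.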
In the above proof outline, the initial assertion might appear to be weaker than
necessary because it asserts that each semaphore is bounded by
C instead of low. The reason is that if the assertion {pos ≤ low, take ≤
low, value ≤ low, local ≤ low, global ≤ low} were used instead, it would not be
possible to prove that the proofs of the processes are interference free. The same
observation applies to the proofs for the other three processes.
The proofs for sender and receiver are similar to those for the helpers. The
reader is encouraged to develop these and then to show that the following is a
valid proof outline for the entire parallel program:

{pos ≤ low, nonpos ≤ low, synch ≤ low, take ≤ low, value ≤ low, local ≤ low, global ≤ low}
{pos ≤ C, nonpos ≤ C, synch ≤ C, take ≤ C, value ≤ C, local ≤ low, global ≤ low}
cobegin
  sender ‖ helper 1 ‖ helper 2 ‖ receiver
coend
{pos ≤ C, nonpos ≤ C, synch ≤ C, take ≤ C, value ≤ C, local ≤ low, global ≤ C}
The key to developing a flow proof for a set of processes is to ensure that the
proofs of the processes are interference free. For any parallel program, a flow
proof can be developed as follows. First, examine each process to determine how
its local variables and conditional flows affect the global variables and synchronization channels (semaphores, messages, monitors, etc.). Second, for each global
variable and synchronization channel, compute the least upper bound of the
information that flows into the variable or channel. Third, construct a flow proof
for each process using the classifications determined in the second step as the
initial classifications of the global variables and synchronization channels. Finally,
construct a proof for the set of processes (i.e., the cobegin statement containing
them) and show that the proofs of the processes are interference free. By following
this approach, showing that the proofs are interference free is relatively easy
since the initial assertion used in each proof captures the greatest possible flow
from the processes into global variables and synchronization channels.†
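The second step of this recipe, applied to the program of Figure 2, amounts to a least-upper-bound computation. In this sketch, the per-channel contributions are read off the figure, and set union plays the role of ⊕:

```python
def join(*classes):
    return frozenset().union(*classes)

low, C = frozenset(), frozenset({"C"})

# Flows into each global variable and synchronization channel of
# Figure 2 (one entry per contributing process):
flows_into = {
    "pos":    [C],      # sender signals pos conditionally on x
    "nonpos": [C],      # likewise for nonpos
    "synch":  [C, C],   # each helper signals synch with global flow C
    "take":   [C, C],   # likewise for take
    "value":  [C, C],   # each helper assigns value with global flow C
}

# Step 2: the initial class of each channel is the least upper bound
# of everything that flows into it.
channel_class = {ch: join(*fs) for ch, fs in flows_into.items()}
```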
5. APPLICATIONS OF FLOW PROOFS
The main use of flow proofs is to certify flow policies. This section first shows
how to describe and to certify flow policies. It then discusses the relationship
between flow proofs and the confinement problem.
5.1 Certification of Information Policies
A flow policy is a specification of the acceptable classifications of the information
in variables. It is defined by means of an assertion about the information state
(e.g., {x ≤ y}). Two types of policies are of particular interest. A final value policy
is defined by an assertion that must be true when a program terminates; it states
the acceptable result of a computation. A high-water mark [23] policy is defined
by an assertion that must be true for each information state in a program; it is
used when intermediate results must be secure, for example, when a program
may not terminate.
Given a flow proof and a policy assertion, POLICY, a program can be certified
as follows. First, identify every assertion in the program's flow proof where the
policy must hold. Second, show that P ⊢ POLICY for every assertion P that
holds at the points identified in the first step. The only flow proof assertion of
interest in a final-value policy is the last one (the postcondition); all intermediate
assertions are of interest in a high-water mark policy. If the policy cannot be
certified, either a stronger flow proof or an alternative program must be considered.
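These two steps can be sketched as a mechanical check. In this illustrative Python, assertions and policies map variables to classes, and subset inclusion plays the role of ≤ (the names `certify` and `outline` are ours):

```python
def leq(a, b):
    """Class ordering a ≤ b in a set-union lattice."""
    return a <= b

def certify(assertions, policy, high_water=False):
    """Check a flow policy against a proof outline. Each assertion and
    the policy map variables to class bounds. A final-value policy
    inspects only the last assertion; a high-water mark policy
    inspects every assertion in the outline."""
    points = assertions if high_water else assertions[-1:]
    return all(leq(p[v], bound)
               for p in points for v, bound in policy.items())

low, C = frozenset(), frozenset({"C"})
# A two-point outline in the spirit of the receiver process:
outline = [{"x": C, "result": low},   # before result := value
           {"x": C, "result": C}]     # after result := value
```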
Two examples illustrate these concepts.
Example 5.1. In the sort program of Figure 1, a desired policy might be

{A ≤ c ⊕ n, temp ≤ n, maxindex ≤ n, i ≤ n}

where A[1 : n] is the array to be sorted and initially A ≤ c for some classification
c. This policy states that after A is sorted, information in A depends on A and n
alone and that no information about A is to be stored in the local variables temp,
maxindex, and i. The sort program flow proof outlined in Step 3.4 of Section 3.6
clearly shows that this final value policy holds. The program does not, however,
satisfy this assertion if it is used as a high-water mark policy since at times the
† With respect to information flow, the effects of semaphores, messages, and monitors are essentially
the same. As noted earlier, eventcounts lead to slightly less information flow.
local variables contain information from A, in particular within the bodies of the
while loops. Note also that it is not possible to satisfy the final value policy
{temp = maxindex = i = low} since the potential nontermination of the two
while loops produces a global flow from n. If for loops had been used or if the
while loops were proved to terminate (see Section 6), this stronger policy could be
certified.
Example 5.2. The parallel program in Figure 2 can be certified with respect to
the high-water mark policy {x ≤ C, result ≤ C} but not with respect to the policy
{x ≤ C, result ≤ low}. If the program were changed to replace the assignment
result := value by result := 0 in the receiver process, then the program could be
certified with respect to the final value policy {x ≤ C, result ≤ low} since the
program would always print "positive."
It should be noted that certification of a program is only meaningful if the flow
semantics assumed in its proof are preserved during execution. This requires that
the language be correctly implemented and that adequate run-time checks (e.g.,
on subscript bounds) be performed. A more detailed analysis of the certification
problem and a discussion of the possible approaches are given in [22].
5.2 The Confinement Problem
A module that performs a service is confined if it neither retains its input nor
transmits it to another program [11]. There are two ways in which a procedure
could leak information: by storage channels (i.e., program variables) or by covert
channels (e.g., its execution time). The flow logic presented here deals with
storage channels. Given a module such as a procedure, a flow proof indicates a
bound on the classifications of the variables used by the procedure. Consequently,
one can directly use a flow proof to check for confinement with respect to storage
channels.
Covert flows have been disregarded in the flow rules since they do not use
program variables. However, an upper bound on the potential covert flow resulting from timing channels, such as execution time, can be computed. Jones and
Lipton [10] have observed that execution time varies as a result of conditional
branching. Consequently, it is exactly the tests in a program (e.g., in if and while
statements) that potentially transmit information over timing channels. If a third
auxiliary variable, covert, is added to the proof rules and its class is increased
whenever a conditional flow occurs (namely, covert is set to covert ⊕ local ⊕
global in any proof rule that changes local or global), then covert encodes the
maximum potential covert leakage due to timing. Unfortunately, this tells very
little about most programs. Although Fenton [6] and Jones and Lipton [10] have
shown how to prevent covert flow due to running time, as Lipner has observed
[12], it is probably futile to attempt to eliminate covert channels.
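The covert bookkeeping just described is mechanical; a sketch under the same set-union lattice (the name `conditional_flow` is ours):

```python
def join(*classes):
    return frozenset().union(*classes)

def conditional_flow(local, glob, covert, test_class):
    """Sketch of the third certification variable: whenever a rule
    raises local (a conditional flow occurs), covert also rises to
    covert ⊕ local ⊕ global, bounding potential leakage over timing
    channels such as execution time."""
    local = join(local, test_class)        # e.g., entering if/while on a test of class C
    covert = join(covert, local, glob)     # covert := covert ⊕ local ⊕ global
    return local, glob, covert

low, C = frozenset(), frozenset({"C"})
local, glob, covert = conditional_flow(low, low, low, C)
```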
6. FLOW PROOFS AND CORRECTNESS PROOFS
The axiomatic logic for information flow presented here is similar to, and was in
fact motivated by, axiomatic logics for correctness [8, 18]. This leads one to
wonder to what extent the flow and correctness logics differ. To certify a program,
one must prove that variables are related only in ways authorized by a given flow
policy. Since a flow proof places an upper bound on the flows in a program, it can
be used readily for certification. If the proof cannot be used to certify the program,
one then tries to find a stronger proof that can be. By contrast, in a correctness
proof one normally proceeds in the other direction; that is, one is normally
interested only in those relations between variables that aid in proving the
correctness assertion. To use a correctness proof for certification, however, one
must state all relations between variables that could affect certification. Specifically, given a precondition and statement, the strongest possible postcondition
must be derived (a problem that is not solvable in general). For example, consider
the statement
if x ≥ 0 then y := 1 else y := -1

In order to capture the fact that x and y are related, we must use a postcondition
such as {(x ≥ 0 ⇒ y = 1) ∧ (x < 0 ⇒ y = -1)} in a proof of the above statement.
In parallel programs, there are additional difficulties in using correctness logics
for security certification. Many parallel programs, such as those found in operating systems, provide continuous service and are not intended to terminate;
others may willingly run the risk of deadlock to increase efficiency. The possibility
of nontermination and deadlock limits the utility of correctness techniques in
certifying information security since correctness assertions cannot explicitly indicate the occurrence of global flows.
Flow proofs are, however, weaker than correctness proofs because they assume
that all execution paths will be taken even if some cannot be taken. For example,
consider the program (also discussed in [2])

if b then y := x;
if ¬b then z := y

where b does not refer to x, y, or z. A flow proof for this program could show that
{y ≤ x ⊕ b, z ≤ y ⊕ b}, and hence {y ≤ x ⊕ b, z ≤ x ⊕ b}, but could not show that z does not
depend on x. This difficulty arises because flow proofs do not deal with variable
values (i.e., b above); they only deal with variable classes. Another consequence
of ignoring values in flow proofs is that the class of an array must be at least as
great as the class of any element because one cannot differentiate between A[i]
and A[j] without knowing the values of i and j.
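The conservatism of the flow rules on this two-statement program can be sketched as an abstract interpretation over classes (the names are ours; set union plays the role of ⊕):

```python
def join(*classes):
    return frozenset().union(*classes)

# Abstract flow interpretation of
#     if b then y := x;  if not b then z := y
# The flow rules assume every branch may execute, so z picks up
# x ⊕ b even though no single execution copies x into z.
def flow(x, y, z, b, low=frozenset()):
    local = join(low, b)        # conditional flow from the first test
    y = join(y, x, local)       # y := x inside "if b"
    local = join(low, b)        # conditional flow from the second test
    z = join(z, y, local)       # z := y inside "if not b"
    return y, z

low, X, B = frozenset(), frozenset({"x"}), frozenset({"b"})
y, z = flow(X, low, low, B)     # concludes y ≤ x ⊕ b and z ≤ x ⊕ b
```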
These difficulties can be overcome by using a combined proof system that
allows assertions about both classes and values. Each state in the combined
system is the cross product of an information state and a value state; the proof
rules in the combined system describe how statements transform the composite
state. The rules for sequential statements in the combined system are listed in
Figure 3. (Combined proof rules for parallel programming statements are similar.)
Two examples illustrate how the combined logic can be used to prove a stronger
assertion than is possible using the flow logic alone.
Example 6.1. Consider the program introduced two paragraphs ago. In the
combined logic it can be proved that

{y ≤ low, z ≤ low, local ≤ low, global ≤ low}
if b then y := x;
if ¬b then z := y
{b ⇒ (y ≤ x ⊕ b, z ≤ low),
 ¬b ⇒ (z ≤ b, y ≤ low)}
simple assignment
  {P[x ← e, x̲ ← e̲ ⊕ local ⊕ global]} x := e {P}

array assignment†
  {P[A[i] ← e, A̲[i] ← e̲ ⊕ i̲ ⊕ local ⊕ global]} A[i] := e {P}

alternation
  {V ∧ B, L', G} S1 {V', L', G'},
  {V ∧ ¬B, L', G} S2 {V', L', G'},
  V, L, G ⊢ L'[local ← local ⊕ B̲]
  ------------------------------------------
  {V, L, G} if B then S1 else S2 {V', L, G'}

iteration
  {V ∧ B, L', G} S {V, L', G},
  V, L, G ⊢ L'[local ← local ⊕ B̲],
  V, L, G ⊢ G'[global ← global ⊕ local ⊕ B̲]
  ------------------------------------------
  {V, L, G} while B do S {V ∧ ¬B, L, G'}

composition
  {P} S1 {P1}, {P1} S2 {P2}, ..., {PN-1} SN {Q}
  ------------------------------------------
  {P} begin S1; S2; ...; SN end {Q}

consequence
  {P'} S {Q'}, P ⊢ P', Q' ⊢ Q
  ------------------------------------------
  {P} S {Q}

Fig. 3. Combined proof rules; underlined symbols (x̲, e̲, i̲, B̲) denote classes. † It is assumed that the subscript expression i does not refer to A.
This shows that z depends on b but not on x.
Example 6.2. Consider the following program that zeros an array:

i := 0;
while i < n do
begin
  i := i + 1;
  A[i] := 0
end
Given that n ≤ low, a desirable final-value policy for this program is {∀i (1 ≤ i
≤ n) ⇒ A[i] ≤ low}, namely, A[1 : n] has been declassified. The above program
can be certified using the combined logic by showing that the assertion

{∀j (1 ≤ j ≤ i) ⇒ A[j] ≤ low,
 i ≤ low,
 i ≤ n, n ≤ low,
 local ≤ low, global ≤ low}

is a valid loop invariant. Note that the actual value assigned to A[i] in the loop
is unimportant as long as its classification is low.
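The class computation behind this certification can be sketched as follows (the name `zero_array` is ours; the empty set plays the role of low):

```python
def join(*classes):
    return frozenset().union(*classes)

# Because n (and hence the test i < n) is low and the assigned
# constant 0 is low, every element's class drops to low, matching
# the final-value policy A[1..n] ≤ low regardless of the elements'
# initial classes.
def zero_array(classes, n_class, low=frozenset()):
    local = join(low, n_class)                   # flow from the test i < n
    return [join(low, local) for _ in classes]   # A[i] := 0, element by element

high = frozenset({"high"})
result = zero_array([high, high, high], n_class=frozenset())
```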
An even more powerful combined logic is produced by combining the flow logic
with a total correctness logic that ensures termination of while loops. In such a
logic, global flows from nonterminating loops cannot occur. However, global flows
due to synchronization cannot be eliminated since they result from timing, not
from nontermination.
7. SUMMARY
This paper has analyzed the semantics of information flow and presented a
deductive logic for proving information flow properties of programs. The resulting
system unifies previous work on information flow and yields a number of additional benefits. First, a large number of both sequential and parallel programming
constructs can be axiomatized. Second, the effects of nonterminating programs
can be handled. Finally, the flow logic can be readily combined with correctness
logic.
We have found that deriving a flow proof of a program is much easier than
deriving a correctness proof. This results from the elegance of the lattice model
of secure information flow on which the flow logic is based [3]. Because flow
proofs are relatively easy to construct and the flow proof system is powerful
enough to handle a wide variety of constructs (including all those of Concurrent
Pascal), there is hope that information properties of large programs could be
formally certified. However, it remains to ensure that the flow semantics of a
language are in fact preserved during execution.
ACKNOWLEDGMENTS
This paper was begun while both authors were at Cornell University and has
profited greatly from numerous discussions with our Cornell colleagues David
Gries, Carl Hauser, and Fred Schneider. The referees also provided many comments that helped strengthen the presentation.
REFERENCES
1. BRINCH HANSEN, P. The programming language Concurrent Pascal. IEEE Trans. Softw. Eng. SE-1, 2 (June 1975), 199-207.
2. COHEN, E. Information transmission in computational systems. Proc. 6th Symp. Operating System Principles (Nov. 1977), pp. 133-139.
3. DENNING, D.E. A lattice model of secure information flow. Commun. ACM 19, 5 (May 1976), 236-243.
4. DENNING, D.E., AND DENNING, P.J. Certification of programs for secure information flow. Commun. ACM 20, 7 (July 1977), 504-513.
5. DIJKSTRA, E.W. Cooperating sequential processes. In Programming Languages (F. Genuys, Ed.), Academic Press, New York, 1968.
6. FENTON, J.S. Memoryless subsystems. Comput. J. 17, 2 (May 1974), 143-147.
7. GAT, I. Security aspects of computer systems. Ph.D. Thesis, Technion--Israel Inst. of Technology, June 1976.
8. HOARE, C.A.R. An axiomatic basis for computer programming. Commun. ACM 12, 10 (Oct. 1969), 576-583.
9. HOARE, C.A.R. Monitors: An operating system structuring concept. Commun. ACM 17, 10 (Oct. 1974), 549-557.
10. JONES, A.K., AND LIPTON, R.J. The enforcement of security policies for computation. Proc. 5th Symp. Operating System Principles (Nov. 1975), pp. 197-206.
11. LAMPSON, B.W. A note on the confinement problem. Commun. ACM 16, 10 (Oct. 1973), 613-615.
12. LIPNER, S.B. A comment on the confinement problem. Proc. 5th Symp. Operating System Principles (Nov. 1975), pp. 192-196.
13. LONDON, T.B. The semantics of information flow. Ph.D. Thesis, Cornell Univ., Jan. 1977.
14. McGRAW, J.R., AND ANDREWS, G.R. Access control in parallel programs. IEEE Trans. Softw. Eng. SE-5, 1 (Jan. 1979), 1-9.
15. MILLEN, J.K. Security kernel validation in practice. Commun. ACM 19, 5 (May 1976), 243-250.
16. MOORE, C.G., JR. Potential capabilities in Algol-like programs. Tech. Rep. TR74-211, Cornell Univ., Sept. 1974.
17. NEUMANN, P.G., ET AL. A provably secure operating system: The system, its applications, and proofs. Final Rep., SRI Proj. 4332, SRI International, Menlo Park, Calif., Feb. 1977.
18. OWICKI, S., AND GRIES, D. An axiomatic proof technique for parallel programs. Acta Inf. 6 (1976), 319-340.
19. POPEK, G.J., AND FARBER, D.A. A model for verification of data security in operating systems. Commun. ACM 21, 9 (Sept. 1978), 737-749.
20. REED, D.P., AND KANODIA, R.K. Synchronization with eventcounts and sequencers. Commun. ACM 22, 2 (Feb. 1979), 115-123.
21. REITMAN, R.P. Information flow in parallel programs: An axiomatic approach. Ph.D. Thesis, Cornell Univ., Aug. 1978.
22. REITMAN, R.P., AND ANDREWS, G.R. Certifying information flow properties of programs: An axiomatic approach. Proc. 6th Symp. Principles of Programming Languages (POPL) (Jan. 1979), pp. 283-290.
23. WEISSMAN, C. Security controls in the Adept-50 time-sharing system. AFIPS 1969 FJCC, Vol. 35, AFIPS Press, Arlington, Va., pp. 119-133.
Received February 1979; revised June 1979 and September 1979