On the Expressiveness and Complexity of
Randomization in Finite State Monitors
Rohit Chadha
Univ. of Illinois at Urbana-Champaign
and
A. Prasad Sistla
Univ. of Illinois at Chicago
and
Mahesh Viswanathan
Univ. of Illinois at Urbana-Champaign
In this paper we introduce the model of finite state probabilistic monitors (FPM), which are finite
state automata on infinite strings that have probabilistic transitions and an absorbing reject state.
FPMs are a natural automata model that can be seen as either randomized run-time monitoring
algorithms or as models of open, probabilistic reactive systems that can fail. We give a number of
results that characterize, topologically as well as with respect to their computational power, the
sets of languages recognized by FPMs. We also study the emptiness and universality problems for
such automata and give exact complexity bounds for these problems.
Categories and Subject Descriptors: D.2.4 [Software Engineering]: Program Verification; F.1.1 [Theory of
Computation]: Models of Computation; F.1.2 [Theory of Computation]: Modes of Computation
1. INTRODUCTION
In this paper we introduce the model of Finite state Probabilistic Monitors (FPMs) and
study their expressive power and the complexity of their decision problems. FPMs are finite
state automata on infinite strings that choose the next state on a transition based on a probability distribution, in addition to the input symbol read. They have a special reject state,
with the property that once the automaton enters the reject state, it remains there no matter
what the input symbol read is. On an infinite input word α, a FPM defines an infinite-state
Markov chain, and the word α is said to be accepted (rejected) with probability x if the
probability of reaching the reject state is 1 − x (x, respectively).
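This semantics can be illustrated concretely: the rejection probability of an infinite word is the limit of the (monotonically non-decreasing) probabilities of having entered the reject state after finite prefixes. The following sketch, with a hypothetical toy monitor over {a, b} whose names and probabilities are ours and not from the paper, computes such a prefix rejection probability by propagating the distribution over monitor states.

```python
# Sketch: probability that an FPM has reached its absorbing reject state
# after reading a finite prefix, by propagating the state distribution.
# `delta` maps (state, symbol) to a distribution {next_state: prob}.

REJECT = "qr"

def reject_probability(delta, initial, prefix):
    dist = {initial: 1.0}
    for sym in prefix:
        new = {}
        for state, p in dist.items():
            # the reject state is absorbing: its mass stays there forever
            succ = {REJECT: 1.0} if state == REJECT else delta[(state, sym)]
            for nxt, q in succ.items():
                new[nxt] = new.get(nxt, 0.0) + p * q
        dist = new
    return dist.get(REJECT, 0.0)

# Hypothetical monitor: on b it rejects with probability 1/2,
# on a it moves to an absorbing accept state qa.
delta = {
    ("q0", "b"): {"q0": 0.5, REJECT: 0.5},
    ("q0", "a"): {"qa": 1.0},
    ("qa", "a"): {"qa": 1.0},
    ("qa", "b"): {"qa": 1.0},
}
print(reject_probability(delta, "q0", "bbb"))  # 1 - (1/2)^3 = 0.875
```

Running the propagation on longer and longer prefixes approximates the rejection probability of the infinite word from below.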
FPMs are natural models of computation that arise in a variety of settings. They can be
seen as a special case of Probabilistic Büchi Automata (PBA) [Baier and Größer 2005]¹.
PBAs are like FPMs, except that instead of a reject state they have a set of accept states.
PBAs are a generalization of the probabilistic finite automata of [Rabin 1963; Salomaa 1973;
Paz 1971] to infinite strings, and the probability of acceptance of an infinite word α is the
probability measure of the runs on α that satisfy the Büchi acceptance condition, namely, those
that visit an accepting state infinitely often. A FPM is nothing but a PBA B where, instead
¹In some ways, the FPMs considered here are more general than the PBA model. Baier et al. consider PBAs to be
language acceptors that accept all strings whose measure of accepting runs is non-zero. We, on the other hand,
consider various probability thresholds for acceptance, and as will be seen in this paper, this generalization makes
a big difference to both the expressive power and the complexity of decision problems.
ACM Journal Name, Vol. V, No. N, Month 20YY, Pages 1β42.
of the Büchi acceptance condition, the accepting runs are defined by an ω-regular safety
language (or the complement of a safety language) over the states of B.
Another context where FPMs arise is as randomized monitoring algorithms, and this was
our original motivation for studying them. Run-time monitoring of system components is
an important technique that has widespread applications. It is used to detect erroneous behavior in a component that either has undergone insufficient testing or has been developed
by third party vendors. It is also used to discover emergent behavior, such as an intrusion, in a distributed network, or, more generally, to perform event correlation. In all these scenarios,
the monitor can be abstractly viewed as an algorithm that observes an unbounded stream
of events to and from the component being examined. Based on what the monitor has seen
up until some point, the monitor may decide that the component is behaving erroneously
(or is under attack) and raise an βalarmβ, or may decide that nothing worrisome has been
observed. Hence, the absence of an error (or intrusion) is indicated implicitly by the monitor by the absence of an alarm. On the flip side, a behavior is deemed incorrect by the
monitor based on what has been observed in the past, and this decision is independent of
what may be observed in the future. FPMs are finite state monitors that make probabilistic
decisions while they are observing the input stream. The unique, absorbing reject state
corresponds to the fact that once a behavior is deemed incorrect by a monitor, it does not
matter what events are seen in the future.
Finally, FPMs are also models of open, finite state probabilistic systems that can fail.
An open reactive system is a system that takes an infinite sequence of inputs from the environment and performs a computation by going through a sequence of states. Finite state
automata on infinite strings have been used to model the behavior of such systems that have
a finite number of possible states. Probabilistic open reactive systems are systems whose
behavior on an input sequence is probabilistic in nature. Such probabilistic behavior can be
either due to the randomization employed in the algorithm executed by the system or due
to other uncertainties, such as component failure, modeled probabilistically. Probabilistic
Büchi Automata (PBA) can be used to model the behavior of such systems having a finite
state space. FPMs, which are PBAs having a single absorbing reject state, can be used to
model open probabilistic systems that can fail; the failure state is modeled by the reject
state of the FPM. Decision problems on the language accepted by the FPM can help answer
a number of verification problems on such systems, including determining the probability
of failure on some or all input sequences, and checking ω-regular safety properties (or their
complement).
Motivated by these applications we study a number of questions about FPMs. The first
question we ask is whether randomization is useful in the context of run-time monitoring.
We consider both algorithms with one-sided and two-sided errors. We say that a property is
monitorable with strong acceptance (M SA) if there is a monitor for the property that never
deems a correct behavior to be erroneous, but may occasionally pass an incorrect behavior.
On the other hand, a property is monitorable with weak acceptance (M WA) if there is a
monitor that may raise false alarms on a correct behavior, but would never fail to raise an
alarm on an incorrect one. Similarly we define classes of properties that have monitors
with two-sided errors. We say a property is monitorable with strict cut-points (M SC) if, for
some x, there is a monitor such that on behaviors α satisfying the property, the probability
that the monitor rejects α is strictly less than x. Finally, a property is monitorable with non-strict cut-points (M NC) if, for some x, there is a monitor such that on behaviors α satisfying
the property, the probability that the monitor rejects α is at most x.
We compare the classes of properties monitorable with FPMs with those that can be
monitored deterministically. It is well understood [Schneider 2000] that deterministic monitors
can only detect violations of safety properties [Alpern and Schneider 1985; Lamport 1985; Sistla 1985]. In addition, if the deterministic monitor is finite state, then the
monitorable properties are regular safety languages. We contrast this with our observations
about properties monitorable by FPMs, which are summarized in Figure 5 on page 18. We
show that while the class M SA is exactly the class of ω-regular safety properties, the class
M NC not only contains all ω-regular safety properties, but also contains some non-regular
safety properties. However, even though the class M NC is uncountable, it is properly
contained in the class of all safety properties. The classes M WA and M SC allow us to
go beyond deterministic monitoring along a different axis. We show that M WA strictly
contains all the ω-regular almost safety properties². Even though M WA contains some
non-regular properties, they are very close to being ω-regular. More precisely, we show
that the safety closure of any property in M WA (i.e., the smallest safety property containing
it) and the safety closure of its complement are both ω-regular. We note here that if we
were to define the class M WA for probabilistic automata over finite strings then we would
only obtain regular languages. We show that M WA is strictly contained in M SC, which in
turn is strictly contained in the class of all almost safety properties. Finally, we show that
M SC and M NC are incomparable.
We also consider a sub-class of FPMs. Let us call a FPM M x-robust if for some
ε > 0, the probability that M rejects any string is bounded away from x by ε, i.e., it either
rejects a string with probability greater than x + ε or with probability less than x − ε. A
robustly monitorable property then is just a property that has a robust monitor. Robustly
monitorable properties are a natural class of properties. They are the constant space analogs
of the complexity classes RP and co-RP when x is 0 or 1. They are also a generalization
of the notion of isolated cut-points [Rabin 1963; Salomaa 1973; Paz 1971] in finite string
probabilistic automata to infinite strings. We show that robustly monitorable properties are
exactly the same as the class of ω-regular safety properties.
We conclude the summary of our expressiveness results with one more observation about
the advantage of randomization in the context of monitoring. Even though M SA and robust
monitors recognize the same class of languages as deterministic monitors, they can be
exponentially more succinct. More specifically, there exists a family {L_n}_{n≥0} such that
each L_n is an ω-regular safety language. Each L_n can be recognized by a robust FPM N_n
with n + 2 states that accepts every string in L_n with probability 1, and rejects every string
not in L_n with probability at least 1/2^n. In contrast, the smallest deterministic monitor for
L_n has 2^n states. Thus, FPMs are space efficient algorithms for monitoring regular safety
properties. Although non-deterministic automata also exhibit this succinctness property,
they are not directly executable.
The next series of questions that we investigate in this paper, is the complexity of the
emptiness and universality problems for the various classes of FPM languages introduced.
Apart from being natural decision problems for automata, emptiness and universality are
important in this context for a couple of reasons. For FPMs that model monitoring algorithms, emptiness and universality checking serve as sanity checks: if the language of a
monitor is empty then it means that it is too conservative, and if the language is universal
²An almost safety property is a countable union of safety properties.
then it means that it is too liberal. On the other hand, for FPMs modeling open, probabilistic reactive systems, emptiness and universality are the canonical verification problems for
such systems.
We give exact complexity bounds for these decision problems for all the classes of FPMs
considered; our results are summarized in Figure 6 on page 26. We show that the emptiness problems for monitors with one-sided error, i.e., for the classes M SA and M WA, are
PSPACE-complete. These results are interesting in light of the fact that checking non-emptiness of a non-deterministic finite state automaton on infinite strings, with respect
to any of the commonly used acceptance conditions, is in polynomial time. We also show
that the emptiness problem for monitors with two-sided errors is undecidable. More specifically, we show that the emptiness problem for the class M NC is R.E.-complete, while it is
co-R.E.-complete for the class M SC. Next, we show that the universality problem for the
class M SA is NL-complete while for M WA it is PSPACE-complete. This problem is co-R.E.-complete for the class M NC, while it is Π¹₁-complete for M SC. Many of these results,
for both the lower and upper bounds, are quite surprising, and their proofs are non-trivial. We
conclude the introduction by highlighting why the complexity of the emptiness
and universality problems is so different for M SC and M NC: words whose rejection
probability equals x (the threshold of the FPM) can only be detected in the limit, whereas
words rejected with probability greater than x are witnessed by a finite prefix.
Paper Outline. The rest of the paper is organized as follows. We first discuss related work.
Then, in Section 2, we motivate FPMs with some examples. In Section 3 we give basic
definitions and properties of safety and almost safety languages. We formally define FPMs
in Section 4, along with the language classes M SA, M WA, M NC and M SC. Our expressiveness
results are presented in Section 5 and the complexity and decidability results are presented in
Section 6. We conclude in Section 7.
1.1 Related Work
The model of FPMs introduced here is a special case of the Probabilistic Büchi Automata
(PBA) introduced in [Baier and Größer 2005], which accept infinite strings whose accepting runs have non-zero measure: the language 𝓛^{<1}(M) of a monitor M is the same
as the language 𝓛(B) of a PBA B which has the same transition structure as M and has
as accept states the non-reject states of M. [Baier and Größer 2005] show that there are
PBAs B such that 𝓛(B) is non-regular. The emptiness and universality problems for such
PBAs have been shown to be undecidable in [Baier et al. 2008]. Thus, our results on M WA
demonstrate that we have identified a restricted form of PBAs that are effective. Under
the almost-sure semantics of PBAs, introduced in [Baier et al. 2008], a PBA B is taken to
accept all strings whose accepting runs have measure 1. Thus, the universality problem for
𝓛^{<1}(M) can also be seen as the emptiness problem for the almost-sure semantics of special
kinds of PBAs. The emptiness problem of PBAs with almost-sure semantics was shown
to be in EXPTIME in [Baier et al. 2008]; the fact that the universality problem for M WA
is PSPACE-complete once again demonstrates that FPMs are more effective than general
PBAs.
We draw upon, and generalize, many proof techniques introduced in the context of finite strings and probabilistic automata [Rabin 1963; Salomaa 1973; Paz 1971] to infinite
strings. In particular, the iteration lemma and its proof presented here are closely related
to a similar result in the context of finite strings (though it is generalized here to infinite
strings). Also, the proof that robust monitors define ω-regular languages is inspired by a
similar result for finite strings by [Rabin 1963], which says that probabilistic finite automata
with isolated cut-points define regular (finite word) languages. However, our proof for
infinite word languages crucially relies on the fact that robust monitors define safety languages, and no such topological property is relied upon in the finite case. Another proof
technique that we use extensively is the co-R.E.-hardness proof for the emptiness of PFAs
on finite words due to [Condon and Lipton 1989]. The proof is by a reduction from deterministic two-counter machines and uses a weak equality test by [Freivalds 1981], designed
to check (using coin tosses and finite memory) whether two counters have the same value.
The co-R.E.-hardness of emptiness for the class M SC and of universality for the class
M NC, proved herein, are established as a consequence of the co-R.E.-hardness of emptiness of
probabilistic automata with cut-points. The R.E.-hardness of emptiness for the class
M NC and the Π¹₁-hardness for the class M SC shown here are proved by a non-trivial modification of the weak-equality test of [Freivalds 1981].
Our work was initially inspired by a desire to understand the power of randomization
in the context of run time monitoring. Observing the dynamic behavior of a system has
long been understood to be an important technique to detect faults; see [Plattner and Nievergelt 1981; Mansouri-Samani and Sloman 1992; Schroeder 1995] for early surveys about
the area. A number of tools have been built in the recent past to analyze the behavior of
software systems; examples of such systems include [Diaz et al. 1994; rov 1997; Ranum
et al. 1997; Paxson 1997; Jeffery et al. 1998; Kim et al. 1999; Kortenkamp et al. 2001;
Gates and Roach 2001; Havelund and Rosu 2001; Chen and Rosu 2007]. The importance
of this area can be gauged by the success of a workshop series devoted exclusively to this
topic [rv- 2008]. The theoretical foundations of what can and cannot be monitored were laid
in [Schneider 2000], where it was observed that only safety properties [Alpern and
Schneider 1985; Lamport 1985; Sistla 1985] can be deterministically monitored; this characterization was refined to take into account computability considerations in [Viswanathan
2000]. Though safety properties constitute an upper bound on the class of deterministically monitorable properties, in practice, properties other than safety are also monitored by
either under- or over-approximating them to safety properties [Amorium and Rosu 2005;
Margaria et al. 2005], or by using a multi-valued interpretation of whether a formula holds
on a finite prefix [Pnueli and Zaks 2006; Bauer et al. 2006].
There has been quite a bit of work on enforcing different security properties through
run time monitoring by employing different types of deterministic automata. For example,
truncation automata and suppression automata have been proposed in [Schneider 2000]
and [Ligatti et al. 2005], respectively. Both these automata effectively enforce properties
that are only safety properties. In [Ligatti et al. 2005; 2009; Ligatti 2006], a new kind of
automaton, called the edit automaton, is proposed. These automata can buffer outputs generated
by the system and release them only when the required property is satisfied by the buffered
sequence of outputs. The authors show that edit automata can be used to enforce a class of
properties called reasonable infinite renewable properties, which include many non-safety
properties. In general, edit automata can buffer an arbitrary number of outputs and hence
can use an unbounded amount of memory. They can also delay as well as transform the
outputs generated by the system that is being monitored. Compared to these, FPMs only
have a finite number of states, and do not delay or modify the system outputs, i.e., they are
passive in nature. On the other hand, they employ randomization and can monitor non-safety properties with some error probability. It will be interesting to investigate whether
infinite renewable properties can be monitored using FPMs. This will be part of future
work.
Further, in [Hamlen et al. 2006], the authors propose a taxonomy of enforceable security policies. Using this they broadly classify existing enforcement techniques into three
different classes, based on static analysis, execution monitoring and program rewriting.
Although FPMs are based on execution monitoring, they employ randomization, which is
quite novel and is not covered by the above classification. Randomization can make the
monitor unpredictable, making it difficult for an adversary to exploit it. Lastly, the use of
statistical techniques is ubiquitous in intrusion detection systems [Group 2001]. However,
the study of designing randomized monitors for formally specified properties was only recently initiated [Sistla and Srinivas 2008]. In this paper we continue this line of work. We
would, however, like to contrast this line of work with that of [Sammapun et al. 2007],
which considers the design of randomized monitors for probabilistic systems. We would also
like to point out that [Gondi et al. 2009] presents and compares different monitoring techniques, i.e., deterministic, probabilistic and hybrid, for monitoring properties, specified by
automata on infinite strings, of systems modeled as Hidden Markov Chains. It proposes
and uses two different accuracy measures, called acceptance and rejection accuracies. It
performs both theoretical as well as experimental analysis of these techniques on realistic
systems and shows that randomization can be quite effective in achieving high levels of
accuracies, especially when combined with deterministic techniques.
Finally, we would like to point out that a preliminary version of these results was presented in [Chadha et al. 2008]. The main difference between these two papers is as follows.
First, since [Chadha et al. 2008] was an extended abstract it did not contain many proofs,
which have been presented fully for the first time here. Second, the proof of Theorem 5.15
had an incorrect expression in [Chadha et al. 2008], which has been corrected here. Finally, while we stated the results for the emptiness problem of M SA and M WA in [Chadha
et al. 2008], we had neither considered the emptiness problems for M SC and M NC, nor the
universality problem for any of the classes of FPMs.
2. MOTIVATION
In this section, we motivate FPMs with some examples in monitoring and verification. The
first two examples are on monitoring and the last two are on verification.
Monitoring non-safety properties. Consider a system that outputs an infinite sequence
of symbols from {a, b}. Suppose we want to monitor the property which states that the
symbol a should eventually appear in the input sequence. Since this property is not a
safety property, it cannot be monitored using deterministic monitors. On the other hand,
the probabilistic monitor M, shown in Figure 1, monitors this property. From the initial
state q0, after each input symbol, M acts as follows. If the input symbol is b then M goes
to the reject state qr with probability 1/2 and stays in the initial state with probability 1/2. If
the input symbol is a then it goes to an accept state qa with probability one. Both qr and
qa are absorbing states, i.e., on further inputs M remains in those states with probability
one. It is easy to see that an input sequence that violates the property, i.e., one that does not
have a appearing in it, will be rejected with probability one. On the other hand, a good
sequence, i.e., one that satisfies the property, may also be rejected, but with probability
less than one. The probability of rejection of such a sequence gracefully degrades, i.e.,
increases, with the number of input symbols before the first a in the input. It is to be
Fig. 1. FPM for ◇a

Fig. 2. Robust FPM with exponentially fewer states
noted that one can give deterministic algorithms, e.g., employing counters and/or using
over-approximations, for monitoring this property; such an algorithm rejects every sequence
that does not contain a, but also rejects some good sequences. Combining such
methods with probabilities, one can ensure that more good sequences are accepted (see
[Sistla and Srinivas 2008; Gondi et al. 2009]).
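The graceful degradation can be made concrete with a short sketch (ours, not from the paper): on a good word of the form b^k a w, each of the k leading b's independently sends M to the reject state with probability 1/2, and after the a the monitor is absorbed in the accept state, so the rejection probability depends only on k.

```python
# Sketch: exact rejection probability of the Figure 1 monitor on a good
# word b^k a w (for any suffix w). Each leading b is survived with
# probability 1/2; once the a is read, no further rejection is possible.

def rejection_probability(k: int) -> float:
    """Probability that the monitor rejects b^k a w."""
    p_alive = 1.0            # probability of still being in the initial state
    for _ in range(k):
        p_alive *= 0.5       # survive one b with probability 1/2
    return 1.0 - p_alive     # otherwise the reject state was entered

for k in (0, 1, 3):
    print(k, rejection_probability(k))   # 0.0, 0.5, 0.875
```

As k grows, the rejection probability 1 − (1/2)^k approaches one, matching the intuition that longer a-free prefixes look more and more like a violation.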
Succinct monitors. The previous example illustrates a property that can be monitored in
a probabilistic sense, but that cannot be monitored using a deterministic monitor. Probabilistic monitors can also be used to obtain succinct monitors even for safety properties.
Let n > 0 be a positive integer. Consider the following ω-regular safety property L_n,
over the input alphabet {a, b}, that requires that the nth symbol after every a should be
the symbol b. It is not difficult to show that any deterministic monitor for this property
has to have at least 2^n states. On the other hand, the probabilistic monitor N with n + 2
states, shown in Figure 2, monitors this property as follows. In the initial state q0, whenever
the monitor N sees a, with probability 1/2 it remains in q0, and with probability 1/2 it
goes to another state, say q1. From q1 it counts n input symbols and checks if the nth
symbol is b; if that symbol is b then it goes back to the initial state q0, else it goes to the
reject state qr. It is not difficult to see that if the input sequence satisfies the property L_n
then N rejects it with zero probability. On the other hand, if it does not satisfy L_n then it is
rejected with probability at least 1/2^n. This is because the error is caught if, for all the n
symbols before the faulty a, the machine did not take the transition from q0 to q1, while at the
faulty a it does take this transition. Notice that N is different from M in the sense that it
rejects all bad sequences with probability at least 1/2^n and rejects all good sequences with
probability 0. It is also to be noted that if the input sequence has an infinite number of errors,
i.e., an infinite number of occurrences of a such that the nth symbol after it is not a b, then
the sequence is rejected by N with probability 1 (in general, the probability of rejecting an
input sequence increases with the number of errors). This monitor has only n + 2 states
and is succinct compared to any deterministic monitor. Thus, we see that probabilistic
monitors can give space efficient monitors. Note that non-deterministic automata can also
be succinct language recognizers, but they are not executable.
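The monitor N can be simulated exactly by propagating its state distribution. The sketch below (ours; the state names follow the description above) computes the rejection probability of a finite word and illustrates the 1/2^n lower bound on bad words.

```python
# Sketch: exact rejection probability of the Figure 2 monitor N for L_n
# (the nth symbol after every a must be b). States: "q0", the counter
# states 1..n, and the absorbing reject state "qr".

def reject_probability(n: int, word: str) -> float:
    dist = {"q0": 1.0}
    for sym in word:
        new = {}
        def add(state, p):
            new[state] = new.get(state, 0.0) + p
        for state, p in dist.items():
            if state == "qr":
                add("qr", p)                    # absorbing reject state
            elif state == "q0":
                if sym == "a":                  # coin flip: stay or start counting
                    add("q0", 0.5 * p)
                    add(1, 0.5 * p)
                else:
                    add("q0", p)
            elif state < n:                     # still counting symbols after the a
                add(state + 1, p)
            else:                               # this is the nth symbol after the a
                add("q0" if sym == "b" else "qr", p)
        dist = new
    return dist.get("qr", 0.0)

# For n = 2, "abab" is bad: the 2nd symbol after the first a is a, not b.
print(reject_probability(2, "abab"))   # 0.5, which is >= 1/2^2
```

Good words are never rejected, while every finite witness of a violation is rejected with probability at least 1/2^n, as claimed.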
The above two examples show that randomization can be used to monitor for liveness
properties and can also be used for obtaining succinct monitors for safety properties. One
can also give deterministic monitors for both of the above properties that err sometimes,
with the monitor for the latter property being succinct. These examples show that randomization is an alternative/complementary technique for monitoring. It has the advantage of
being unpredictable, making it difficult to be exploited by an adversary when monitoring
for security properties. The reader is referred to [Sistla and Srinivas 2008; Gondi et al.
2009] for various practical examples and for comparison between deterministic, randomized and hybrid techniques for monitoring various properties, including liveness properties,
specified by automata on infinite strings.
Checking the failure freedom property. Consider a system consisting of n machines (for some n > 0) and a repair kit (this system may be a robot exploring the Martian
surface). At any time, only a subset of the machines are operational. The state of the
system is denoted by the set of machines that are operational. There is also a special state
denoting the failure of the repair kit as well as all the machines. The repair kit repairs
failed machines over time. At any time the system takes an input from outside and performs work denoted by the input. During the processing of the input, some machines may
fail and some previously failed machines may get repaired, or the repair kit and all the
machines may fail. Both the failure of machines and the repair kit, as well as the repairing of machines, is probabilistic in nature. Thus after each input, the state of the system may change
according to a discrete probability distribution which may depend on the input symbol as
well as the current state. One may want to determine the sequences of inputs on which
the system fails with more than a critical probability, or determine if the system never fails
with more than a critical probability.
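Such a system is, per input symbol, a stochastic transition matrix over its states. The following toy sketch (ours; the states "ok"/"fail", inputs "h"/"l" and the probabilities are hypothetical) computes the probability that the system has failed after a given input sequence.

```python
# Sketch: a toy version of the machines-and-repair-kit system. For each
# input symbol there is a stochastic transition over system states;
# "fail" (repair kit and all machines down) is absorbing.

FAIL = "fail"

# Hypothetical two-state system: heavy work "h" risks total failure,
# light work "l" does not.
transitions = {
    "h": {"ok": {"ok": 0.9, FAIL: 0.1}, FAIL: {FAIL: 1.0}},
    "l": {"ok": {"ok": 1.0},            FAIL: {FAIL: 1.0}},
}

def failure_probability(inputs: str) -> float:
    """Probability the system has failed after processing the inputs."""
    dist = {"ok": 1.0}
    for sym in inputs:
        new = {}
        for state, p in dist.items():
            for nxt, q in transitions[sym][state].items():
                new[nxt] = new.get(nxt, 0.0) + p * q
        dist = new
    return dist.get(FAIL, 0.0)

print(failure_probability("hh"))   # 1 - 0.9^2, about 0.19
```

Asking whether some (or every) input sequence drives this probability above a critical threshold is exactly the emptiness (universality) question for the corresponding FPM.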
Verifying ω-regular safety properties of probabilistic open systems. FPMs can also
be used for verifying safety properties of open probabilistic systems. As indicated in the
introduction, such a system can be modeled as a PBA; call it the system automaton. Now,
suppose we want to verify that on every infinite input sequence, the sequence of states
the system goes through satisfies, with probability greater than some x, a safety property
specified by a deterministic finite state automaton A. The input symbols to A are the states
of the system automaton. The automaton A has an error state which is an absorbing state.
Any sequence of inputs leading to the error state is a bad sequence. An infinite sequence
of inputs to A is good if none of its prefixes is bad. To perform the verification, we take a
synchronous product of the system automaton and the property automaton A. Each state
of the product is a pair of the form (s, q) where s is the system state and q is the state
of A. There is a transition from (s, q) to (s′, q′) with probability p on input a iff (i) there
is a transition from s on input a to s′ in the system automaton with probability p and (ii)
there is a transition on input s from q to q′ in A. We convert this product automaton to a
FPM M by merging all states of the form (s, e), where e is the error state of A, into a
single absorbing state, which is the rejecting state. Now, on every infinite input sequence
the system satisfies the safety property specified by A with probability greater than x iff
every input sequence is rejected by M with probability less than 1 − x.
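The product construction can be sketched as follows. This is our illustration, under assumed data structures: the system automaton's transitions map (state, input) to a distribution over states, the deterministic property automaton A is a total map from (A-state, system-state) to the next A-state, and "err" is A's absorbing error state; all the state and input names in the example are hypothetical.

```python
# Sketch of the synchronous product described above, yielding FPM
# transitions (pair_state, input) -> {pair_state: probability}. All
# pair states (s, "err") are merged into the single reject state.

REJECT = "reject"

def product_fpm(sys_delta, prop_delta, err="err"):
    prop_states = {q for (q, _) in prop_delta} | {err}
    fpm = {}
    for (s, inp), dist in sys_delta.items():
        for q in prop_states:
            if q == err:
                # the merged reject state is absorbing on every input
                fpm[(REJECT, inp)] = {REJECT: 1.0}
                continue
            out = fpm.setdefault(((s, q), inp), {})
            for s2, p in dist.items():
                # A reads the current system state s as its input symbol
                q2 = prop_delta[(q, s)]
                dst = REJECT if q2 == err else (s2, q2)
                out[dst] = out.get(dst, 0.0) + p
    return fpm

# Toy instance: the system drifts from "up" to "down"; A tolerates "up"
# only, so observing "down" leads to the error state.
sys_delta = {("up", "x"): {"up": 0.8, "down": 0.2},
             ("down", "x"): {"down": 1.0}}
prop_delta = {("ok", "up"): "ok", ("ok", "down"): "err"}
fpm = product_fpm(sys_delta, prop_delta)
print(fpm[(("down", "ok"), "x")])   # the whole mass goes to the reject state
```

The resulting transition table is exactly the FPM M on which the threshold questions of the previous paragraph can be posed.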
3. PRELIMINARIES
Sequences. Let S be a finite set. We let |S| denote the cardinality of S. Let σ = s1, s2, . . .
be a possibly infinite sequence over S. The length of σ, denoted |σ|, is defined to be
the number of elements in σ if σ is finite, and ω otherwise. S∗ denotes the set of finite
sequences, S+ the set of finite sequences of length ≥ 1, and Sω denotes the set of infinite
sequences. If η is a finite sequence and σ is either a finite or an infinite sequence, then ησ
denotes the concatenation of the two sequences in that order. If L ⊆ S∗ and L′ ⊆ S∗ ∪ Sω,
the set LL′ = {ησ | η ∈ L and σ ∈ L′}.
For integers i and j such that 1 ≤ i ≤ j < |σ|, σ[i : j] denotes the (finite) sequence
si, . . . , sj and σ(i) denotes the element si. A finite prefix of σ is any σ[1 : i] for
i < |σ|. We denote the set of σ's finite prefixes by Pref(σ).
Languages. Given a finite alphabet Σ, a language L of finite words over Σ is a set of finite
sequences over Σ, and a language 𝓛 of infinite words over Σ is a set of infinite sequences
over Σ.
Metric topology on Σω. Given a finite alphabet Σ, one can define a metric d : Σω × Σω →
ℝ+ on Σω as follows. For α1, α2 ∈ Σω with α1 ≠ α2, d(α1, α2) = 1/2^i where i is the unique
integer such that α1(i) ≠ α2(i) and ∀j < i, α1(j) = α2(j). Also, d(α, α) = 0 for all
α ∈ Σω. Given α ∈ Σω and r ∈ ℝ, the set B(α, r) = {β | d(α, β) < r} is said to be an
open ball with center α and radius r. A language 𝓛 ⊆ Σω is said to be open if for every
α ∈ 𝓛 there is an rα such that B(α, rα) ⊆ 𝓛. It can be shown that a language 𝓛 is open iff
𝓛 = LΣω for some L ⊆ Σ∗. A language 𝓛 is closed if its complement Σω − 𝓛 is open.
Given a language 𝓛 ⊆ Σω, the set cl(𝓛) = {α | ∀r, B(α, r) ∩ 𝓛 ≠ ∅} is the smallest closed
set containing 𝓛.
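A small sketch (ours) of this metric, evaluated on finite prefixes long enough to contain the first difference: two words at distance 1/2^i agree on the first i − 1 positions and differ at position i, so the open ball B(α, 1/2^i) consists exactly of the words agreeing with α on the first i positions.

```python
# Sketch: the metric d on infinite words, computed from equal-length
# prefixes that contain the first point of difference (if any).
# Positions are numbered from 1, as in the definition above.

def d(alpha: str, beta: str) -> float:
    for i, (x, y) in enumerate(zip(alpha, beta), start=1):
        if x != y:
            return 1.0 / 2**i     # first difference at position i
    return 0.0                    # identical on the inspected prefix

print(d("abab", "abbb"))   # first difference at position 3 -> 1/8
```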
Safety Languages. Given an alphabet Σ and a language 𝓛 ⊆ Σω, we denote the set of
prefixes of 𝓛 by Pref(𝓛), i.e., Pref(𝓛) = ∪α∈𝓛 Pref(α). Following [Lamport 1985;
Alpern and Schneider 1985], a language 𝓛 is a safety property (also called a safety language)
if for every α ∈ Σω: Pref(α) ⊆ Pref(𝓛) ⇒ α ∈ 𝓛. In other words, 𝓛 is a safety
property if it is limit closed: for every infinite string α, if every prefix of α is a prefix of
some string in 𝓛, then α itself is in 𝓛. Safety languages coincide exactly with the closed
languages in the metric topology d defined above [Perrin and Pin 2004]. We will denote
the set of safety languages by Safety.
Almost Safety Languages. It is well known that the class of safety languages is closed
under finite union and countable intersection [Sistla 1985], but not under countable unions.
We say that a language 𝓛 is an almost safety property/language if it is a countable union
of safety languages, i.e., 𝓛 = ∪0≤i<∞ 𝓛i where each 𝓛i, for 0 ≤ i < ∞, is a safety
language. For 𝓛 as given above and j ≥ 0, let 𝓜j = ∪0≤i≤j 𝓛i. It is easy to see
that, for each j ≥ 0, 𝓜j is a safety language and 𝓜j ⊆ 𝓜j+1. Thus the languages
𝓜0, ..., 𝓜j, ... form an increasing chain of safety languages with 𝓛 as its limit. Thus,
we see that, although an almost safety language need not be a safety language, it nevertheless
can be approximated more and more accurately by safety languages. Please note that the
complement of an almost safety property can be written as a countable intersection of
open sets of the metric topology d defined above. We will denote the set of almost safety
languages by AlmostSafety.
Automata and π-regular Languages. A BuΜchi automaton π on infinite strings over a
finite alphabet Ξ£ is a 4-tuple (π, Ξ, π0 , πΉ ) where π is a finite set of states, Ξ β π×Ξ£×π
is a transition relation, π0 β π is an initial state, and πΉ β π is a set of accepting/final
automaton states. If for every π β π and π β Ξ£, there is exactly one π β² such that (π, π, π β² ) β
Ξ then π is called a deterministic automaton. Let πΌ = π1 , . . . be an infinite sequence over
ACM Journal Name, Vol. V, No. N, Month 20YY.
Ξ£. A run π of π on πΌ is an infinite sequence π0 , π1 , . . . over π such that π0 = π0 and for
every π > 0, (ππβ1 , ππ , ππ ) β Ξ. The run π is accepting if some state in πΉ appears infinitely
often in π. The automaton π accepts the string πΌ if it has an accepting run over πΌ. The
language accepted (recognized) by π, denoted by πΏ(π), is the set of strings that π accepts.
A language β is called π-regular if it is accepted by some finite state BuΜchi automaton.
We will denote the set of π-regular languages by Regular. Regular is closed under finite
boolean operations.
A language β β Regular β© AlmostSafety iff there is a deterministic BuΜchi automaton
which accepts its complement Ξ£π β β [Perrin and Pin 2004; Thomas 1990]. It is well-known that a language β β Regular β© Safety iff there is a deterministic BuΜchi automaton
π = (π, Ξ, π0 , {ππ }) such that βπ β Ξ£, (ππ , π, ππ ) β Ξ and π recognizes the complement
Ξ£π β β. A language β β Regular β© Safety iff there is a BuΜchi automaton π (not necessarily
deterministic) such that each state of π is a final state [Perrin and Pin 2004]. This implies
that a language β β Safety is π-regular iff the set of finite prefixes of β, Pref (β) β Ξ£β ,
is a regular language (here Pref (β) is a language of finite words).
Probability Spaces. Let π be a set and πΈ be a set of subsets of π . We say that πΈ is a π-algebra on π if πΈ contains the empty set, is closed under complementation and also under
finite as well as countable unions. Let πΉ be a set of subsets of π . A π-algebra generated
by πΉ is the smallest π-algebra that contains πΉ . A probability space is a triple (π, πΈ, π)
where πΈ is a π-algebra on π and π is a probability function [Papoulis and Pillai 2002] on
πΈ.
4. FINITE STATE PROBABILISTIC MONITORS
We will now define finite state probabilistic monitors, which can be viewed as probabilistic automata over infinite strings that have a special reject state. The transitions from a state on a given input symbol are described by a probability distribution over the next states. The transition relation ensures that the probability
of transitioning from the reject state to a non-reject state is 0. A FPM can thus be seen
as a generalization of deterministic finite state monitors where the deterministic transition
is replaced by a probabilistic transition. Alternately, it can be viewed as the restriction
of probabilistic monitors [Sistla and Srinivas 2008] to finite memory. From an automata-theoretic viewpoint, they are special cases of probabilistic BuΜchi automata described in
[Baier and GroΜπ½er 2005] which generalized the probabilistic finite automata [Rabin 1963;
Salomaa 1973; Paz 1971] on finite words to infinite words. The main difference here is that
instead of having a set of accepting states, we have one (absorbing) reject state. Formally,
Definition: A finite state probabilistic monitor (FPM) over a finite alphabet Ξ£ is a tuple
β³ = (π, ππ , ππ , πΏ) where π is a finite set of states; ππ β π is the initial state; ππ β π is
the reject state; and πΏ : π × Ξ£ × π β [0, 1] is the probabilistic transition relation such that for all π β π and π β Ξ£, Ξ£πβ²βπ πΏ(π, π, πβ²) = 1 and πΏ(ππ , π, ππ ) = 1. In addition, if
πΏ(π, π, π β² ) is a rational number for all π, π β² β π, π β Ξ£, then we say that β³ is a rational
finite state probabilistic monitor (RatFPM).
It will be useful to view the transition function πΏ of a FPM on an input π as a matrix πΏπ
with the rows labeled by βcurrentβ state and columns labeled by βnextβ state and the entry
πΏπ (π, π β² ) denoting the probability of transitioning from π to π β² . Formally,
Notation: Given a FPM, β³ = (π, ππ , ππ , πΏ) over Ξ£, and π β Ξ£, πΏπ is a square matrix
of order β£πβ£ such that πΏπ (π, πβ²) = πΏ(π, π, πβ²). Given a word π’ = π1 π2 . . . ππ β Ξ£+ , πΏπ’ is the matrix product πΏπ1 πΏπ2 . . . πΏππ . Please note that πΏπ is not defined. However, we shall sometimes abuse notation and say that πΏπ (π1 , π2 ) is 1 if π1 = π2 and 0 otherwise. For π1 β π, we write πΏπ’ (π, π1 ) = Ξ£π1 βπ1 πΏπ’ (π, π1 ).
Intuitively, the matrix entry πΏπ’ (π, πβ²) denotes the probability of being in πβ² after having read the input word π’ and having started in π. Please note that Ξ£πβ²βπ πΏπ’ (π, πβ²) = 1 for all π’ β Ξ£+ and π β π.
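As a concrete illustration of this matrix view (a sketch only; the two-state monitor and the helper functions below are hypothetical examples of ours, not taken from the paper), πΏπ’ can be computed by multiplying the per-symbol row-stochastic matrices:

```python
# Sketch: per-symbol stochastic matrices and the product delta_u.
# Hypothetical 2-state FPM: state 0 is initial, state 1 is the
# absorbing reject state.

def mat_mul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def delta_word(delta, word):
    """delta maps each symbol to its matrix; word must be non-empty."""
    m = delta[word[0]]
    for sym in word[1:]:
        m = mat_mul(m, delta[sym])
    return m

delta = {
    'a': [[0.5, 0.5],   # from state 0: stay or move to reject, each 1/2
          [0.0, 1.0]],  # the reject state is absorbing
    'b': [[1.0, 0.0],
          [0.0, 1.0]],
}

m = delta_word(delta, 'aa')
assert all(abs(sum(row) - 1.0) < 1e-9 for row in m)  # rows stay stochastic
print(m[0][1])  # probability of being in the reject state after 'aa': 0.75
```

Because each πΏπ is row-stochastic, any product of them is row-stochastic as well, which is the assertion checked above.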
The behavior of a FPM β³ on an input word πΌ = π1 π2 . . . can be described as follows. The FPM starts in the initial state ππ  and if reading the input symbols π1 π2 π3 . . . ππ results in state π, then it moves to state πβ² with probability πΏππ+1 (π, πβ²) on symbol ππ+1 . An infinite sequence of states, π β ππ , is a run of the FPM β³. We say that a run π is rejecting if the reject state occurs infinitely often in π. A run π is said to be accepting if it is not a rejecting run. In order to determine the probability of rejecting the word πΌ, the FPM β³ can be thought of as an infinite state Markov chain, which gives rise to the standard probability measure on Markov chains [Vardi 1985; Kemeny and Snell 1976]:
Definition: Given a FPM, β³ = (π, ππ , ππ , πΏ) on the alphabet Ξ£ and a word πΌ β Ξ£π ,
the probability space generated by β³ and πΌ is the probability space (ππ , β±β³,πΌ , πβ³,πΌ )
where
ββ±β³,πΌ is the smallest π-algebra on ππ generated by the collection {Cπ β£ π β π+ } where
Cπ = {π β ππ β£ π is a prefix of π}.
βπβ³,πΌ is the unique probability measure on (ππ , β±β³,πΌ ) such that πβ³,πΌ (Cπ0 ...ππ ) is
β0 if π0 β= ππ ,
β1 if π = 0 and π0 = ππ , and
βπΏ(π0 , πΌ(1), π1 ) . . . πΏ(ππβ1 , πΌ(π), ππ ) otherwise.
The set of rejecting runs and the set of accepting runs for a given word πΌ can be shown to
be measurable which gives rise to the following definition.
Definition: Let πβ³,πΌ be the probability measure defined by the FPM β³ = (π, ππ , ππ , πΏ) on Ξ£ and the word πΌ β Ξ£π . Let rej = {π β ππ β£ ππ occurs infinitely often in π} and acc = ππ β rej. The quantity πβ³,πΌ (rej) is said to be the probability of rejecting πΌ and will be denoted by ππππβ³,πΌ . The quantity πβ³,πΌ (acc) is said to be the probability of accepting πΌ and will be denoted by ππππβ³,πΌ . We have that ππππβ³,πΌ + ππππβ³,πΌ = 1.
Please note that as the probability of transitioning from a reject state to a non-reject state
is 0, the probability of rejecting an infinite word can be seen as a limit of the probability of
rejecting its finite prefixes. This is the content of the following Lemma.
LEMMA 4.1. For any FPM β³ = (π, ππ , ππ , πΏ) over Ξ£ and any word πΌ = π1 π2 . . . β Ξ£π , the sequence of real numbers {πΏπ1 π2 ...ππ (ππ , ππ ) β£ π β β} is a non-decreasing sequence. Furthermore, ππππβ³,πΌ = limπββ πΏπ1 π2 ...ππ (ππ , ππ ).
The rest of this section is organized as follows. We first present some special monitors
and technical constructions involving FPMs. We then conclude this section by introducing
the class of monitorable languages that we consider in this paper.
4.1 Some Tools and Techniques
The special monitors and monitor constructions that we present in this section will be used
both in the definition of the classes of monitorable languages, as well as in establishing
the expressiveness results in Section 5. We will also generalize the iteration lemma for
probabilistic automata over finite words to FPMs.
Scaling acceptance/rejection probabilities. Given a FPM β³ and a real number π₯ β
[0, 1], we can construct monitors, where the acceptance (or rejection) probability of a word
is scaled by a factor π₯. We begin by proving such a proposition for acceptance probabilities,
before turning our attention to rejection probabilities.
PROPOSITION 4.2. Given a FPM β³ on Ξ£ and a real number π₯ β [0, 1], there is a FPM β³π₯ such that for any πΌ β Ξ£π , ππππβ³π₯,πΌ = π₯ × ππππβ³,πΌ .
PROOF. Let β³ = (π, ππ , ππ , πΏ). Pick a new state ππ 0 not in π and let β³π₯ = (π βͺ
{ππ 0 }, ππ 0 , ππ , πΏπ₯ ) where the transition function πΏπ₯ is defined as follows. For each π β Ξ£
βπΏπ₯ (π, π, π β² ) = πΏ(π, π, π β² ) if π, π β² β π.
βπΏπ₯ (ππ 0 , π, π β² ) = π₯ × πΏ(ππ , π, π β² ) if π β² β π β {ππ }.
βπΏπ₯ (ππ 0 , π, ππ ) = 1 β π₯ + π₯ × πΏ(ππ , π, ππ ).
βπΏπ₯ (π, π, π β² ) = 0 if π β² = ππ 0 .
Now, for any π’ β Ξ£β , we can show by induction that πΏπ₯ (ππ 0 , π’, ππ ) = (1 β π₯) + π₯ ×
πΏ(ππ , π’, ππ ) and πΏπ₯ (ππ 0 , π’, π) = π₯ × πΏ(ππ , π’, π) for π β= ππ . It is easy to see that β³π₯ is the
desired FPM.
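The construction in this proof can be sketched as follows; the base monitor (three states over a one-letter alphabet) is a hypothetical example of ours, used only to check that acceptance probabilities scale by π₯.

```python
# Sketch of Proposition 4.2: add a fresh initial state qs0 whose outgoing
# probabilities are those of qs scaled by x, with the deficit 1 - x routed
# to the reject state.  Hypothetical base FPM over one symbol.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def power(m, n):
    r = m
    for _ in range(n - 1):
        r = mat_mul(r, m)
    return r

# Base FPM: 0 = qs, 1 = safe sink, 2 = qr.  On 'aaa...', acc = 0.5.
delta = [[0.0, 0.5, 0.5],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]

x, qs, qr = 0.4, 0, 2
scaled = [row + [0.0] for row in delta]      # nothing moves into qs0
qs0_row = [x * p for p in delta[qs]]         # scale qs's distribution by x
qs0_row[qr] += 1 - x                         # deficit goes to the reject state
scaled.append(qs0_row + [0.0])               # qs0 is the new state 3

steps = 50
acc_base = 1 - power(delta, steps)[qs][qr]
acc_scaled = 1 - power(scaled, steps)[3][qr]
assert abs(acc_scaled - x * acc_base) < 1e-9
print(acc_base, acc_scaled)  # acc_scaled is x times acc_base
```

Here a long finite power of the matrix stands in for the limit of Lemma 4.1, which suffices because all non-initial states in this toy example are absorbing.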
Similarly we can construct a FPM β³π₯ which scales the probability of rejecting any
word by a factor of π₯.
PROPOSITION 4.3. Given a FPM β³ on Ξ£ and a real number π₯ β [0, 1], there is a FPM β³π₯ such that for any πΌ β Ξ£π , ππππβ³π₯,πΌ = π₯ × ππππβ³,πΌ .
PROOF. Let β³ = (π, ππ , ππ , πΏ). Pick two new states ππ0 and ππ1 not occurring in π
and let β³π₯ = (π βͺ {ππ0 , ππ1 }, ππ , ππ0 , πΏ π₯ ) where πΏ π₯ is defined as follows. For each π β Ξ£
βπΏ π₯ (π, π, π β² ) = πΏ(π, π, π β² ) if π, π β² β π β {ππ }.
βπΏ π₯ (ππ , π, ππ0 ) = π₯.
βπΏ π₯ (ππ , π, ππ1 ) = 1 β π₯.
βπΏ π₯ (π, π, π) = 1 if π β {ππ0 , ππ1 }.
βπΏ π₯ (π, π, π β² ) = 0 if none of the above hold.
Now, for any π’ β Ξ£β and π β Ξ£, we can show that πΏ π₯ (ππ , π’π, ππ0 ) = π₯ × πΏ(ππ , π’, ππ ).
Using Lemma 4.1, we get the desired result.
We get the following as an immediate consequence of Propositions 4.2 and 4.3.
PROPOSITION 4.4. Given a FPM β³ on Ξ£ and a real number π₯ β [0, 1], there are FPMβs β³π₯ (as in Proposition 4.2) and β³π₯ (as in Proposition 4.3) such that for any πΌ β Ξ£π , ππππβ³π₯,πΌ = π₯ × ππππβ³,πΌ and ππππβ³π₯,πΌ = π₯ × ππππβ³,πΌ .
Product automata. Given two FPMβs, β³1 and β³2 , we can construct a new FPM β³
such that the probability that β³ accepts a word πΌ is the product of the probabilities that
β³1 and β³2 accept the same word πΌ.
ACM Journal Name, Vol. V, No. N, Month 20YY.
β
13
PROPOSITION 4.5. Given two FPMβs β³1 and β³2 on Ξ£, there is a FPM β³1 β β³2 such that for any πΌ β Ξ£π , ππππβ³1ββ³2,πΌ = ππππβ³1,πΌ × ππππβ³2,πΌ .
PROOF. Let β³1 = (π1 , ππ 1 , ππ1 , πΏ1 ) and β³2 = (π2 , ππ 2 , ππ2 , πΏ2 ). Pick a new state ππ
not occurring in π1 βͺ π2 . Let β³1 β β³2 = (π, ππ , ππ , πΏ) where
βπ = {ππ } βͺ ((π1 β {ππ1 }) × (π2 β {ππ2 }))
βππ = (ππ 1 , ππ 2 )
βFor each π β Ξ£,
βπΏ((π1 , π2 ), π, (π1β² , π2β² )) = πΏ1 (π1 , π, π1β² ) πΏ2 (π2 , π, π2β² ).
βπΏ((π1 , π2 ), π, ππ ) = 1 β Ξ£πβ=ππ πΏ((π1 , π2 ), π, π).
βπΏ(ππ , π, ππ ) = 1 and πΏ(ππ , π, π) = 0 for π β= ππ .
Given any finite word π’ β Ξ£+ , we can show by induction on the length of π’ that for any π1 , π1β² β π1 β {ππ1 } and π2 , π2β² β π2 β {ππ2 }, we have πΏπ’ ((π1 , π2 ), (π1β² , π2β² )) = πΏ1 (π1 , π’, π1β² ) × πΏ2 (π2 , π’, π2β² ).
Now, given πΌ = π1 π2 . . ., let π’π = π1 . . . ππ . Using Lemma 4.1, we get

ππππβ³1ββ³2,πΌ = 1 β ππππβ³1ββ³2,πΌ
= 1 β limπββ πΏπ’π (ππ , ππ )
= limπββ (1 β πΏπ’π (ππ , ππ ))
= limπββ Ξ£πβ=ππ πΏπ’π (ππ , π)
= limπββ Ξ£π1β²β=ππ1 ,π2β²β=ππ2 πΏ1 (ππ 1 , π’π , π1β² ) πΏ2 (ππ 2 , π’π , π2β² )
= limπββ (Ξ£π1β²β=ππ1 πΏ1 (ππ 1 , π’π , π1β² )) × (Ξ£π2β²β=ππ2 πΏ2 (ππ 2 , π’π , π2β² ))
= limπββ (1 β πΏ1 (ππ 1 , π’π , ππ1 )) × (1 β πΏ2 (ππ 2 , π’π , ππ2 ))
= ππππβ³1,πΌ × ππππβ³2,πΌ .
Given two FPMβs, β³1 and β³2 , we can construct a new FPM β³ such that the probability that β³ rejects a word πΌ is the product of the probabilities that β³1 and β³2 reject
the same word πΌ.
PROPOSITION 4.6. Given two FPMβs β³1 and β³2 on Ξ£, there is a FPM β³1 × β³2 such that for any πΌ β Ξ£π , ππππβ³1×β³2,πΌ = ππππβ³1,πΌ × ππππβ³2,πΌ .
PROOF. Let β³1 = (π1 , ππ 1 , ππ1 , πΏ1 ) and β³2 = (π2 , ππ 2 , ππ2 , πΏ2 ). Let β³1 × β³2 =
(π, ππ , ππ , πΏ) where
βπ = π1 × π2
βππ = (ππ 1 , ππ 2 )
βππ = (ππ1 , ππ2 )
βFor each π β Ξ£, πΏ((π1 , π2 ), π, (π1β² , π2β² )) = πΏ1 (π1 , π, π1β² )πΏ2 (π2 , π, π2β² ).
Given any finite word π’ β Ξ£+ , we can show by induction on the length of π’ that for any
π1 , π1β² β π1 and π2 , π2β² β π2 , we have πΏπ’ ((π1 , π2 ), (π1β² , π2β² )) = πΏ1 (π1 , π’, π1β² ) × πΏ2 (π2 , π’, π2β² ).
The result now follows from Lemma 4.1.
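The product construction above can also be sketched numerically; the two component monitors below are hypothetical one-symbol examples, and the check confirms that rejection probabilities multiply.

```python
# Sketch of M1 x M2 (Proposition 4.6):
# delta((q1,q2), a, (q1',q2')) = delta1(q1,a,q1') * delta2(q2,a,q2'),
# with reject state (qr1, qr2).  Hypothetical one-symbol components.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def power(m, n):
    r = m
    for _ in range(n - 1):
        r = mat_mul(r, m)
    return r

def product(d1, d2):
    n1, n2 = len(d1), len(d2)
    return [[d1[q1][p1] * d2[q2][p2]
             for p1 in range(n1) for p2 in range(n2)]
            for q1 in range(n1) for q2 in range(n2)]

# Components: state 0 = initial, 1 = safe sink, 2 = reject.
d1 = [[0.0, 0.5, 0.5], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
d2 = [[0.0, 0.3, 0.7], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
dp = product(d1, d2)

steps = 30
rej1 = power(d1, steps)[0][2]
rej2 = power(d2, steps)[0][2]
rejp = power(dp, steps)[0][2 * 3 + 2]   # state (2,2) is the product's reject
assert abs(rejp - rej1 * rej2) < 1e-9
print(rej1, rej2, rejp)  # the third value is the product of the first two
```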
Iteration Lemma. We present herein a technical lemma that will be used to demonstrate
that certain languages are not recognized by probabilistic monitors. Please note that the
iteration lemma is a generalization of a similar lemma for probabilistic automata over finite
words [Nasu and Honda 1968; Paz 1971] and the generalization relies on the fact that the
probability of rejection of an infinite word can be taken as a limit of probability of rejection
of its finite prefixes (see Lemma 4.1).
LEMMA 4.7. Given a FPM β³ on Ξ£ and a finite word π’ β Ξ£+ , there is a natural number π and real numbers π0 , . . . , ππβ1 β β (depending only upon β³ and π’) such that for all π£ β Ξ£+ , πΌ β Ξ£π ,

ππππβ³,π£π’ππΌ = ππβ1 ππππβ³,π£π’πβ1πΌ + . . . + π0 ππππβ³,π£πΌ

and ππβ1 + . . . + π0 = 1.
PROOF. Let β³ = (π, ππ , ππ , πΏ). Consider the matrix πΏπ’ . From elementary linear algebra, there is a polynomial π(π₯) = π₯π β ππβ1 π₯πβ1 β ππβ2 π₯πβ2 β . . . β π0 (the minimal polynomial of πΏπ’ ) such that
(1) π(πΏπ’ ) = 0 and
(2) π(π) = 0 whenever π is an eigenvalue of πΏπ’ .
Since π(πΏπ’ ) = 0, we get πΏπ’π = ππβ1 πΏπ’πβ1 + . . . + π0 I where I is the identity matrix. Now if πΌ = π1 π2 . . . then we get for each π > 0,
πΏπ£ πΏπ’π πΏπ1 ...ππ = ππβ1 πΏπ£ πΏπ’πβ1 πΏπ1 ...ππ + . . . + π0 πΏπ£ πΏπ1 ...ππ .
This implies that
πΏπ£π’π π1 ...ππ = ππβ1 πΏπ£π’πβ1 π1 ...ππ + . . . + π0 πΏπ£π1 ...ππ .
Thus,
πΏπ£π’π π1 ...ππ (ππ , ππ ) = ππβ1 πΏπ£π’πβ1 π1 ...ππ (ππ , ππ ) + . . . + π0 πΏπ£π1 ...ππ (ππ , ππ ).
Taking the limit (π β β) on both sides, we get by Lemma 4.1

ππππβ³,π£π’ππΌ = ππβ1 ππππβ³,π£π’πβ1πΌ + . . . + π0 ππππβ³,π£πΌ .
Now, please note that 1 is an eigenvalue of πΏπ’ (with an eigenvector all of whose entries are
1). Thus, we get π(1) = 0 which implies that ππβ1 + . . . + π0 = 1.
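The final observation rests on a standard fact: every row-stochastic matrix has eigenvalue 1, witnessed by the all-ones vector. A quick sketch with a hypothetical πΏπ’ of ours:

```python
# Every row of delta_u sums to 1, so delta_u maps the all-ones vector to
# itself; hence 1 is an eigenvalue, p(1) = 0, and the a_i's sum to 1.
delta_u = [[0.25, 0.5, 0.25],   # hypothetical stochastic matrix
           [0.0,  0.5, 0.5],
           [0.0,  0.0, 1.0]]
ones = [1.0, 1.0, 1.0]
image = [sum(row[j] * ones[j] for j in range(3)) for row in delta_u]
assert image == ones
print(image)  # [1.0, 1.0, 1.0]
```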
The unit interval as the acceptance (or rejection) probability of a monitor. We will
now define two monitors that will be used for proving expressiveness results. These monitors will be defined on the alphabet Ξ£ = {0, 1}. By associating the numeral 0 to 0 and
associating the numeral 1 to 1, we can associate each πΌ = π1 π2 . . . β Ξ£π to a real number bin(πΌ)
by thinking of πΌ as the βbinaryβ real number 0.π1 π2 . . .. Formally,
Definition: Let Ξ£ = {0, 1}, bin(0) = 0 and bin(1) = 1. We define the function bin(·) : Ξ£+ β [0, 1] as bin(π1 π2 . . . ππ ) = Ξ£_{π=1}^{π} ππ /2^π , and bin(·) : Ξ£π β [0, 1] as bin(π1 π2 . . .) = Ξ£_{π=1}^{β} ππ /2^π . If π₯ β [0, 1] is an irrational number, let wrd(π₯) β Ξ£π be the unique word such that bin(wrd(π₯)) = π₯.

Fig. 3. Transition diagram for FPMs β³Id and β³1βId

Fig. 4. FPM β³ used in the proof of Theorem 5.8
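The function bin(·) on finite words, and the way prefixes of an infinite word approximate its value from below, can be sketched directly from the definition (the sample words below are arbitrary choices of ours):

```python
# Sketch: bin(b1...bn) = sum of b_i / 2^i, reading the word as the
# binary expansion 0.b1b2...

def bin_word(u):
    return sum(int(b) / 2 ** (i + 1) for i, b in enumerate(u))

assert bin_word('1') == 0.5
assert bin_word('011') == 0.375

# Finite prefixes of (10)^w approximate bin((10)^w) = 2/3 from below.
prefix_vals = [bin_word('10' * k) for k in range(1, 6)]
assert all(x <= y for x, y in zip(prefix_vals, prefix_vals[1:]))
print(prefix_vals)
```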
The following proposition will be useful.
PROPOSITION 4.8. Let Ξ£ = {0, 1} and let π₯ β [0, 1] be an irrational number. Then the languages ββ€π₯ = {πΌ β Ξ£π β£ bin(πΌ) β€ π₯} and β<π₯ = {πΌ β Ξ£π β£ bin(πΌ) < π₯} are not π-regular.
PROOF. We will just show that ββ€π₯ is not π-regular. The proof of non-π-regularity of β<π₯ is similar.
Please note that the language ββ€π₯ defines an equivalence relation (the MyhillβNerode equivalence) on Ξ£β : π’ β‘ π£ iff for all πΌ β Ξ£π , π’πΌ β ββ€π₯ β π£πΌ β ββ€π₯ . If ββ€π₯ is π-regular, then there should be only a finite number of β‘ classes (see [Thomas 1990]). We will show that this is not the case.
By confusing 0 with the numeral 0 and 1 with the numeral 1, consider the binary expansion of π₯ = .π1 π2 . . .. Let wrd(π₯) = π1 π2 . . . and for each π > 0, π₯π be the suffix
ππ ππ+1 . . .. Since π₯ is irrational, no two suffixes π₯π and π₯π for π < π can be the same
infinite word.
So given π < π, let π’π = π1 π2 . . . ππ and π’π = π1 π2 . . . ππ . Let π be the smallest natural number such that ππ+π β= ππ+π . Let π½ = ππ+1 ππ+2 . . . ππ+πβ1 10π . Please note that, since the first π β 1 symbols after position π agree with those after position π, we also have π½ = ππ+1 ππ+2 . . . ππ+πβ1 10π . There are two cases: either ππ+π = 1 and ππ+π = 0, or ππ+π = 0 and ππ+π = 1. If ππ+π = 1 and ππ+π = 0, it can be easily shown that bin(π’π π½) < π₯ while bin(π’π π½) > π₯. If ππ+π = 0 and ππ+π = 1, bin(π’π π½) > π₯ while bin(π’π π½) < π₯. Hence π’π ββ‘ π’π for π < π. Thus, ββ€π₯ is not π-regular.
We point out here that the proof of non-π-regularity of ββ€π₯ and β<π₯ is a modification of the proof that the language of finite words {π’ β Ξ£β β£ bin(π’) < π₯} is not regular (see [Paz 1971; Salomaa 1973]).
We can construct two monitors whose probability of accepting a word πΌ is bin(πΌ) and
1 β bin(πΌ) respectively as follows.
LEMMA 4.9. Let Ξ£ = {0, 1}. There are RatFPMβs β³Id and β³1βId on Ξ£ such that for any word πΌ β Ξ£π ,

ππππβ³Id,πΌ = bin(πΌ) and ππππβ³1βId,πΌ = 1 β bin(πΌ).
PROOF. The transition diagrams of the FPMs β³Id and β³1βId are shown in Figure 3. Let π = {π0 , π1 , π2 } and πΏ : π × Ξ£ × π β [0, 1] be defined as follows. The states π1 and π2 are absorbing, i.e., πΏ(π1 , 0, π1 ) = πΏ(π1 , 1, π1 ) = πΏ(π2 , 0, π2 ) = πΏ(π2 , 1, π2 ) = 1. For transitions out of π0 , πΏ(π0 , 0, π0 ) = πΏ(π0 , 0, π1 ) = πΏ(π0 , 1, π0 ) = πΏ(π0 , 1, π2 ) = 1/2. Let β³Id = (π, π0 , π1 , πΏ) and β³1βId = (π, π0 , π2 , πΏ).
Given π’ β Ξ£+ , it can be easily shown by induction (on the length of π’) that πΏπ’ (π0 , π0 ) = 1/2β£π’β£ , πΏπ’ (π0 , π2 ) = bin(π’) and πΏπ’ (π0 , π1 ) = 1 β bin(π’) β 1/2β£π’β£ . Using Lemma 4.1, it can easily be shown that for any word πΌ β Ξ£π , ππππβ³Id,πΌ = 1 β ππππβ³Id,πΌ = bin(πΌ) and ππππβ³1βId,πΌ = 1 β ππππβ³1βId,πΌ = 1 β bin(πΌ).
We point out here that if we consider the reject state as an accept state and view the monitor β³1βId as a probabilistic automaton over finite words, the resulting automaton is the probabilistic automaton (see [Salomaa 1973]) often used to show that non-regular languages are accepted by probabilistic automata.
4.2 Monitored Languages
We conclude this section by defining the classes of probabilistic monitors that we will consider in this paper. Recall that in the case of deterministic monitors, the language rejected is defined as the set of words upon which the monitor reaches the (unique) reject state. The language permitted by the monitor is defined as the complement of the set of words rejected. For probabilistic monitoring, on the other hand, the set of languages permitted may depend upon the probability of rejecting a word. In other words, it is reasonable to think of the language permitted by a FPM as the set of words which are rejected with a probability strictly less than (or at most) a threshold π₯. This gives rise to the following definition.
Definition: Given a FPM β³ on Ξ£ and π₯ β [0, 1],
ββ<π₯ (β³) = {πΌ β Ξ£π β£ ππππβ³,πΌ < π₯}.
βββ€π₯ (β³) = {πΌ β Ξ£π β£ ππππβ³,πΌ β€ π₯}.
Thus potentially, given π₯ β [0, 1], we can define two subclasses of languages over an
alphabet Ξ£β {β β£ ββ³ s.t. β = β<π₯ (β³)} and {β β£ ββ³ s.t. β = ββ€π₯ (β³)}. It turns out
that as long as π₯ is strictly between 0 and 1, the exact value of π₯ does not affect the classes
defined.
LEMMA 4.10. Let β³ be a FPM on an alphabet Ξ£. Then, given π₯1 , π₯2 β (0, 1), there is a FPM β³β² such that ββ€π₯1 (β³) = ββ€π₯2 (β³β²) and β<π₯1 (β³) = β<π₯2 (β³β²).
PROOF. First consider the case π₯2 β€ π₯1 . Let π₯3 = π₯2 /π₯1 . By Proposition 4.4, there is a FPM β³π₯3 such that for all πΌ β Ξ£π , ππππβ³π₯3,πΌ = π₯3 × ππππβ³,πΌ . An easy calculation shows that ββ€π₯2 (β³π₯3 ) = ββ€π₯1 (β³) and β<π₯2 (β³π₯3 ) = β<π₯1 (β³).
Now, consider the case π₯1 β€ π₯2 . Let π₯3 = (1 β π₯2 )/(1 β π₯1 ). By Proposition 4.4, there is a FPM β³π₯3 such that for all πΌ β Ξ£π , ππππβ³π₯3,πΌ = π₯3 × ππππβ³,πΌ . Now for any word πΌ,

ππππβ³π₯3,πΌ β€ π₯2 β 1 β π₯2 β€ 1 β ππππβ³π₯3,πΌ
β 1 β π₯2 β€ ππππβ³π₯3,πΌ
β 1 β π₯2 β€ π₯3 × ππππβ³,πΌ
β 1 β π₯1 β€ ππππβ³,πΌ
β 1 β ππππβ³,πΌ β€ π₯1
β ππππβ³,πΌ β€ π₯1 .
Hence, ββ€π₯2 (β³π₯3 ) = ββ€π₯1 (β³). Similarly, β<π₯2 (β³π₯3 ) = β<π₯1 (β³).
Please note that β<0 (β³) = β and ββ€1 (β³) = Ξ£π for any FPM. Hence, we restrict our attention to four classes of languages as follows.
Definition: Given an alphabet Ξ£ and a language β β Ξ£π .
ββ is said to be monitorable with strong acceptance if there is a monitor β³ such that
β = ββ€0 (β³). We will denote the class of such properties by M SA.
ββ is said to be monitorable with weak acceptance if there is a monitor β³ such that
β = β<1 (β³). We will denote the class of such properties by M WA.
ββ is said to be monitorable with strict cut-point if there is a monitor β³ and 0 < π₯ < 1
such that β = β<π₯ (β³). Such properties will be denoted by M SC.
ββ is said to be monitorable with non-strict cut-point if there is a monitor β³ and 0 <
π₯ < 1 such that β = ββ€π₯ (β³). Such properties will be denoted by M NC.
We point out here that in the literature on probabilistic automata over finite words [Rabin
1963; Paz 1971; Salomaa 1973], there is usually no distinction between strict and non-strict
inequality. Also, if we were to define the classes M WA and M SA for probabilistic automata over finite words, they would turn out to be subclasses of regular languages. As we will see,
over infinite words, while the class of languages M SA coincides with π-regular and safety
languages, the class M WA may contain non-π-regular languages.
5. EXPRESSIVENESS
In this section, we will compare the relative expressiveness of the classes of languages M SA, M WA, M SC and M NC defined in Section 4.2. For this section, we will assume that the alphabet Ξ£ is fixed unless otherwise stated. We will also assume that Ξ£ contains at least 2 elements (if Ξ£ contains only one element, then Ξ£π consists of exactly one element). We summarize our results in Figure 5 below. The rest of this section is organized as follows. We begin by establishing the results for the classes M SA and M NC. We then consider the classes M WA and M SC. We conclude the section by proving the results for robust monitors.
5.1 Monitored Languages M SA and M NC.
Please recall that given an alphabet Ξ£, M SA= {β β Ξ£π β£ β FPM β³ s.t. β = ββ€0 (β³)}
and M NC= {β β Ξ£π β£β FPM β³ and π₯ β (0, 1) s.t. β = ββ€π₯ (β³)}. We start by showing
that both M SA and M NC are subclasses of safety languages.
THEOREM 5.1. M SA, M NC β Safety.
Fig. 5. An arrow from class A to B indicates the strict containment of class A in B. The numbers on the arrows refer to the theorems in the paper that establish these relationships. (The diagram relates the classes Regular β© Safety = M SA, Robust, M NC, Safety, Regular β© AlmostSafety, M WA, M SC, and AlmostSafety; one pair of classes is marked as not comparable (5.15).)
PROOF. Let β³ = (π, ππ , ππ , πΏ) be a FPM on an alphabet Ξ£ and let 0 β€ π₯ < 1. It suffices to show that the set β = Ξ£π β ββ€π₯ (β³) is an open set. Pick any word πΌ = π1 π2 . . . β β. Then, we must have by definition and Lemma 4.1, ππππβ³,πΌ = limπββ πΏπ1 ...ππ (ππ , ππ ) > π₯. Hence, there is a π > 0 such that πΏπ1 ...ππ (ππ , ππ ) > π₯. Now, consider the open ball B(πΌ, 1/2π ) = π1 . . . ππ Ξ£π . Clearly πΌ β B(πΌ, 1/2π ) and again by Lemma 4.1, ππππβ³,π½ β₯ πΏπ1 ...ππ (ππ , ππ ) > π₯ for any π½ β π1 . . . ππ Ξ£π . Thus B(πΌ, 1/2π ) β β. Hence, β is an open set.
We will now show that any π-regular safety language is contained in the classes M SA
and M NC.
THEOREM 5.2. Regular β© Safety β M SA and Regular β© Safety β M NC.
PROOF. If β is π-regular and safety, then there is a deterministic BuΜchi automaton β¬ = (π, Ξ, ππ , {ππ }) with (ππ , π, ππ ) β Ξ for each π β Ξ£ such that Ξ£π β β is the language recognized by β¬ (see Section 3). Now consider the FPM β³ = (π, ππ , ππ , πΏ) where πΏ(π, π, πβ²) is 1 if (π, π, πβ²) β Ξ and is 0 otherwise. It can be shown easily that ππππβ³,πΌ = 0 β πΌ β β and ππππβ³,πΌ = 1 β πΌ β Ξ£π β β. Hence β = ββ€π₯ (β³) for all 0 β€ π₯ < 1.
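In the proof above every transition probability is 0 or 1, so the FPM simply runs the deterministic safety automaton. A sketch with a hypothetical safety automaton of ours that rejects exactly the words containing the factor 11:

```python
# Sketch: a deterministic safety monitor viewed as a 0/1-probability FPM.
# Hypothetical automaton: reject exactly the words containing '11'.
trans = {('ok', '0'): 'ok',    ('ok', '1'): 'seen1',
         ('seen1', '0'): 'ok', ('seen1', '1'): 'qr',
         ('qr', '0'): 'qr',    ('qr', '1'): 'qr'}

def reaches_reject(prefix):
    q = 'ok'
    for sym in prefix:
        q = trans[(q, sym)]
    return q == 'qr'

# The rejection probability of a word is 0 or 1, depending only on
# whether some finite prefix drives the automaton into qr.
assert not reaches_reject('10101')
assert reaches_reject('0110')
print('prefixes containing 11 reach the reject state')
```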
We will now show that the class of π-regular and safety languages coincides exactly with the class M SA. Thus, as a consequence of Theorem 5.2, M SA β M NC.
THEOREM 5.3. M SA = Regular β© Safety.
PROOF. In light of Theorem 5.2, we only need to show that M SA β Regular β© Safety. Pick β β M SA. Let β³ = (π, ππ , ππ , πΏ) be a monitor such that ββ€0 (β³) = β. Please note that β is a safety language by Theorem 5.1. Thus, it suffices to show that Ξ£π β β is π-regular.
Now, in order to show that Ξ£π β β is a π-regular set, consider the BuΜchi automaton
β¬ = (π, Ξ, ππ , {ππ }) where (π, π, π β² ) β Ξ iff πΏ(π, π, π β² ) > 0. Let β1 be the language
recognized by β¬. We claim that β1 = Ξ£π β β.
It is easy to show that a word πΌ = π1 π2 . . . β β1 iff there is a finite prefix π1 . . . ππ of πΌ and a sequence of states π0 = ππ , π1 , . . . , ππ = ππ such that (ππ , ππ+1 , ππ+1 ) β Ξ for all 0 β€ π < π. Since (ππ , ππ+1 , ππ+1 ) β Ξ iff πΏ(ππ , ππ+1 , ππ+1 ) > 0, it can be easily shown that such a sequence of states exists iff πΏπ1 ...ππ (ππ , ππ ) > 0. Thus πΌ β β1 iff there is a finite prefix π1 . . . ππ such that
πΏπ1 ...ππ (ππ , ππ ) > 0. In light of Lemma 4.1, πΌ β β1 iff ππππβ³,πΌ > 0. Thus, β1 = Ξ£π β β as required.
Hence, we have M SA β M NC β Safety. We will now show that each of these containments is strict. Please note that since our alphabet is finite, the set M SA = Regular β© Safety is countable while the set Safety is uncountable. We can show that the set M NC contains an uncountable number of safety languages that are not π-regular. The proof uses the automaton β³1βId constructed in Lemma 4.9. As we pointed out before, the same automaton viewed as a probabilistic automaton over finite words is often used to prove that non-regular languages (of finite words) can be recognized by probabilistic finite automata.
THEOREM 5.4. The class M NC contains an uncountable number of safety languages which are not π-regular. Thus M SA β M NC.
PROOF. It suffices to prove the result for the case where the alphabet contains two elements. Let Ξ£ = {0, 1}. Consider the FPM β³1βId constructed in Lemma 4.9. We have for every word πΌ β Ξ£π , ππππβ³1βId,πΌ = 1 β ππππβ³1βId,πΌ = bin(πΌ). Thus, for each irrational π₯ β (0, 1), the language ββ€π₯ (β³1βId ) = {πΌ β£ bin(πΌ) β€ π₯}, which is not π-regular by Proposition 4.8. Since distinct irrational cut-points yield distinct languages, and there are uncountably many irrationals in (0, 1), the claim follows.
One may argue that the non-π-regularity in Theorem 5.4 is a consequence of the irrationality of the cut-point π₯ in the proof. However,
PROPOSITION 5.5. There is a RatFPM β³ on Ξ£ = {0, 1} such that the language ββ€1/2 (β³) is not π-regular.
PROOF. Consider the FPM β³Id constructed in Lemma 4.9. Let β³ = β³Id β β³Id as defined in Proposition 4.5. Now, ππππβ³,πΌ = ππππβ³Id,πΌ × ππππβ³Id,πΌ = (bin(πΌ))2 . Thus, ππππβ³,πΌ = 1 β (bin(πΌ))2 . Hence, ββ€1/2 (β³) = {πΌ β£ bin(πΌ) β₯ 1/β2}, which is not π-regular by Proposition 4.8 and the fact that the class of π-regular languages is closed under complementation.
We finally show that the class of safety languages strictly contains the class M NC.
THEOREM 5.6. M NC β Safety.
PROOF. In light of Theorem 5.1, we just need to show that the containment is strict. Let
Ξ£ = {0, 1}. We need to show that there is a safety language β β Ξ£π such that for any
monitor β³ and π₯ β (0, 1), β β= ββ€π₯ (β³). Let πΏ = {0π 1(0β 1)β 0π 1 β£ π β β, π > 0}. Let
β1 = πΏΞ£π and β = Ξ£π β β1 . Now β1 is an open set and hence β is a safety language.
Assume that there is some β³ and some π₯ β (0, 1) such that β = ββ€π₯ (β³). Then by Lemma 4.7, there are constants π0 , π1 , . . . , ππβ1 β β such that for all πΌ β Ξ£π , ππππβ³,0ππΌ = ππβ1 ππππβ³,0πβ1πΌ + . . . + π0 ππππβ³,πΌ and ππβ1 + . . . + π0 = 1.
β³, 0πβ1 πΌ
Consider the set Pos = {π β£ ππ > 0} and let π1 , π2 , . . . , ππ be an enumeration of the elements of Pos. Please note that Pos is a non-empty set (otherwise the ππ βs do not add up to 1). Also observe that for πΌ = 010π1 +1 10π2 +1 . . . 10ππ +1 1π , it is the case that 0π πΌ β β1 iff π β Pos, and hence 0π πΌ is not in β iff π β Pos. Therefore, for each π β Pos, we have ππππβ³,0ππΌ > π₯ and for π not in Pos, ππππβ³,0ππΌ β€ π₯. Please note that β = ββ€π₯ (β³) by assumption. Thus, we
have

ππππβ³,0ππΌ β π₯ = ππβ1 ππππβ³,0πβ1πΌ + . . . + π0 ππππβ³,πΌ β π₯ × 1
= ππβ1 ππππβ³,0πβ1πΌ + . . . + π0 ππππβ³,πΌ β π₯ (ππβ1 + . . . + π0 )
= ππβ1 (ππππβ³,0πβ1πΌ β π₯) + . . . + π0 (ππππβ³,πΌ β π₯).
Now, the left-hand side of the above equation is β€ 0 as ππππβ³,0ππΌ β€ π₯. The right-hand side is > 0 as ππ > 0 β ππππβ³,0ππΌ β π₯ > 0 and ππ β€ 0 β ππππβ³,0ππΌ β π₯ β€ 0. Thus, we obtain a contradiction.
Consider the language πΏ β Ξ£+ defined in the proof above. We point out here that the language πΏΞ£β is often used as an example to show that there are context-free languages over finite words which are not accepted by any probabilistic automaton [Paz 1971]. The proof uses the iteration lemma for probabilistic automata over finite words and is similar to the proof outlined above. Summarizing the results of this section, we have Regular β© Safety = M SA β M NC β Safety.
5.2 Monitored Languages M WA and M SC.
Recall that given an alphabet Ξ£, M WA= {β β Ξ£π β£ β FPM β³ s.t. β = β<1 (β³)}
and M SC= {β β Ξ£π β£ β FPM β³ and π₯ β (0, 1) s.t. β = β<π₯ (β³)}. Note that if
we were to allow for βinfiniteβ state monitoring, the class M WA would coincide with
AlmostSafety [Sistla and Srinivas 2008]. However, FPMβs as defined in this paper have
only finite memory. We start by showing that both M WA and M SC are subclasses of almost safety languages. Recall that a language β β Ξ£π is an almost safety language if and only if it can be written as a countable union of safety languages. Of course, the containment
M WA β AlmostSafety can also be viewed as a special case of correspondence between
AlmostSafety and the monitoring with infinite states [Sistla and Srinivas 2008].
THEOREM 5.7. M WA, M SC β AlmostSafety.
PROOF. Please note that for any FPM β³ and any 0 < π₯ β€ 1, we have β<π₯ (β³) = βͺβπ=1 {πΌ β£ ππππβ³,πΌ β€ π₯ β 1/π}. The result follows by observing that {πΌ β£ ππππβ³,πΌ β€ π₯ β 1/π} is a safety language for each π by Theorem 5.1.
We will now show that the class of ω-regular and almost safety languages is strictly contained in the class M_WA. The proof of containment relies on the fact that if a language ℒ ⊆ Σ^ω is ω-regular and almost safety then its complement is recognized by a deterministic Büchi automaton [Perrin and Pin 2004; Thomas 1990] (see Section 3). Once this is observed, the proof of containment follows the lines of the proof of the fact that every AlmostSafety language is monitorable with infinite "memory" [Sistla and Srinivas 2008]. The strictness of the containment is witnessed by a probabilistic monitor which is a modified version of the probabilistic Büchi automaton defined in [Baier and Größer 2005].
THEOREM 5.8. Regular ∩ AlmostSafety ⊊ M_WA.
PROOF. We first show that Regular ∩ AlmostSafety ⊆ M_WA. Let ℒ ∈ Regular ∩ AlmostSafety. Since ℒ is ω-regular and almost safety, there is a deterministic Büchi automaton ℬ = (Q, Δ, q_s, Q_f), with accepting set Q_f ⊆ Q, such that Σ^ω − ℒ is the language recognized by ℬ. Now pick
ACM Journal Name, Vol. V, No. N, Month 20YY.
a new state q_r and consider the FPM ℳ = (Q ∪ {q_r}, q_s, q_r, δ), where for each a ∈ Σ:

    δ(q, a, q′) = 1/2  if q ∈ Q_f and q′ = q_r
    δ(q, a, q′) = 1/2  if q ∈ Q_f, q′ ∈ Q and (q, a, q′) ∈ Δ
    δ(q, a, q′) = 1    if q = q′ = q_r
    δ(q, a, q′) = 1    if q ∈ Q − Q_f, q′ ∈ Q and (q, a, q′) ∈ Δ
    δ(q, a, q′) = 0    otherwise

It can be shown easily that ℒ = ℒ_{<1}(ℳ). Hence, Regular ∩ AlmostSafety ⊆ M_WA.
In order to show that the containment is strict, we just need to show that there is an FPM ℳ such that ℒ_{<1}(ℳ) is not an ω-regular language. Let Σ = {0, 1} and consider the FPM ℳ = (Q, q_s, q_r, δ), shown in Figure 4 on page 15, where Q = {q_s, q, q_r} and δ is defined as follows: δ(q_s, 1, q_r) = 1, δ(q_s, 0, q) = 1/2, δ(q_s, 0, q_s) = 1/2, δ(q, 0, q) = 1, δ(q, 1, q_s) = 1 and δ(q_r, 0, q_r) = δ(q_r, 1, q_r) = 1.
Now, note that any word α ∈ Σ^ω that contains two consecutive occurrences of 1 is rejected by ℳ with probability 1. Also note that the word 0^ω is accepted by ℳ with probability 1. Thus, ℒ_{<1}(ℳ) ⊆ {0^ω} ∪ {0^{n_1}1 0^{n_2}1 … 0^{n_k}1 0^ω | n_1, n_2, …, n_k > 0} ∪ {0^{n_1}1 0^{n_2}1 0^{n_3}1 … | n_i > 0 for each i}. Now, given n_1, n_2, …, n_k > 0 and 1 ≤ i ≤ k, let u_i = 0^{n_1}1 0^{n_2}1 … 1 0^{n_i} and v_i = 0^{n_1}1 0^{n_2}1 … 1 0^{n_i}1. We make the following observations:

(1) δ_{v_i}(q_s, q_s) = δ_{u_i}(q_s, q_s) × δ_1(q_s, q_s) + δ_{u_i}(q_s, q) × δ_1(q, q_s) = δ_{u_i}(q_s, q).
(2) δ_{v_i}(q_s, q) = δ_{u_i}(q_s, q_s) × δ_1(q_s, q) + δ_{u_i}(q_s, q) × δ_1(q, q) = 0.
(3) δ_{u_{i+1}}(q_s, q_s) = δ_{v_i}(q_s, q_s) × δ_{0^{n_{i+1}}}(q_s, q_s) + δ_{v_i}(q_s, q) × δ_{0^{n_{i+1}}}(q, q_s) = δ_{v_i}(q_s, q_s) × (1/2)^{n_{i+1}}.
(4) δ_{u_{i+1}}(q_s, q) = δ_{v_i}(q_s, q_s) − δ_{u_{i+1}}(q_s, q_s) = δ_{v_i}(q_s, q_s) × (1 − (1/2)^{n_{i+1}}).
(5) Thus, we get that δ_{v_{i+1}}(q_s, q_s) = δ_{v_i}(q_s, q_s) × (1 − (1/2)^{n_{i+1}}) = ∏_{j=1}^{i+1} (1 − (1/2)^{n_j}) and δ_{v_{i+1}}(q_s, q) = 0.

Therefore, any word of the form 0^{n_1}1 0^{n_2}1 … 0^{n_k}1 0^ω is accepted by ℳ with probability ∏_{j=1}^{k} (1 − (1/2)^{n_j}) > 0. Furthermore, the probability of accepting a word of the form 0^{n_1}1 0^{n_2}1 … is ∏_{i=1}^{∞} (1 − (1/2)^{n_i}). Thus, it can be easily checked that ℒ_{<1}(ℳ) is the union of two disjoint languages ℒ_1 = 00*(100*)* 0^ω and ℒ_2 = {0^{n_1}1 0^{n_2}1 0^{n_3}1 … | n_i > 0 for each i and ∏_{i=1}^{∞} (1 − (1/2)^{n_i}) > 0}. Now ℒ_1 is an ω-regular language, but ℒ_2 is not. Thus, ℒ_{<1}(ℳ) is not an ω-regular language.
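The observations above can be checked mechanically by multiplying out the transition matrices of this three-state monitor. The sketch below is our own transcription of δ into matrices, not part of the proof; it confirms that δ_{v_k}(q_s, q_s) = ∏_{j=1}^{k} (1 − (1/2)^{n_j}).

```python
import numpy as np

# States indexed 0: q_s, 1: q, 2: q_r (the absorbing reject state).
M0 = np.array([[0.5, 0.5, 0.0],   # on 0: q_s stays or moves to q, each w.p. 1/2
               [0.0, 1.0, 0.0],   # q loops on 0
               [0.0, 0.0, 1.0]])  # q_r is absorbing
M1 = np.array([[0.0, 0.0, 1.0],   # on 1: q_s rejects
               [1.0, 0.0, 0.0],   # q returns to q_s
               [0.0, 0.0, 1.0]])

def delta(word):
    """Matrix delta_word: entry (i, j) is Pr[reach state j from state i on word]."""
    out = np.eye(3)
    for c in word:
        out = out @ (M0 if c == '0' else M1)
    return out

ns = [1, 2, 3]
v = ''.join('0' * n + '1' for n in ns)       # v_k = 0^{n_1} 1 ... 0^{n_k} 1
expected = np.prod([1 - 0.5 ** n for n in ns])
print(delta(v)[0, 0], expected)              # both print 0.328125
print(delta('11')[0, 2])                     # two consecutive 1s reject w.p. 1.0
```

The same routine can be used to experiment with how slowly the blocks 0^{n_i} must grow for the infinite product to stay positive.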
We will shortly show that the class M_WA is strictly contained in the class M_SC. Before we proceed, however, we will need the following lemma, which shows that even if a language ℒ ∈ M_WA is not ω-regular, the languages cl(ℒ) and cl(Σ^ω − ℒ) must be ω-regular. Recall that for a language ℒ_1, cl(ℒ_1) is the smallest safety language containing ℒ_1. Hence, ω-regularity of cl(ℒ_1) and cl(Σ^ω − ℒ_1) is a necessary (but not sufficient) condition for an almost safety language ℒ_1 to belong to M_WA.

LEMMA 5.9. Let ℒ ∈ M_WA. Then the safety languages cl(ℒ) and cl(Σ^ω − ℒ) are ω-regular. There is an almost safety language ℒ_1 such that cl(ℒ_1) and cl(Σ^ω − ℒ_1) are ω-regular but ℒ_1 ∉ M_WA.
PROOF. Let ℳ = (Q, q_s, q_r, δ) be such that ℒ = ℒ_{<1}(ℳ). For q ∈ Q − {q_r}, let ℳ_q = (Q, q, q_r, δ), i.e., the FPM obtained by making q the initial state.
We first show that cl(ℒ) is ω-regular. Note that if ℒ is empty, then this is trivially true. Let ℒ be non-empty, and let Q_0 = {q ∈ Q | q ≠ q_r and there is some α s.t. μ^acc_{ℳ_q, α} > 0}. Consider the Büchi automaton ℬ = (Q_0, Δ, q_s, Q_0) where Δ(q, a, q′) iff δ(q, a, q′) > 0. We claim that the language ℒ(ℬ) recognized by ℬ is the set cl(ℒ_{<1}(ℳ)). Note that since the set of accepting states of ℬ is the set of all states of ℬ, the language ℒ(ℬ) is a safety language. Hence, in order to show that ℒ(ℬ) = cl(ℒ_{<1}(ℳ)), it suffices to show that ℒ(ℬ) ⊆ cl(ℒ_{<1}(ℳ)) and ℒ_{<1}(ℳ) ⊆ ℒ(ℬ).

We first show that ℒ(ℬ) ⊆ cl(ℒ_{<1}(ℳ)). Let α = a_1 a_2 … ∈ ℒ(ℬ) and for x > 0 let B(α, x) be the open ball of radius x centered at α. Pick k > 0 such that 1/2^k < x. Clearly, a_1 a_2 … a_k Σ^ω ⊆ B(α, x). Therefore, it suffices to show that a_1 a_2 … a_k Σ^ω ∩ ℒ_{<1}(ℳ) ≠ ∅. Now since α ∈ ℒ(ℬ), there are k + 1 states q_0 = q_s, q_1, …, q_k ∈ Q_0 such that (q_0, a_1, q_1), (q_1, a_2, q_2), …, (q_{k−1}, a_k, q_k) ∈ Δ. By definition δ(q_i, a_{i+1}, q_{i+1}) > 0 for each 0 ≤ i < k, and there is a word β such that μ^acc_{ℳ_{q_k}, β} > 0. Consider the word α_1 = a_1 a_2 … a_k β. It can be easily shown that μ^acc_{ℳ, α_1} > 0. Thus a_1 a_2 … a_k Σ^ω ∩ ℒ_{<1}(ℳ) ≠ ∅.
Now we show that ℒ_{<1}(ℳ) ⊆ ℒ(ℬ). Pick α = a_1 a_2 … ∈ ℒ_{<1}(ℳ) and fix it. In order to show that α ∈ ℒ(ℬ), by König's lemma, it suffices to show that for each i > 1 there is a path in ℬ from q_s on the input symbols a_1, a_2, …, a_{i−1}. Let α_i = a_i a_{i+1} … and u_i = a_1 a_2 … a_{i−1}. We have by definition μ^acc_{ℳ, α} > 0. Note that it can be shown that for each i,

    μ^acc_{ℳ, α} = ∑_{q ∈ Q − {q_r}} δ_{u_i}(q_s, q) μ^acc_{ℳ_q, α_i}.

Now since μ^acc_{ℳ_q, β} = 0 for every q ∈ Q − (Q_0 ∪ {q_r}) and β ∈ Σ^ω, we get that

    μ^acc_{ℳ, α} = ∑_{q ∈ Q_0} δ_{u_i}(q_s, q) μ^acc_{ℳ_q, α_i}.

If there is no path on input a_1 a_2 … a_{i−1} in the automaton ℬ, then δ_{u_i}(q_s, q) = 0 for every q ∈ Q_0, which contradicts μ^acc_{ℳ, α} > 0. Hence, there is a path in ℬ on the input symbols a_1, a_2, …, a_{i−1}.
Now, in order to show that cl(Σ^ω − ℒ_{<1}(ℳ)) is ω-regular, we can assume that ℒ_{<1}(ℳ) ≠ Σ^ω (otherwise, cl(Σ^ω − ℒ_{<1}(ℳ)) = ∅, which is ω-regular). We construct the Büchi automaton ℬ = (S, Δ, {q_s}, S), where S ⊆ ℘(Q) (the power-set of Q) and Δ are defined as follows. A set Q_1 ⊆ Q belongs to S iff there is a β ∈ Σ^ω such that for every q ∈ Q_1, μ^rej_{ℳ_q, β} = 1. Note that {q_s} ∈ S. Now (Q_1, a, Q_2) ∈ Δ iff post(Q_1, a) = {q′ | ∃q ∈ Q_1 s.t. δ(q, a, q′) > 0} ⊆ Q_2. We can once again show that cl(Σ^ω − ℒ_{<1}(ℳ)) = ℒ(ℬ).
However, the ω-regularity of cl(ℒ) and cl(Σ^ω − ℒ) is not sufficient to guarantee inclusion of ℒ in M_WA. Let Σ = {0, 1}. Consider the language ℒ_2 = {0, 1}* 11 {0, 1}^ω. The set ℒ_2 is an open set and hence an almost safety language. Let ℒ_1 = ℒ_2 ∪ {010^2 10^3 1…}. Now, the set {010^2 10^3 1…} is a closed set and hence an almost safety language. It can be easily shown that cl(ℒ_1) = Σ^ω and cl({0, 1}^ω − ℒ_1) = {0, 1}^ω − ℒ_2, both of which are ω-regular.

Now, we show by contradiction that ℒ_1 ≠ ℒ_{<1}(ℳ′) for any FPM ℳ′. Suppose there is an ℳ′ such that ℒ_1 = ℒ_{<1}(ℳ′). Then, by Lemma 4.7, there are real numbers a_0, …, a_{m−1} ∈ ℝ such that for all v ∈ Σ⁺ and α ∈ Σ^ω,

    μ^rej_{ℳ′, v0^m α} = a_{m−1} μ^rej_{ℳ′, v0^{m−1} α} + … + a_0 μ^rej_{ℳ′, vα}

and a_{m−1} + … + a_0 = 1.
Now, pick v_1 = 010^2 1 … 1 0^{m+1} 1 and α_1 = 0^2 1 0^{m+3} 1 0^{m+4} 1 …. We get

    μ^rej_{ℳ′, v_1 0^m α_1} = a_{m−1} μ^rej_{ℳ′, v_1 0^{m−1} α_1} + … + a_0 μ^rej_{ℳ′, v_1 α_1}.

Now v_1 0^i α_1 ∈ Σ^ω − ℒ_1 for all 0 ≤ i < m. Hence, μ^rej_{ℳ′, v_1 0^i α_1} = 1 for all 0 ≤ i < m, and we get

    μ^rej_{ℳ′, v_1 0^m α_1} = a_{m−1} + … + a_0 = 1.

However, v_1 0^m α_1 = 010^2 10^3 … ∈ ℒ_1 and therefore

    μ^rej_{ℳ′, v_1 0^m α_1} < 1.

Thus, we have arrived at a contradiction.
We are now ready to show that the class M_SC strictly contains the class M_WA.

THEOREM 5.10. M_WA ⊊ M_SC.
PROOF. We start by showing that M_WA ⊆ M_SC. Given an FPM ℳ and x ∈ (0, 1), let ℳ′ = ℳ_x be the FPM defined in Proposition 4.4 such that μ^rej_{ℳ′, α} = x · μ^rej_{ℳ, α} for every word α. It follows easily that ℒ_{<1}(ℳ) = ℒ_{<x}(ℳ′). Hence, M_WA ⊆ M_SC.

In order to show that the containment is strict, construct ℳ as in the proof of Theorem 5.5 such that for every α, μ^rej_{ℳ, α} = 1 − (bin(α))^2. Thus, ℒ_{<1/2}(ℳ) = {α | bin(α) > 1/√2}. Now, it can be shown that cl(ℒ_{<1/2}(ℳ)) = {α | bin(α) ≥ 1/√2}, which is not ω-regular by Proposition 4.8. Thus, if there is some FPM ℳ′ such that ℒ_{<1}(ℳ′) = ℒ_{<1/2}(ℳ), then cl(ℒ_{<1}(ℳ′)) is not ω-regular, which contradicts Lemma 5.9.
Finally, we show that the class M_SC is strictly contained in the class of almost safety languages. The proof is similar to the proof of Theorem 5.6, which showed that there is a safety language ℒ that is not contained in M_NC. Indeed, the same safety language used in the proof of Theorem 5.6 (any safety language is also an almost safety language) witnesses the strictness of the containment M_SC ⊊ AlmostSafety.
THEOREM 5.11. M_SC ⊊ AlmostSafety.
PROOF. Note that M_SC ⊆ AlmostSafety as a consequence of Theorem 5.7. Consider the safety language ℒ defined in the proof of Theorem 5.6, as follows. Let L = {0^n 1(0*1)* 0^n 1 | n ∈ ℕ, n > 0}, let ℒ_1 = LΣ^ω and let ℒ = Σ^ω − ℒ_1. We can show that ℒ ∉ M_SC by an argument similar to the proof of ℒ ∉ M_NC sketched in Theorem 5.6.
Summarizing the results of this section, we have Regular ∩ AlmostSafety ⊊ M_WA ⊊ M_SC ⊊ AlmostSafety. Note that since M_SA coincides with the ω-regular safety languages, M_SA is strictly contained in M_WA and M_SC. Also, since there are ω-regular almost safety languages which are not safety, it follows immediately that neither M_WA nor M_SC is contained in M_NC. A natural question to ask, therefore, is whether M_NC ⊆ M_SC. We will answer this question in the negative in Section 5.3, as the proof requires a result which we will prove there.
5.3 Robust Monitors
In this section, we will show that M_NC ⊄ M_SC. The proof will utilize a result on robust monitors. Robust monitors are FPMs for which there is a separation between the probability of rejecting a word in the language permitted by the monitor and the probability of rejecting a word in the language rejected by the monitor. Formally:

Definition: An FPM ℳ on Σ is said to be x-robust for x ∈ (0, 1) if there is an ε > 0 such that for every α ∈ Σ^ω, |μ^rej_{ℳ, α} − x| > ε.
Observe that if ℳ is x-robust then the languages ℒ_{<x}(ℳ) and ℒ_{≤x}(ℳ) coincide. Robustness is a generalization of the concept of isolated cut-points defined for probabilistic automata over finite words [Rabin 1963]: x is said to be an isolated cut-point for a probabilistic automaton over finite words if there is an ε > 0 such that for every finite word u the probability of accepting u is bounded away from x by ε. It was shown in [Rabin 1963] that if x is an isolated cut-point then the language of finite words accepted by the probabilistic automaton is a regular language. We can generalize this result to probabilistic monitors and demonstrate that the language ℒ_{<x}(ℳ) is an ω-regular safety language. The proof relies on the fact that ℒ_{≤x}(ℳ) is a safety language, which implies that ω-regularity of ℒ_{≤x}(ℳ) is equivalent to regularity of the set of its finite prefixes (see Section 3). This observation is crucial in the proof we present, whereas the result in [Rabin 1963] does not depend on any such topological consideration. The proof that the set of finite prefixes of ℒ_{≤x}(ℳ) is regular, however, follows an argument similar to the result in [Rabin 1963]. The proof depends on the following result, proved in [Rabin 1963].
PROPOSITION 5.12. Given n ∈ ℕ, n > 0, let 𝒫_n ⊆ ℝ^n be the set of vectors defined as {(p_1, …, p_n) | 0 ≤ p_i ≤ 1, ∑_{i=1}^{n} p_i = 1}. Given ε ∈ ℝ, ε > 0, let 𝒰 ⊆ 𝒫_n be a set such that for any distinct (p_1, …, p_n), (p′_1, …, p′_n) ∈ 𝒰, ∑_{i=1}^{n} |p_i − p′_i| > ε. Then 𝒰 must be finite.
Using the above observation, we can show:

THEOREM 5.13. Let ℳ be x-robust for some x ∈ (0, 1). Then ℒ_{<x}(ℳ) = ℒ_{≤x}(ℳ) is an ω-regular safety language.
PROOF. Let ℳ = (Q, q_s, q_r, δ) be x-robust and let ε > 0 be such that |μ^rej_{ℳ, α} − x| > ε for all α ∈ Σ^ω. Let ℒ = ℒ_{≤x}(ℳ) and let L ⊆ Σ* be the set of finite prefixes of ℒ; in other words, L = {u ∈ Σ* | ∃α ∈ Σ^ω s.t. uα ∈ ℒ}.

Since ℒ is a safety language, in order to prove that ℒ is ω-regular, it suffices to show that L is a regular language (viewed as a language of finite words). We will demonstrate that L is regular by showing that it has finite Myhill–Nerode index, i.e., that the number of equivalence classes of ≡_L is finite. Recall that for two finite words u_1, u_2 ∈ Σ*, u_1 ≡_L u_2 if for all v ∈ Σ*, u_1 v ∈ L iff u_2 v ∈ L.
Now assume that u_1 and u_2 are two finite words such that u_1 ≢_L u_2. Then there is some v ∈ Σ* such that either u_1 v ∈ L and u_2 v ∉ L, or u_1 v ∉ L and u_2 v ∈ L.

First, consider the case u_1 v ∈ L and u_2 v ∉ L. Since u_1 v ∈ L, there is some α ∈ Σ^ω such that u_1 vα ∈ ℒ. Pick one such word, say α_0, and fix it. Thus u_1 vα_0 ∈ ℒ. Also, since u_2 v ∉ L, u_2 vα_0 ∉ ℒ. By the definition of ℒ, we get μ^rej_{ℳ, u_1 vα_0} ≤ x and μ^rej_{ℳ, u_2 vα_0} > x. Since x is an isolated cut-point, we get μ^rej_{ℳ, u_1 vα_0} < x − ε and μ^rej_{ℳ, u_2 vα_0} > x + ε. By Lemma 4.1, there exists a finite prefix of α_0, say v′, such that δ_{u_1 vv′}(q_s, q_r) < x − ε and δ_{u_2 vv′}(q_s, q_r) > x + ε. Thus, δ_{u_2 vv′}(q_s, q_r) − δ_{u_1 vv′}(q_s, q_r) > 2ε. Now,

    δ_{u_2 vv′}(q_s, q_r) − δ_{u_1 vv′}(q_s, q_r) = ∑_{q ∈ Q} (δ_{u_2}(q_s, q) − δ_{u_1}(q_s, q)) δ_{vv′}(q, q_r),

so we get that ∑_{q ∈ Q} (δ_{u_2}(q_s, q) − δ_{u_1}(q_s, q)) δ_{vv′}(q, q_r) > 2ε. We also have that

    ∑_{q ∈ Q} (δ_{u_2}(q_s, q) − δ_{u_1}(q_s, q)) δ_{vv′}(q, q_r) ≤ ∑_{q ∈ Q} |δ_{u_2}(q_s, q) − δ_{u_1}(q_s, q)| |δ_{vv′}(q, q_r)|.

Since 0 ≤ δ_{vv′}(q, q_r) ≤ 1, we get ∑_{q ∈ Q} |δ_{u_2}(q_s, q) − δ_{u_1}(q_s, q)| ≥ ∑_{q ∈ Q} (δ_{u_2}(q_s, q) − δ_{u_1}(q_s, q)) δ_{vv′}(q, q_r) > 2ε. Similarly, if u_1 v ∉ L and u_2 v ∈ L then ∑_{q ∈ Q} |δ_{u_2}(q_s, q) − δ_{u_1}(q_s, q)| > 2ε.

Therefore, if u_1 ≢_L u_2, it must be the case that ∑_{q ∈ Q} |δ_{u_2}(q_s, q) − δ_{u_1}(q_s, q)| > 2ε. Also, note that ∑_{q ∈ Q} δ_{u_1}(q_s, q) = ∑_{q ∈ Q} δ_{u_2}(q_s, q) = 1. By Proposition 5.12, it follows that there can only be a finite number of equivalence classes.
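The crux of the argument is that inequivalent prefixes induce state distributions δ_u(q_s, ·) that are far apart in the L1 norm, and Proposition 5.12 caps how many such distributions can coexist. The sketch below illustrates this on a tiny, hypothetical two-state robust monitor of our own (state 0 is q_s, state 1 the absorbing reject state): only finitely many distinct distribution vectors ever arise.

```python
import itertools
import numpy as np

# A hypothetical monitor over {a, b}: reading 'b' rejects with probability 1,
# reading 'a' keeps the monitor in q_s.
M = {'a': np.array([[1.0, 0.0], [0.0, 1.0]]),
     'b': np.array([[0.0, 1.0], [0.0, 1.0]])}

def distribution(word):
    """delta_u(q_s, .): the state distribution after reading the finite word u."""
    d = np.array([1.0, 0.0])            # start in q_s with probability 1
    for c in word:
        d = d @ M[c]
    return tuple(d.round(12))

# Collect the distributions induced by all words of length <= 8.
seen = {distribution(w) for n in range(9)
        for w in map(''.join, itertools.product('ab', repeat=n))}
print(sorted(seen))   # only two distinct vectors: (0, 1) and (1, 0)
```

For this monitor the Myhill–Nerode index of the prefix language is correspondingly small; a monitor that were not robust could instead produce infinitely many distributions drifting arbitrarily close together.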
The above proof is sound even if the rejection probabilities are bounded away from the cut-point on only one side. Therefore, we get the following corollary.

COROLLARY 5.14. Let ℳ be a monitor for which there is an ε > 0 such that for each α ∈ Σ^ω either μ^rej_{ℳ, α} = 1 or μ^rej_{ℳ, α} ≤ 1 − ε. Then ℒ_{<1}(ℳ) ∈ Regular ∩ Safety.

PROOF. Follows immediately from the fact that ℳ is (1 − ε/2)-robust and ℒ_{<1}(ℳ) = ℒ_{<1−ε/2}(ℳ) ∈ Regular ∩ Safety.
Now, we are ready to show that M_SC and M_NC are incomparable.

THEOREM 5.15. M_SC ⊄ M_NC and M_NC ⊄ M_SC.

PROOF. Observe that since M_SC contains almost safety languages that are not safety languages, we get that M_SC ⊄ M_NC. In order to show that M_NC ⊄ M_SC, let Σ = {0, 1}. We will show that there is a RatFPM ℳ on Σ such that for any FPM ℳ′ on Σ and any x ∈ [0, 1], ℒ_{≤15/16}(ℳ) ≠ ℒ_{<x}(ℳ′).
Now, by repeated use of Lemma 4.9, Proposition 4.5 and Proposition 4.6, we can construct a RatFPM ℳ such that for all α ∈ Σ^ω, μ^rej_{ℳ, α} = 1 − (bin(α))^4 (1 − bin(α)^2)^2. Now, for a word α,

    α ∈ ℒ_{≤15/16}(ℳ) ⇔ μ^rej_{ℳ, α} ≤ 15/16 ⇔ 1 − μ^rej_{ℳ, α} ≥ 1/16 ⇔ 1/16 ≤ (bin(α))^4 (1 − bin(α)^2)^2 ⇔ 1/4 ≤ bin(α)^2 (1 − bin(α)^2) ⇔ 0 ≤ −1/4 + bin(α)^2 (1 − bin(α)^2) ⇔ 0 ≤ −(1/2 − bin(α)^2)^2 ⇔ bin(α) = 1/√2.

Now there is only one word β, namely wrd(1/√2) (the "binary expansion" of 1/√2), such that bin(β) = 1/√2. Thus ℒ_{≤15/16}(ℳ) = {wrd(1/√2)}. Now, ℒ_{≤15/16}(ℳ) is not ω-regular, since every non-empty ω-regular language must contain an ultimately periodic word [Perrin and Pin 2004; Thomas 1990], and wrd(1/√2) is not ultimately periodic (as 1/√2 is irrational).
We proceed by contradiction. Suppose there is some ℳ′ and x such that ℒ_{≤15/16}(ℳ) = ℒ_{<x}(ℳ′). Let y = μ^rej_{ℳ′, wrd(1/√2)}. By definition y < x, and for all words α ≠ wrd(1/√2), μ^rej_{ℳ′, α} ≥ x. Let x_1 = (x + y)/2. Clearly ℳ′ is x_1-robust. Thus, by Theorem 5.13, ℒ_{<x_1}(ℳ′) is ω-regular. This contradicts the fact that ℒ_{<x_1}(ℳ′) = ℒ_{≤15/16}(ℳ) = {wrd(1/√2)} is not ω-regular.
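As a quick numerical sanity check (our own, not from the paper), one can verify that the function b ↦ 1 − b⁴(1 − b²)² attains its minimum value 15/16 exactly at b = 1/√2, so the cut-point 15/16 indeed isolates the single word wrd(1/√2):

```python
import numpy as np

# mu^rej as a function of b = bin(alpha), from the proof of Theorem 5.15.
f = lambda b: 1 - b ** 4 * (1 - b ** 2) ** 2

b = np.linspace(0.0, 1.0, 1_000_001)   # grid over all possible values of bin(alpha)
i = np.argmin(f(b))
print(b[i], f(b).min())                # minimum 15/16 = 0.9375, near b = 0.7071
```

Every other value of bin(α) yields a rejection probability strictly above 15/16, which is exactly the separation that makes ℳ′ robust in the proof.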
6. DECISION PROBLEMS

In this section, we consider the problems of checking emptiness and universality of RatFPMs (with rational cut-points). Our results for these problems are summarized in Figure 6.
6.1 Emptiness Problem: Upper Bounds

In this section we will prove the upper bounds for the emptiness problem for monitors in the classes M_SA, M_WA, M_SC, and M_NC. We will assume that the automaton given as input to the emptiness problem is a RatFPM, i.e., a probabilistic monitor with rational transition
        EMPTINESS                                   UNIVERSALITY
M_SA    PSPACE-complete (Theorems 6.1 and 6.6)      NL-complete (Theorems 6.10 and 6.16)
M_WA    PSPACE-complete (Theorems 6.4 and 6.6)      PSPACE-complete (Theorems 6.13 and 6.17)
M_SC    co-R.E.-complete (Theorems 6.4 and 6.8)     Π^1_1-complete (Theorems 6.15 and 6.19)
M_NC    R.E.-complete (Theorems 6.5 and 6.9)        co-R.E.-complete (Theorems 6.14 and 6.18)

Fig. 6. Table summarizing the complexity of the emptiness and universality problems for the various classes of monitors.
probabilities.
M_SA: We begin by considering the class of monitors in M_SA. We show that the emptiness problem is in PSPACE by reducing it to the universality problem for Büchi automata.

THEOREM 6.1. Let ℳ = (Q, q_s, q_r, δ) be a RatFPM on an alphabet Σ. The problem of checking the emptiness of ℒ_{≤0}(ℳ) is in PSPACE.
PROOF. We construct a non-deterministic Büchi automaton ℬ which is essentially the same as ℳ, except that the transition probabilities are discarded. Formally, ℬ = (Q, Δ, q_s, {q_r}), where Δ = {(q, a, q′) | δ(q, a, q′) > 0}. Note that q_r is the only accepting state of the Büchi automaton ℬ. It is easy to see that ℒ_{≤0}(ℳ) = ∅ iff ℒ(ℬ) = Σ^ω, where ℒ(ℬ) is the language accepted by the Büchi automaton ℬ. The universality problem for Büchi automata is known to be in PSPACE, and hence the result follows.
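The construction in the proof amounts to taking the support of δ. A minimal sketch (the encoding of δ as a dictionary of rational probabilities, and the state names, are our own conventions):

```python
from fractions import Fraction

# A hypothetical RatFPM over {a, b}: delta maps (state, symbol, state) to a
# rational probability; missing triples have probability 0.
delta = {
    ('qs', 'a', 'qs'): Fraction(1, 2),
    ('qs', 'a', 'qr'): Fraction(1, 2),
    ('qs', 'b', 'qs'): Fraction(1, 1),
    ('qr', 'a', 'qr'): Fraction(1, 1),
    ('qr', 'b', 'qr'): Fraction(1, 1),
}

# Discard the probabilities: the Buchi automaton keeps exactly the
# positive-probability triples as its transition relation.
buchi_edges = {t for t, p in delta.items() if p > 0}
print(sorted(buchi_edges))
```

With q_r as the sole accepting state, universality of this Büchi automaton can then be handed off to any standard ω-automata tool.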
M_WA and M_SC: Next we establish that for monitors in M_WA and M_SC, the emptiness problem is in PSPACE and in co-R.E., respectively. The proofs of both upper bounds rely on a technical lemma that we state and prove before presenting them. We begin with some definitions that we will need.

Definition: For an FPM ℳ = (Q, q_s, q_r, δ), the deterministic graph associated with it is the edge-labeled multi-graph (2^Q, E), where E ⊆ 2^Q × Σ × 2^Q is defined as follows: (V, a, V′) ∈ E iff V′ = {q′ ∈ Q | ∃q ∈ V s.t. δ(q, a, q′) > 0}. We denote the deterministic graph of ℳ by G(ℳ).

A vertex V ∈ 2^Q of G(ℳ) is said to be good if q_r ∉ V. A cycle in G(ℳ) is a sequence of edges (V_0, a_0, V_1), (V_1, a_1, V_2), …, (V_{m−1}, a_{m−1}, V_m) such that V_0 = V_m; V_0 is the starting vertex of the cycle, and a_0 … a_{m−1} is the input sequence associated with the cycle. Finally, we say that the cycle is good if all the vertices on it are good.
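Concretely, G(ℳ) is the subset construction applied to the support of δ. A short sketch (the dictionary encoding and state names are our own conventions) builds the part of G(ℳ) reachable from {q_s}:

```python
def post(delta, V, a):
    """Successor vertex of V in the deterministic graph G(M) on symbol a."""
    return frozenset(q2 for (q1, b, q2), p in delta.items()
                     if b == a and q1 in V and p > 0)

def deterministic_graph(delta, sigma, qs):
    """Edges of G(M) restricted to the vertices reachable from {qs}."""
    start = frozenset({qs})
    edges, seen, todo = {}, {start}, [start]
    while todo:
        V = todo.pop()
        for a in sigma:
            W = post(delta, V, a)
            edges[(V, a)] = W
            if W not in seen:
                seen.add(W)
                todo.append(W)
    return edges

# Example: a hypothetical two-state monitor where q_r is absorbing and is
# reached from q_s with probability 1/2 on input a.
delta = {('qs', 'a', 'qs'): 0.5, ('qs', 'a', 'qr'): 0.5,
         ('qr', 'a', 'qr'): 1.0}
edges = deterministic_graph(delta, 'a', 'qs')
print(edges)   # {qs} steps to {qs, qr}, which is a fixed point
```

The vertex {qs, qr} here is not good (it contains q_r), illustrating how the good-cycle condition of the upcoming lemma can fail.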
LEMMA 6.2. Let ℳ = (Q, q_s, q_r, δ) be an FPM on an alphabet Σ, and x ∈ (0, 1]. The language ℒ_{<x}(ℳ) is non-empty iff there exist a finite word u and a set C ⊆ Q such that C lies on a good cycle in G(ℳ) and δ_u(q_s, C) > 1 − x, where δ_u(q_s, C) = ∑_{q′ ∈ C} δ_u(q_s, q′).
PROOF. We prove the "if" part of the lemma as follows. Let u, C be such that δ_u(q_s, C) > 1 − x and C lies on a good cycle in G(ℳ). Let 𝒞 be the good cycle on which C lies. Without loss of generality, we assume that C is the starting vertex of 𝒞 and that v is the finite input sequence associated with 𝒞. Since 𝒞 is a good cycle and δ_u(q_s, C) > 1 − x, it is not difficult to see that, for α = uv^ω, μ^acc_{ℳ, α} > 1 − x, and hence μ^rej_{ℳ, α} < x. Thus α ∈ ℒ_{<x}(ℳ).

The "only if" part of the lemma is proved as follows. Assume that for some α ∈ Σ^ω, μ^rej_{ℳ, α} < x. This means μ^acc_{ℳ, α} > 1 − x. Let α = a_1 a_2 … and r = 2^{|Q|}. Note that
the number of vertices of G(ℳ) is bounded by r. For every i ≥ 1, let B_i be the set of all q′ ∈ Q such that δ_{α[i+1:i+r]}(q′, q_r) > 0; i.e., B_i is the set of all states from which q_r can be reached on the input sequence α[i + 1 : i + r].

Claim: There exists an i ≥ 1 such that δ_{α[1:i]}(q_s, B_i) < x.

Before proving the above claim, let us observe that the "only if" part of the lemma follows from it. Let i ≥ 1 be such that δ_{α[1:i]}(q_s, B_i) < x. Let C_i be the set of all q′ ∈ Q − B_i such that δ_{α[1:i]}(q_s, q′) > 0. It is easy to see that δ_{α[1:i]}(q_s, C_i) > 1 − x. Note that from each of the states in C_i, the state q_r cannot be reached in the automaton ℳ on the input sequence α[i + 1 : i + r]. From this it follows that the vertices C_i, C_{i+1}, …, C_{i+r} of G(ℳ) reached on the input sequence α[i + 1 : i + r] starting from C_i are all good. Since r equals the number of vertices of G(ℳ), by the pigeonhole principle the above sequence contains a cycle, and it is a good cycle. Furthermore, we see that δ_{α[1:j]}(q_s, C_j) > 1 − x for every j such that i ≤ j ≤ i + r. Thus, the "only if" part of the lemma follows.
Proof of the claim: We conclude by proving the claim by contradiction. Assume the claim is not true. This means that for every i ≥ 1, δ_{α[1:i]}(q_s, B_i) ≥ x. Let i_0 < i_1 < … < i_ℓ < … be the infinite sequence of natural numbers such that for each ℓ ≥ 0, i_ℓ = ℓ · r. Clearly, for each ℓ > 0, δ_{α[1:i_ℓ]}(q_s, B_{i_ℓ}) ≥ x and i_{ℓ+1} = i_ℓ + r. Let c be the value min{δ_{u′}(q′, q_r) | u′ ∈ Σ^r, δ_{u′}(q′, q_r) > 0}; note that this value is well defined and > 0. Let F_0 = 0 and for every ℓ > 0, let F_ℓ = δ_{α[1:i_ℓ]}(q_s, q_r), i.e., the probability that ℳ is in state q_r after the finite prefix α[1 : i_ℓ]. We claim that for each ℓ > 0,

    F_{ℓ+1} ≥ F_ℓ + (x − F_ℓ)c = (1 − c)F_ℓ + xc.

In order to see this, first observe that q_r ∈ B_i for each i > 0. Also, note that it must be the case that q_s ∈ B_1 (otherwise the claim is simply true). Therefore, F_1 = δ_{α[1:i_1]}(q_s, q_r) ≥ c ≥ 0 + (x − 0)c = F_0 + (x − F_0)c.
For each q ∈ Q and ℓ > 0, let the probability of being in state q after reading the input α[1 : i_ℓ] be denoted by p_{i_ℓ}(q). Note that p_{i_ℓ}(q) = δ_{α[1:i_ℓ]}(q_s, q), p_{i_ℓ}(q_r) = F_ℓ and ∑_{q ∈ B_{i_ℓ}} p_{i_ℓ}(q) = δ_{α[1:i_ℓ]}(q_s, B_{i_ℓ}) ≥ x. Also, for each q ∈ Q and ℓ > 0, let p_{i_{ℓ+1}}(q_r | q) be the probability of being in state q_r after reading the input α[i_ℓ + 1 : i_{ℓ+1}] having started in the state q; in other words, let p_{i_{ℓ+1}}(q_r | q) = δ_{α[i_ℓ+1:i_{ℓ+1}]}(q, q_r). Note that p_{i_{ℓ+1}}(q_r | q_r) = 1, p_{i_{ℓ+1}}(q_r | q) ≥ c for each q ∈ B_{i_ℓ}, and p_{i_{ℓ+1}}(q_r | q) = 0 for all q ∈ Q − B_{i_ℓ}. It follows from the definitions that for each ℓ ≥ 1,

    F_{ℓ+1} = δ_{α[1:i_{ℓ+1}]}(q_s, q_r)
            = ∑_{q ∈ Q} p_{i_ℓ}(q) p_{i_{ℓ+1}}(q_r | q)
            = p_{i_ℓ}(q_r) p_{i_{ℓ+1}}(q_r | q_r) + ∑_{q ∈ B_{i_ℓ} − {q_r}} p_{i_ℓ}(q) p_{i_{ℓ+1}}(q_r | q) + ∑_{q ∈ Q − B_{i_ℓ}} p_{i_ℓ}(q) p_{i_{ℓ+1}}(q_r | q)
            = F_ℓ + ∑_{q ∈ B_{i_ℓ} − {q_r}} p_{i_ℓ}(q) p_{i_{ℓ+1}}(q_r | q) + 0
            ≥ F_ℓ + ∑_{q ∈ B_{i_ℓ} − {q_r}} p_{i_ℓ}(q) c
            ≥ F_ℓ + c ((∑_{q ∈ B_{i_ℓ}} p_{i_ℓ}(q)) − p_{i_ℓ}(q_r))
            ≥ F_ℓ + c (x − F_ℓ).
Thus, we have that for each ℓ, F_{ℓ+1} ≥ F_ℓ + (x − F_ℓ)c = (1 − c)F_ℓ + xc. By induction on ℓ, it follows that

    F_{ℓ+1} ≥ (1 − c)^{ℓ+1} F_0 + xc ∑_{0 ≤ i ≤ ℓ} (1 − c)^i.

From this, we see that

    lim_{ℓ→∞} F_ℓ ≥ xc ∑_{0 ≤ i < ∞} (1 − c)^i = x.

Hence μ^rej_{ℳ, α} ≥ x, which contradicts the fact that α ∈ ℒ_{<x}(ℳ).
An immediate consequence of the proof of Lemma 6.2 is that non-emptiness of ℒ_{<x}(ℳ) for an FPM ℳ implies that there is an ultimately periodic word in ℒ_{<x}(ℳ).

COROLLARY 6.3. Let ℳ = (Q, q_s, q_r, δ) be an FPM on an alphabet Σ, and x ∈ (0, 1]. ℒ_{<x}(ℳ) ≠ ∅ if and only if there exist u, v ∈ Σ* such that uv^ω ∈ ℒ_{<x}(ℳ).
We thus obtain the upper bounds for checking emptiness for M_WA and M_SC.

THEOREM 6.4. Let ℳ = (Q, q_s, q_r, δ) be a RatFPM on an alphabet Σ, and let x ∈ (0, 1) be a rational number. Emptiness of ℒ_{<1}(ℳ) can be determined in PSPACE, while emptiness of ℒ_{<x}(ℳ) can be determined in co-R.E.

PROOF. From Lemma 6.2, we know that if ℒ_{<x}(ℳ) is non-empty then there are u, C such that C is on a good cycle and δ_u(q_s, C) > 1 − x. So the algorithm for checking non-emptiness non-deterministically guesses C ⊆ Q and the string u. The semi-decision procedure first checks that C is on a good cycle of G(ℳ), which can be done using space that is polynomial in the size of ℳ, without explicitly constructing G(ℳ). Next, it is easy to see that there is a semi-decision procedure to check whether δ_u(q_s, C) > 1 − x. This proves that the non-emptiness of ℒ_{<x}(ℳ) is recursively enumerable.

Based on the observations in the previous paragraph, in order to prove that non-emptiness of ℒ_{<1}(ℳ) is in PSPACE, all we need to show is that the check δ_u(q_s, C) > 0 can be accomplished in PSPACE. Notice that δ_u(q_s, C) > 0 iff the vertex V′ reached on the sequence u in G(ℳ) is such that C ∩ V′ ≠ ∅. Thus the PSPACE upper bound can be shown by observing that this check can be done by guessing the symbols of u incrementally, following a path in G(ℳ).
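Specialized to x = 1, Lemma 6.2 turns non-emptiness of ℒ_{<1}(ℳ) into a pure graph question on G(ℳ): some vertex reachable from {q_s} must intersect a set C lying on a good cycle. The sketch below is our own brute-force rendition (a real PSPACE procedure would guess u and C rather than enumerate); it is run on the monitor from the proof of Theorem 5.8, whose ℒ_{<1} is non-empty since 0^ω is accepted.

```python
from itertools import combinations

def post(delta, V, a):
    """Successor of vertex V in the deterministic graph G(M) on symbol a."""
    return frozenset(q2 for (q1, b, q2), p in delta.items()
                     if b == a and q1 in V and p > 0)

def on_good_cycle(delta, sigma, qr, V):
    """Is the good vertex V on a cycle of G(M) avoiding qr throughout?"""
    seen, todo = set(), [post(delta, V, a) for a in sigma]
    while todo:
        W = todo.pop()
        if qr in W:                 # not good: any cycle through W is spoiled
            continue
        if W == V:
            return True
        if W not in seen:
            seen.add(W)
            todo.extend(post(delta, W, a) for a in sigma)
    return False

def nonempty_lt1(delta, sigma, qs, qr):
    """L_{<1}(M) != {} iff a vertex reachable from {qs} in G(M) meets
    some set C lying on a good cycle (Lemma 6.2 with x = 1)."""
    reach, todo = set(), [frozenset({qs})]
    while todo:
        V = todo.pop()
        if V not in reach:
            reach.add(V)
            todo.extend(post(delta, V, a) for a in sigma)
    good = sorted({q for (q1, _, q2) in delta for q in (q1, q2)} - {qr})
    cyc = [frozenset(c) for k in range(1, len(good) + 1)
           for c in combinations(good, k)
           if on_good_cycle(delta, sigma, qr, frozenset(c))]
    return any(V & C for V in reach for C in cyc)

# The three-state monitor from the proof of Theorem 5.8.
delta = {('qs', '1', 'qr'): 1.0, ('qs', '0', 'q'): 0.5, ('qs', '0', 'qs'): 0.5,
         ('q', '0', 'q'): 1.0, ('q', '1', 'qs'): 1.0,
         ('qr', '0', 'qr'): 1.0, ('qr', '1', 'qr'): 1.0}
print(nonempty_lt1(delta, '01', 'qs', 'qr'))   # True
```

Here C = {q} lies on a good self-loop under input 0, and the reachable vertex {q, q_s} intersects it, certifying non-emptiness.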
M_NC: We conclude this section on upper bounds by showing that the emptiness problem for M_NC is recursively enumerable.

THEOREM 6.5. Let ℳ = (Q, q_s, q_r, δ) be a RatFPM on an alphabet Σ, and let x ∈ (0, 1) be a rational number. The problem of checking the emptiness of ℒ_{≤x}(ℳ) is in R.E.
PROOF. A collection W ⊆ Σ⁺ of finite strings will be called a witness if for every u ∈ W, δ_u(q_s, q_r) > x, and for every α ∈ Σ^ω there is a u ∈ W such that u is a prefix of α. Observe that if there is a witness set W then for every α ∈ Σ^ω, μ^rej_{ℳ, α} > x, and so ℒ_{≤x}(ℳ) = ∅.

Our main observation is that ℒ_{≤x}(ℳ) = ∅ if and only if there is a finite set W ⊆ Σ⁺ such that W is a witness. Before proving this, let us note that this gives us a semi-decision procedure to check the emptiness of ℒ_{≤x}(ℳ): the algorithm guesses a finite set of finite strings W and checks that W is indeed a witness set. It is easy to see that checking whether W is a witness is decidable, which proves the theorem.

We now prove the key technical claim that ℒ_{≤x}(ℳ) = ∅ if and only if there is a finite set W ⊆ Σ⁺ such that W is a witness. Clearly, the existence of a finite witness set implies that ℒ_{≤x}(ℳ) = ∅, and so the main challenge is in proving the converse.
Let ℒ_{≤x}(ℳ) = ∅. Consider the set G = {u ∈ Σ* | δ_u(q_s, q_r) ≤ x}. Note that the empty string ε ∈ G. The set G can be viewed as the vertices of a Σ-branching tree, because G is prefix closed. We will abuse notation and view G both as a tree and as a collection of strings. Suppose G is not finite. Then by König's Lemma, G has an infinite path, which means that there is α ∈ Σ^ω such that every prefix of α is in G. This means that for every prefix u of α, δ_u(q_s, q_r) ≤ x, and hence μ^rej_{ℳ, α} ≤ x. This contradicts our assumption that ℒ_{≤x}(ℳ) = ∅. Hence G must be finite.

Consider the set W = (GΣ) − G. Observe that W is finite. Second, since W ⊆ Σ⁺ − G, we have for every u ∈ W, δ_u(q_s, q_r) > x. Finally, since G is finite, every α ∈ Σ^ω has infinitely many prefixes that are not in G; from this, it follows that there is a u ∈ W such that u is a prefix of α. This shows that W is a finite witness set, and this completes the proof of the theorem.
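The tree G from the proof can be explored directly with exact rational arithmetic; if the exploration terminates, the frontier W = (GΣ) − G is a finite witness certifying ℒ_{≤x}(ℳ) = ∅. A sketch with a hypothetical one-symbol RatFPM of our own (for monitors whose language is non-empty, the loop would run forever, matching the R.E. upper bound):

```python
from fractions import Fraction

# Hypothetical RatFPM over {a}: on each 'a', q_s moves to the absorbing
# reject state q_r with probability 1/2.
delta = {('qs', 'a', 'qs'): Fraction(1, 2), ('qs', 'a', 'qr'): Fraction(1, 2),
         ('qr', 'a', 'qr'): Fraction(1, 1)}
sigma, qs, qr = 'a', 'qs', 'qr'
x = Fraction(3, 4)                 # cut-point: is L_{<=3/4}(M) empty?

def reject_prob(word):
    """delta_word(q_s, q_r), computed exactly with rationals."""
    dist = {qs: Fraction(1)}
    for c in word:
        nxt = {}
        for q, p in dist.items():
            for (q1, b, q2), t in delta.items():
                if q1 == q and b == c:
                    nxt[q2] = nxt.get(q2, Fraction(0)) + p * t
        dist = nxt
    return dist.get(qr, Fraction(0))

# Grow the tree G of prefixes u with reject_prob(u) <= x; when it stays
# finite, W = (G . Sigma) - G is a finite witness and L_{<=x}(M) = {}.
frontier, witness = [''], []
while frontier:
    u = frontier.pop()
    for a in sigma:
        v = u + a
        (frontier if reject_prob(v) <= x else witness).append(v)
print(witness)   # ['aaa']: after three a's the rejection probability is 7/8 > 3/4
```

Here G = {ε, a, aa} is finite, so the single word "aaa" covers every infinite word in {a}^ω and certifies emptiness.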
6.2 Emptiness Problem: Lower Bounds

In this section, we show that the upper bounds proved in Section 6.1 for the various classes of monitors are tight. We first prove lower bounds for M_WA and M_SA, before considering the classes M_SC and M_NC.
M_WA and M_SA: We will show the PSPACE-hardness of the emptiness problem for M_WA and M_SA.

THEOREM 6.6. For a RatFPM ℳ, the problems of checking the emptiness of ℒ_{≤0}(ℳ) and ℒ_{<1}(ℳ) are PSPACE-hard.
PROOF. The PSPACE-hardness for ℒ_{<1}(ℳ) might appear deceptively similar to the PSPACE-hardness of the universality problem for non-deterministic automata; however, we could not find any easy reduction from the latter problem to the former. We prove the hardness result by reducing from the membership problem for a language in PSPACE. Let L ∈ PSPACE and let M be a single-tape deterministic Turing machine that accepts L in space p(n) for some polynomial p, where n is the length of its input. Without loss of generality, let M = (Q, Σ, Γ, B, δ, q_0, q_a, q_r), where Q is a finite set of control states; Σ, Γ are the input and tape alphabets respectively, with Σ ⊆ Γ; B ∈ Γ − Σ is the blank symbol; δ : Q × Γ → Q × Γ × {−1, 1} is the transition function, with −1 denoting a move of the tape head to the left and 1 a move to the right; q_0 ∈ Q is the initial state; q_a ∈ Q is the unique accepting state, i.e., if x ∈ L then M on input x eventually reaches control state q_a; q_r ∈ Q is the unique rejecting state, i.e., if x ∉ L then M on input x eventually reaches q_r. Let c = |Γ| + |Γ × Q|, m = p(n) and N = c^m. Without loss of generality, we can make the following simplifying assumptions about M.

1. M cannot take any further steps once its control state is either q_a or q_r. Thus, q_a and q_r are halting states of the machine M.
2. When started in any configuration, M reaches one of the halting states q_a or q_r within N steps.
Let Config = Γ*(Γ × Q)Γ*. A configuration of M on an input of length n is a string of length m = p(n) in Config, where the unique symbol in Γ × Q indicates the head position as well as the control state. For a configuration s = s_1 s_2 ⋯ s_m ∈ Config, st(s) denotes the control state in s, pos(s) denotes the position of the head, and sym(s) denotes the input symbol being scanned. More formally, if i = pos(s) then s_i = (sym(s), st(s)).

Let Σ_c = {1, …, p(n)} × (Γ × Q). For a configuration s, stamp(s) will denote the triple (pos(s), (sym(s), st(s))) ∈ Σ_c, and for a sequence of configurations s̄ = s_1, s_2, …, s_ℓ, stamp(s̄) ∈ Σ_c* is the word stamp(s_1) stamp(s_2) ⋯ stamp(s_ℓ).
Recall that a valid computation of M starting from a configuration s is a sequence of configurations s_1, s_2, …, s_ℓ such that s = s_1 and for each i < ℓ, s_{i+1} follows from s_i by one step of M. The initial configuration on an input of size n is of the form (Σ × {q_0})Σ^{n−1} B^{m−n}. We will say that a word u ∈ Σ_c* is valid from configuration s iff there is a valid computation s̄ starting from s such that u = stamp(s̄); observe that because M is deterministic there is at most one valid computation s̄ such that u = stamp(s̄). Finally, we will say u ∈ Σ_c* is valid if it is valid from some configuration s.
Let w = w_1 w_2 ⋯ w_n be an input of length n to M. We will construct a RatFPM M_w such that ℒ_{<1}(M_w) ≠ ∅ (and ℒ_{≤0}(M_w) ≠ ∅) if and only if w ∈ L. Formally, M_w = (Q_w, q_{s_w}, q_{r_w}, δ_w) will be a RatFPM over the alphabet Σ_M = Σ_c ∪ {#}. The set of states is Q_w = {q_{s_w}, q_{r_w}} ∪ ({1, …, m} × Γ × Q) ∪ ({1, …, m} × Γ). Thus, a state of M_w is either q_{s_w}, or q_{r_w}, or of the form (i, g), where 1 ≤ i ≤ m and g ∈ Γ ∪ (Γ × Q). Intuitively, M_w being in state (i, g) denotes that the i-th element of the current configuration of the computation of M has value g.
Before defining the transitions of M_w, we define some concepts that we will find useful. Consider any symbol (i, (a, q)) ∈ Σ_n. Note that such a triple can either be an input symbol or a state of M_w. Let δ(q, a) = (q′, b, e). For such a pair (q, a), we define next_state(i, (a, q)) and next_val(i, (a, q)) to be q′ and b, respectively. We also define next_pos(i, (a, q)) to be i + e. Given two such triples (i, (a, q)) and (i′, (a′, q′)), we say that (i′, (a′, q′)) is a successor of (i, (a, q)) iff i′ = next_pos(i, (a, q)) and q′ = next_state(i, (a, q)).
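These definitions are mechanical to transcribe; the following sketch (our helper names, not the paper's, with the machine's δ encoded as a dictionary) checks the successor relation directly:

```python
def successor(trans, x, y):
    """trans maps (q, a) -> (q2, b, e) with e in {-1, +1}: the machine's delta.
    x = (i, (a, q)) and y = (j, (b, p)) are triples in Sigma_n; decide
    whether y is a successor of x, per the definition above."""
    (i, (a, q)), (j, (b, p)) = x, y
    q2, _, e = trans[(q, a)]            # next_state and the head move
    return j == i + e and p == q2       # next_pos and next_state must match

trans = {("q0", "a"): ("q1", "b", 1)}   # delta(q0, a) = (q1, b, +1)
assert successor(trans, (1, ("a", "q0")), (2, ("c", "q1")))
assert not successor(trans, (1, ("a", "q0")), (3, ("c", "q1")))
```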
The transitions of M_w are defined as follows. From the initial state q_in, on input #, there are transitions to the states (i, u_i), for each i ∈ {1, . . . , m}, where u_1 = (a_1, q_0), for 1 < i ≤ n, u_i = a_i, and for n < i ≤ m, u_i = B; the probability of each of these transitions is 1/m. Thus the input # "sets up" the initial configuration when M_w is in the initial state q_in. From every other state, on input #, there is a transition to the reject state q_rj with probability 1. Also, from the state q_in, on every input other than # there is a transition to q_rj with probability 1. From q_rj, on every input there is a transition back to itself with probability 1.
We now define the transitions on the input symbols in Σ_n. Consider an input symbol x of the form (j, (b, p)). Intuitively, such an input denotes trace(c) of the next configuration c in the computation; thus, it denotes the new head position, the new state, and the new symbol being scanned. If p = q_r, then from every state there is a transition to the reject state q_rj with probability 1 on input x. We have the following transitions on input x for the case when p = q_a. From every state of M_w of the form (i, d), where d ∈ Γ, there is a transition to q_in with probability 1. From every state of the form (i, (d, q′)), there is a transition to q_in with probability 1 if x is a successor of (i, (d, q′)); otherwise there is a transition to q_rj from (i, (d, q′)) with probability 1. Intuitively, these transitions allow the automaton to start from the initial state again, if the computation described by the input symbols is accepting. Next, we describe the transitions when p ∉ {q_a, q_r}. From a state of the form (i, d), where d ∈ Γ, transitions on input x = (j, (b, p)) are defined as follows. If i = j and d ≠ b then there is a single transition with probability 1 to the reject state q_rj; this means the contents of the cell specified by x do not match the one specified by the state. If i = j and d = b then there is a single transition with probability 1 to the state
(i, (b, p)). If i ≠ j then there are two transitions, each with probability 1/2, to the states (i, d) and (j, (b, p)). Note that the former transition is a self loop denoting that the contents of the cell did not change; the latter, called a cross transition, is to the state denoting the new head position and control state. Transitions from a state of the form (i, (d, q′)) on input x = (j, (b, p)), where p ∉ {q_a, q_r}, are defined as follows. If x is a successor of the pair (i, (d, q′)) then there are two transitions, each with probability 1/2, to the states (i, e) and (j, (b, p)), where e = next_val(i, (d, q′)). If the above condition does not hold then there is a single transition to the reject state q_rj with probability 1.
Before proving the correctness of the reduction, we formally spell out some properties that M_w satisfies. These properties capture the intuition behind the correctness. Consider an input sequence u ∈ Σ_n^* such that u does not contain # and does not contain any symbol of the form (i, (d, q_a)) or of the form (i, (d, q_r)) for any i, d.

A. Let σ be a valid computation starting from configuration c and ending in configuration c′ such that u = trace(σ). Suppose further that the i-th symbol of c is d ∈ Γ ∪ (Γ × Q). Then on input u from state (i, d), M_w has a non-zero probability of reaching the state s ∈ Q_w iff s = (j, e) for some j ∈ {1, . . . , m}, e ∈ Γ ∪ (Γ × Q) such that the j-th symbol of c′ is e, and either j = i or u contains a symbol of the form (j, (b, p)) for some b, p. This is because of the cross transitions. One consequence of this is that, on input u from state (i, d), M_w has probability zero of reaching the rejecting state q_rj.
B. If u is not valid starting from any configuration then from any state (i, d), on input u, M_w reaches the reject state q_rj with probability at least 1/2^|u|, where |u| is the length of u.
The above property is proved to hold as follows. Since u is not valid from any configuration, it can be shown that one of the following two conditions is satisfied: (i) there exist two input symbols x, y appearing consecutively in that order in u such that y is not a successor of x; (ii) there exist two input symbols x = (j, (b, p)), y = (j, (e, r)) such that x appears sometime before y in u, no other symbol of the form (j, (b′, p′)), for any b′, p′, appears in between them, and e ≠ next_val(j, (b, p)). Let u′ be the smallest prefix of u that violates condition (i) or (ii). We show that q_rj is reachable from (i, d) on input u′ by a sequence of transitions of M_w. The lower bound on the probability follows since the probability of each of these transitions is at least 1/2. Assume that u′ violates condition (i). Then u′ = u″xy. The state x is reachable from (i, d) on input u″x and there is a transition from state x to q_rj on input y, and hence q_rj is reachable from (i, d). Now assume that u′ violates condition (ii). In this case u′ = vxwy, where v, w are input sequences and x, y are input symbols as given in (ii). It should be easy to see that the states x and (j, next_val(x)), respectively, are reachable from (i, d) on the input sequences vx and vxw. From this, we see that q_rj is reachable from (i, d) on input u′, since there is a transition from (j, next_val(x)) on the input symbol y to q_rj, as e ≠ next_val(x).
C. Let c_0 be the initial configuration, i.e., c_0 = (a_1, q_0), a_2, . . . , a_n, B, . . . , B. Let u ∈ Σ_n^* be such that it is not valid from c_0. Then the finite string #u is rejected by M_w with probability greater than or equal to (1/2)^|u|.
We will now show that our construction of M_w satisfies the following property. If σ is the accepting computation of M on w then prob^acc_{M_w, α} = 1, where α = (# trace(σ))^ω. On the other hand, if M does not accept w then prob^rej_{M_w, α} = 1 for every α ∈ Σ_w^ω. Based on this property, ℒ_{≤0}(M_w) ≠ ∅ and ℒ_{<1}(M_w) ≠ ∅ iff M accepts w, which proves the PSPACE-hardness of the non-emptiness problem. We now prove that our claim holds.
Let σ be the accepting computation of M on w. Observe that on input (i, (d, q_a)), M_w goes to the initial state q_in with probability 1 from every state except the reject state q_rj. This, coupled with property A above, ensures that δ_u(q_in, q_in) = 1, where u = # trace(σ). Thus, u^ω is accepted with probability 1 by M_w.
Now we prove the more difficult part of the claim, namely, that if M rejects w, then M_w rejects every input sequence with probability 1. The proof is by cases on the form of the input word α to M_w. Observe that if α is not of the right form, i.e., does not begin with #, or does not have a # immediately following a symbol of the form (i, (d, q_a)), or some # (other than the first one) is not preceded by a symbol of the form (i, (d, q_a)), then α is rejected with probability 1. Next, if α contains any symbol of the form (i, (d, q_r)) then also α is rejected with probability 1.
Let us now consider the case when α has infinitely many symbols of the form (i, (d, q_a)). Since M_w will reject any input in which such symbols are not immediately followed by #, we can assume without loss of generality that α = # u_1 # u_2 # . . ., where each u_i ∈ Σ_n^* and ends with a symbol of the form (j, (d, q_a)). Now, since w is rejected by M, for every i ≥ 1, we have the following properties. The sequence u_i is not valid from the initial configuration. Hence, from property C, M_w, on input # u_i, reaches the reject state with probability at least (1/2)^|u_i|. From the simplifying assumption 2, we have |u_i| ≤ N, and hence the probability of rejection of # u_i is at least (1/2)^N. Hence, prob^rej_{M_w, α} = 1.
Finally, suppose α has only finitely many symbols of the form (i, (d, q_a)) (each of which is followed by #) and no symbols of the form (i, (d, q_r)). Observe that we may also assume that every # symbol (except the first) is immediately preceded by a symbol of the form (i, (d, q_a)) (as otherwise α will be rejected with probability 1). Thus α = u′ # β, where β ∈ Σ_n^ω and does not contain any symbol of the form (i, (d, q_a)) or (i, (d, q_r)). Let us divide β into sequences of length N, i.e., β = u_1 u_2 . . ., where |u_i| = N. Based on the simplifying assumption 2, made about M, we can conclude that each u_i is not valid (from any configuration). Let Q_0 be the set of states of M_w reached after the input sequence u′ #, and for i ≥ 1, let Q_i be the set of states of M_w reached after the input sequence u′ # u_1 . . . u_i. From property B, we see that for i ≥ 1, from every state q′ in Q_{i−1}, the state q_rj is reachable on the input sequence u_i with probability at least 1/2^N. Hence we see that α is rejected with probability 1.
M_SC: We will show that the emptiness problem for M_SC is co-R.E.-hard. The proof will rely on the co-R.E.-hardness of the emptiness problem for Probabilistic Finite Automata (PFA). Therefore, before presenting our proof, we recall the basic definitions associated with such automata. Formally, a PFA over alphabet Σ is ℳ = (Q, q_s, F, δ), where Q is a finite set of states, q_s ∈ Q is the initial state, F ⊆ Q are the final states, and δ : Q × Σ × Q → [0, 1] is a probabilistic transition function that satisfies the same properties as the one for FPMs. Given x ∈ [0, 1], ℒ_{>x}(ℳ) = {u ∈ Σ^* | Σ_{q∈F} δ_u(q_s, q) > x}. The main result that we will use is the following.

THEOREM 6.7 (CONDON-LIPTON [Condon and Lipton 1989]). Given a PFA ℳ over alphabet Σ, the problem of determining if ℒ_{>1/2}(ℳ) = ∅ is co-R.E.-complete.
Using this result we can show,
THEOREM 6.8. For a FPM ℳ over an alphabet Σ and rational x ∈ (0, 1), the problem of determining if ℒ_{<x}(ℳ) = ∅ is co-R.E.-hard.
PROOF. We will actually prove this result for the case when x = 1/2; Lemma 4.10 will then allow us to conclude for any x ∈ (0, 1). The proof of co-R.E.-hardness relies on a reduction from the emptiness problem of PFAs. Let ℳ = (Q, q_s, F, δ) be a PFA over the alphabet Σ. We will construct a FPM ℳ′ such that ℒ_{>1/2}(ℳ) = ∅ iff ℒ_{<1/2}(ℳ′) = ∅.

Pick a new symbol # ∉ Σ and let Σ′ = Σ ∪ {#}. Formally, ℳ′ = (Q′, q_s, q_r, δ′) will be a FPM over alphabet Σ′. The set of states Q′ = Q ∪ {q_a, q_r}, where q_a, q_r will be assumed to be new states not in Q. δ′ will be defined as follows. First we describe the transitions out of the new states q_a and q_r: for every a ∈ Σ′, δ′(q_a, a, q_a) = δ′(q_r, a, q_r) = 1. Next we describe the transitions on the new symbol #: if q ∈ F then δ′(q, #, q_a) = 1 and if q ∈ Q − F then δ′(q, #, q_r) = 1. Finally, for all the old states q ∈ Q, on all the input symbols a ∈ Σ, we define transitions as follows:

    δ′(q, a, q′) = 1/3                if q′ = q_a or q′ = q_r
    δ′(q, a, q′) = (1/3) δ(q, a, q′)  if q′ ∈ Q
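The construction of ℳ′ is easy to carry out programmatically; the following sketch (our helper names, with transitions as a dictionary of exact rationals) builds δ′ from a given PFA:

```python
from fractions import Fraction

def pfa_to_fpm(states, sigma, delta, F):
    """Theorem 6.8 reduction (sketch): given a PFA (states, sigma, delta, F)
    with delta[(q, a, q2)] = probability, return delta' of the FPM M' over
    sigma + ['#'], with new absorbing states 'q_a' and 'q_r'."""
    third = Fraction(1, 3)
    d = {}
    for a in list(sigma) + ["#"]:                  # q_a, q_r are absorbing
        d[("q_a", a, "q_a")] = Fraction(1)
        d[("q_r", a, "q_r")] = Fraction(1)
    for q in states:                               # '#' resolves acceptance
        d[(q, "#", "q_a" if q in F else "q_r")] = Fraction(1)
    for q in states:                               # 1/3 each to q_a and q_r ...
        for a in sigma:
            d[(q, a, "q_a")] = third
            d[(q, a, "q_r")] = third
    for (q, a, q2), p in delta.items():            # ... plus old moves scaled by 1/3
        d[(q, a, q2)] = third * p
    return d

dp = pfa_to_fpm(["s"], ["a"], {("s", "a", "s"): Fraction(1)}, {"s"})
# every (state, letter) row still sums to 1
assert sum(p for (q, a, _), p in dp.items() if q == "s" and a == "a") == 1
```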
We will now prove the correctness of the above reduction. On any string α ∈ Σ^ω (i.e., when α does not have any # symbols), we have prob^rej_{ℳ′, α} = Σ_{i=1}^∞ (1/3)^i = 1/2. Thus, ℒ_{<1/2}(ℳ′) ∩ Σ^ω = ∅. Let us now consider α ∈ (Σ′)^ω − Σ^ω. Such a word can be written as α = u # α′, where u ∈ Σ^*. Let p = Σ_{q∈F} δ_u(q_s, q), i.e., the acceptance probability of u in ℳ. Now, suppose p > 1/2. Then

    prob^acc_{ℳ′, α} ≥ (Σ_{i=1}^{|u|} (1/3)^i) + (1/3)^{|u|} p = (1/2)(1 − (1/3)^{|u|}) + (1/3)^{|u|} p > 1/2.

By a symmetric argument, if p ≤ 1/2, then prob^rej_{ℳ′, α} ≥ 1/2. Thus, a word of the form u # α′ is in ℒ_{<1/2}(ℳ′) iff u ∈ ℒ_{>1/2}(ℳ). Hence, ℒ_{<1/2}(ℳ′) = ∅ iff ℒ_{>1/2}(ℳ) = ∅.
M_NC: We will show that the emptiness problem for M_NC is R.E.-hard. The proof will be a highly non-trivial modification of the proof of co-R.E.-hardness of emptiness of PFA [Condon and Lipton 1989].

THEOREM 6.9. For a FPM ℳ and rational x ∈ (0, 1), the problem of determining if ℒ_{≤x}(ℳ) = ∅ is R.E.-hard.
PROOF. It suffices to show that the problem of determining whether ℒ_{≤1/2}(ℳ) is non-empty is co-R.E.-hard. This is exhibited by a reduction from the problem of non-termination of a deterministic two-counter machine on an empty input. The reduction is carried out in two steps.

The first step of the construction is a modification of the PFA used in the proof of co-R.E.-hardness of emptiness of PFA. Given a deterministic 2-counter machine C we construct a monitor ℳ as follows. The monitor ℳ has an absorbing accept state q_a and an absorbing reject state q_r (by an absorbing state q, we mean that the probability of transitioning from q to q is 1 for every input symbol). The input alphabet of ℳ is the set of control states of C, left bracket (, right bracket ), 0, 1, a and b. The constructed monitor checks whether a given sequence of inputs determines a valid computation of C on empty input. The computation of C is represented as a sequence of consecutive configurations of C. A configuration of C represented by the tuple (q, n, n′) is input as left bracket (; followed by a control state q; followed by two bits which indicate whether the two counters are zero or not (the bit 0 represents a counter value of 0 and the bit 1 represents a non-zero counter value); followed by
the input sequence a^n b^{n′} which represents the counter values in unary; and ending in right
bracket ). If q_s is the initial state of ℳ, the construction will ensure that for any word α and i ∈ ℕ, δ_{α[1:i]}(q_s, q_r) ≥ δ_{α[1:i]}(q_s, q_a). If a word α represents a valid infinite computation then δ_{α[1:i]}(q_s, q_r) = δ_{α[1:i]}(q_s, q_a) for all i ≥ 1 such that α(i) = ). If a word α does not represent a valid infinite computation then there would be an i_0 such that δ_{α[1:i]}(q_s, q_r) > δ_{α[1:i]}(q_s, q_a) for all i ≥ i_0.
The machine ℳ checks whether, in the first input configuration, the control state is the start state of C and both the counter values are 0. Otherwise, it rejects the input with probability 1 (i.e., makes a transition to q_r with probability 1). Similarly, if ever the monitor ℳ receives an input symbol of the wrong kind (for example, if the monitor receives a state q when it is expecting a or b), then ℳ rejects the rest of the input with probability 1.
The monitor ℳ also has to check that consecutive input configurations, say (q, n, n′) and (q′, m, m′), are in accordance with the transition function of the 2-counter automaton. If there is no transition from q to q′ in C, or if one of the counter values m, m′ is zero (or non-zero) when it is not supposed to be, then the rest of the input is rejected with probability 1. The main challenge is in checking the counter values across the transition using only a finite number of states. This problem reduces to checking whether two strings have equal length using only the finite memory of the monitors. For the case of PFAs, this is accomplished by a weak equality test described in [Condon and Lipton 1989; Freivalds 1981] as follows. We recall the equality test described there. Suppose we want to check that for two strings a^n and b^m, n = m. Then while processing a^n:

(1a). Toss 2n fair coins and note if all of them turned up heads.
(2a). Toss a separate set of n fair coins and note if all of them turned up heads.
(3a). Toss yet another set of n fair coins and note if all of them turned up heads.

While processing b^m:

(1b). Toss 2m fair coins and note if all of them turned up heads.
(2b). Toss a separate set of m fair coins and note if all of them turned up heads.
(3b). Toss yet another set of m fair coins and note if all of them turned up heads.
As in [Condon and Lipton 1989], we define event A as: either all coins turn up heads in (1a) or all coins turn up heads in (1b). We define event B as: either all coins turn up heads in both (2a) and (2b), or all coins turn up heads in both (3a) and (3b). We reject (that is, make a transition to q_r with probability 1) if event A is true and B is not true, and accept (that is, make a transition to q_a with probability 1) if B is true and A is not true. If n = m then, as explained in [Condon and Lipton 1989], the probability of acceptance and the probability of rejection are equal (note that each of them may be less than 1/2); otherwise the probability of transitioning to the reject state is strictly larger than the probability of acceptance. We shall slightly modify this test. The main problem in using this test directly is that the input configuration may have an unbounded number of b's, in which case we never make a transition to either q_a or q_r. Hence, in this case, we will not be able to guarantee that there is an i_0 such that δ_{α[1:i]}(q_s, q_r) > δ_{α[1:i]}(q_s, q_a) for each i > i_0.
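Since the six groups of coins are mutually independent, the acceptance and rejection probabilities of the test have a closed form; the sketch below (our helper names, not from the paper) verifies the claims above with exact rational arithmetic:

```python
from fractions import Fraction

def weak_equality_probs(n, m):
    """Exact P(reject) = P(A and not B) and P(accept) = P(B and not A)
    for the weak equality test on a^n, b^m."""
    half = Fraction(1, 2)
    p1a, p1b = half ** (2 * n), half ** (2 * m)   # all heads in (1a), in (1b)
    p2 = p3 = half ** (n + m)                     # all heads in (2a)&(2b), (3a)&(3b)
    pA = p1a + p1b - p1a * p1b                    # A: (1a) or (1b)
    pB = p2 + p3 - p2 * p3                        # B: ((2a)&(2b)) or ((3a)&(3b))
    # A and B use disjoint sets of coins, hence are independent.
    return pA * (1 - pB), pB * (1 - pA)           # (P(reject), P(accept))

rej, acc = weak_equality_probs(3, 3)
assert rej == acc and rej < Fraction(1, 2)        # n = m: equal, both < 1/2
rej, acc = weak_equality_probs(2, 5)
assert rej > acc                                  # n != m: rejection strictly larger
```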
We modify the weak equality test as follows. We will still conduct the experiments (1a), (2a), (3a) as above. Let A_0 be the event that all coins turn up heads in (1a). Now, when we process b^m we will still conduct experiments (1b), (2b) and (3b), but take certain extra actions while scanning the input, as described below.
—Assume first the case that event A_0 is true. Now, while scanning b^m, we check if both of the following events happen: at least one coin turns up tails in (2a) or (2b), and at least one coin turns up tails in (3a) or (3b). If both of the above events are true then we reject the input. Note that if m is unbounded, this rejection will happen with probability 1 given that A_0 is true. Also note that if m is bounded and both the above events happened, then the input would have been rejected anyway in the original weak equality test. If m is bounded and we have not rejected the input, then at the end of the scanning of b^m we check for events A and B as above. If A is true and B is false, we reject the input. If A is false and B is true, then we accept the input. Otherwise we continue processing the rest of the input.
—Assume now that A_0 is false. Now, while scanning b^m, check if all three of the following events happen: at least one coin turns up tails in (1b), at least one coin turns up tails in (2a) or (2b), and at least one coin turns up tails in (3a) or (3b). If all three of the above events are true then we start accepting (that is, transiting to q_a) and rejecting (that is, transiting to q_r) the succeeding input with probability 1/3 each until we detect the end of b^m. Note that if m is unbounded, then with probability 1 all three events will happen, and asymptotically we will accept and reject with probability 1/2 each. If m is bounded, then at the end of the scanning of b^m we check for events A and B as above (note that if the above three events happen at some point while scanning b^m then both A and B are going to be false). If A is true and B is false, we reject the input. If A is false and B is true, then we accept the input. Otherwise we continue processing the rest of the input.
These tests ensure that if m is unbounded, then the probability of transitioning to the reject state is > 1/2. In the case that m is bounded and n = m, the probability of transitioning to the accept state q_a is exactly equal to the probability of transitioning to the reject state q_r. If m is bounded and n ≠ m, then the probability of transitioning to the accept state q_a is strictly less than the probability of transitioning to the reject state q_r.
Finally, while checking the input, the monitor ℳ rejects if the state of the current configuration is a halting state. Let q_s be the start state of the monitor ℳ and δ the transition function of ℳ. For α, let δ_α(q_s, q_r) = lim_{i→∞} δ_{α[1:i]}(q_s, q_r) and δ_α(q_s, q_a) = lim_{i→∞} δ_{α[1:i]}(q_s, q_a). It is easy to see that for the monitor ℳ, a finite word u and an infinite word α, the following facts are true—

(1) δ_u(q_s, q_r) ≥ δ_u(q_s, q_a).
(2) If u represents a valid finite computation of the counter machine C, u does not contain the halting state and u ends in the symbol ), then δ_u(q_s, q_r) = δ_u(q_s, q_a) < 1/2.
(3) If u contains a halting state then δ_u(q_s, q_r) > 1/2.
(4) If the input α does not represent a valid infinite computation then there is some i_0 such that δ_{α[1:i]}(q_s, q_r) > δ_{α[1:i]}(q_s, q_a) for all i > i_0.
(5) If there is some unbounded counter in some input configuration, then δ_α(q_s, q_r) > 1/2. In particular, if α does not contain an infinite number of states of the counter machine C then δ_α(q_s, q_r) > 1/2. Furthermore, if the word α does not contain an infinite number of states of the counter machine then δ_α(q_s, q_r) + δ_α(q_s, q_a) = 1.
(6) If the input α represents a valid infinite computation then δ_α(q_s, q_r) = δ_α(q_s, q_a). Furthermore, if α(i) = ) then δ_{α[1:i]}(q_s, q_r) = δ_{α[1:i]}(q_s, q_a).
Now, we obtain ℳ_1 by modifying ℳ as follows. Whenever we process the control state in a new input configuration, we accept (that is, make a transition to q_a) and reject (that is,
make a transition to q_r) with probability 1/3 each. It is easy to see that for ℳ_1 and a word α, we have that

(1) If α contains a halting state or represents an invalid computation then prob^rej_{ℳ_1, α} > 1/2.
(2) If the input α represents a valid infinite computation then prob^rej_{ℳ_1, α} = 1/2.

Thus, ℒ_{≤1/2}(ℳ_1) is non-empty iff C has an infinite computation.
6.3 Universality Problem: Upper Bounds

In this section, we will establish upper bounds for the universality problem of RatFPMs.

M_SA: The universality problem for M_SA is easily seen to be in NL.

THEOREM 6.10. For a RatFPM ℳ on alphabet Σ, the problem of checking the universality of ℒ_{≤0}(ℳ) is in NL.
PROOF. Let ℳ = (Q, q_s, q_r, δ) and let Σ be the alphabet. Consider the directed graph G = (V, E) constructed (in log-space) as follows. The set of vertices V is Q, and (q_1, q_2) ∈ E iff there exists an a ∈ Σ such that δ(q_1, a, q_2) > 0. It is easy to see that ℒ_{≤0}(ℳ) is not universal iff there is a directed path from q_s to q_r in G. The result follows from the fact that the latter problem is known to be in NL, and NL = co-NL [Szelepcsenyi 1987; Immerman 1988].
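Concretely (ignoring the log-space bound), the check is an ordinary reachability test on the support graph; a sketch with our own names:

```python
from collections import deque

def leq0_universal(states, delta, q_s, q_r):
    """delta[(q, a, q2)] = probability. L_{<=0}(M) is universal iff the
    reject state q_r is unreachable from q_s in the support graph."""
    succ = {q: set() for q in states}
    for (q1, a, q2), p in delta.items():
        if p > 0:
            succ[q1].add(q2)          # edge iff some letter has positive prob
    seen, queue = {q_s}, deque([q_s])
    while queue:
        for q2 in succ[queue.popleft()]:
            if q2 not in seen:
                seen.add(q2)
                queue.append(q2)
    return q_r not in seen

delta = {("s", "a", "s"): 1.0, ("r", "a", "r"): 1.0}
assert leq0_universal(["s", "r"], delta, "s", "r")   # q_r unreachable
```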
M_WA: We will show that the universality problem for M_WA is in PSPACE. The proof depends on showing that the set ℒ_{=1}(ℳ) = {α ∈ Σ^ω | prob^rej_{ℳ, α} = 1}, whenever it is non-empty, contains an ultimately periodic word.
LEMMA 6.11. Given a FPM ℳ = (Q, q_s, q_r, δ), a state q ∈ Q and a finite word u ∈ Σ^+, let post(q, u) = {q′ ∈ Q | δ_u(q, q′) > 0}. The set ℒ_{=1}(ℳ) = {α ∈ Σ^ω | prob^rej_{ℳ, α} = 1} is non-empty iff there are u, v ∈ Σ^+ such that the following conditions hold—

—post(q_s, u) = post(q_s, uv)
—For each q ∈ post(q_s, u), q_r ∈ post(q, v).
PROOF. (⇒). Fix α such that prob^rej_{ℳ, α} = 1. For each integer i > 0, let Q_i = post(q_s, α[1 : i]). Now, since Q_i ⊆ Q and Q is a finite set, there exists an infinite sequence of natural numbers 1 < i_1 < i_2 < i_3 < . . . such that Q_{i_j} = Q_{i_k} for all j, k ∈ ℕ. Let u = α[1 : i_1]. As prob^rej_{ℳ, α} = 1, it must be the case that for each q ∈ post(q_s, u) there is some ℓ_q ≥ 1 such that δ_{α[i_1+1, i_1+ℓ_q]}(q, q_r) > 0. Pick i_j such that i_j ≥ i_1 + ℓ_q for each q ∈ post(q_s, u). Let v = α[i_1 + 1, i_j]. Clearly u, v are the required words.

(⇐). Suppose that u, v are such that post(q_s, u) = post(q_s, uv) and, for each q ∈ post(q_s, u), q_r ∈ post(q, v). From post(q_s, u) = post(q_s, uv), we have that post(q_s, u) = post(q_s, uv^i) for all i ∈ ℕ. Let Q_0 = post(q_s, u) = post(q_s, uv^i).

Consider the word α = uv^ω. We show that uv^ω ∈ ℒ_{=1}(ℳ). Note that we have prob^rej_{ℳ, α} = lim_{i→∞} δ_{uv^i}(q_s, q_r) = 1 − lim_{i→∞} δ_{uv^i}(q_s, Q_0 − {q_r}). Thus, it suffices to show that lim_{i→∞} δ_{uv^i}(q_s, Q_0 − {q_r}) = 0. Observe that—

(1) δ_u(q_s, Q_0 − {q_r}) ≤ 1.
(2) Let x = min_{q∈Q_0}(δ_v(q, q_r)). We have that x > 0. It is easy to see that δ_{uv^{i+1}}(q_s, Q_0 − {q_r}) ≤ (1 − x) δ_{uv^i}(q_s, Q_0 − {q_r}).
Thus, by induction, δ_{uv^i}(q_s, Q_0 − {q_r}) ≤ (1 − x)^i. As x > 0, we get that lim_{i→∞} δ_{uv^i}(q_s, Q_0 − {q_r}) = 0.
We get as an immediate corollary that there is an ultimately periodic word in ℒ_{=1}(ℳ) = {α ∈ Σ^ω | prob^rej_{ℳ, α} = 1} whenever this set is non-empty.

COROLLARY 6.12. Let ℳ = (Q, q_s, q_r, δ) be a FPM on an alphabet Σ. Then ℒ_{=1}(ℳ) = {α ∈ Σ^ω | prob^rej_{ℳ, α} = 1} is non-empty iff there are u, v ∈ Σ^+ such that uv^ω ∈ ℒ_{=1}(ℳ).
THEOREM 6.13. For a RatFPM ℳ on alphabet Σ, the problem of checking the universality of ℒ_{<1}(ℳ) is in PSPACE.

PROOF. Let ℳ = (Q, q_s, q_r, δ). The PSPACE algorithm actually checks non-universality of ℒ_{<1}(ℳ) by appealing to Lemma 6.11. The algorithm proceeds by first guessing u incrementally and storing the "current" value of post(q_s, u). After the guess is complete, it stores post(q_s, u) in its memory and starts guessing v incrementally. While guessing v incrementally, it stores post(q, v) for each q ∈ post(q_s, u). After it stops guessing v, it checks that a) q_r ∈ post(q, v) for each q ∈ post(q_s, u) and b) post(q_s, u) = ∪_{q∈post(q_s,u)} post(q, v) (note that post(q_s, uv) = ∪_{q∈post(q_s,u)} post(q, v)).
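Ignoring the space bound, the two conditions of Lemma 6.11 for given candidate words u and v can be checked directly by subset propagation; a sketch (our helper names):

```python
def post(delta, qs, word):
    """Image of the state set qs under word: states reached with positive prob."""
    cur = set(qs)
    for a in word:
        cur = {q2 for q in cur
               for (q1, b, q2), p in delta.items() if q1 == q and b == a and p > 0}
    return cur

def lemma_611_witness(delta, q_s, q_r, u, v):
    """Do u, v satisfy the two conditions of Lemma 6.11 (so that uv^w has
    rejection probability 1, i.e. L_{<1}(M) is not universal)?"""
    pu = post(delta, {q_s}, u)
    return pu == post(delta, pu, v) and all(q_r in post(delta, {q}, v) for q in pu)

# from every reachable state, one 'a' step can reach the reject state 'r'
delta = {("s", "a", "s"): 0.5, ("s", "a", "r"): 0.5, ("r", "a", "r"): 1.0}
assert lemma_611_witness(delta, "s", "r", "a", "a")   # so prob_rej(a^w) = 1
```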
M_NC: The universality problem for M_NC is easily seen to be in co-R.E.

THEOREM 6.14. For a RatFPM ℳ on alphabet Σ and rational x ∈ (0, 1), the problem of checking the universality of ℒ_{≤x}(ℳ) is in co-R.E.
PROOF. Let ℳ = (Q, q_s, q_r, δ). Now, ℒ_{≤x}(ℳ) is not universal iff there is a word α such that prob^rej_{ℳ, α} > x. The latter is true iff there is a finite word u ∈ Σ^* such that δ_u(q_s, q_r) > x. The result now follows.
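This gives a direct semi-decision procedure for non-universality: enumerate finite words and track the state distribution exactly. A sketch (our helper names, exact rationals):

```python
from fractions import Fraction
from itertools import count, product

def reject_prob(delta, states, q_s, q_r, u):
    """delta_u(q_s, q_r): probability of sitting in q_r after reading u.
    delta[(q, a, q2)] = transition probability."""
    dist = {q: Fraction(0) for q in states}
    dist[q_s] = Fraction(1)
    for a in u:
        new = {q: Fraction(0) for q in states}
        for (q, b, q2), p in delta.items():
            if b == a:
                new[q2] += dist[q] * p
        dist = new
    return dist[q_r]

def find_nonuniversality_witness(delta, states, sigma, q_s, q_r, x):
    """Halts (returning a witness u) iff L_{<=x}(M) is not universal;
    loops forever otherwise -- a co-R.E. check, as in Theorem 6.14."""
    for n in count(1):
        for u in product(sigma, repeat=n):
            if reject_prob(delta, states, q_s, q_r, u) > x:
                return u

h = Fraction(1, 2)
delta = {("s", "a", "s"): h, ("s", "a", "r"): h, ("r", "a", "r"): Fraction(1)}
assert find_nonuniversality_witness(delta, ["s", "r"], ["a"], "s", "r", h) == ("a", "a")
```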
M_SC: The universality problem for M_SC is in Π¹₁.

THEOREM 6.15. For a RatFPM ℳ on alphabet Σ and rational x ∈ (0, 1), the problem of checking the universality of ℒ_{<x}(ℳ) is in Π¹₁.

PROOF. Let ℳ = (Q, q_s, q_r, δ). Now, ℒ_{<x}(ℳ) is not universal iff there is a word α ∈ Σ^ω such that prob^rej_{ℳ, α} ≥ x. Now,

    (prob^rej_{ℳ, α} ≥ x) ⇔ ∀j > 0. ∃i > 0. (δ_{α[1:i]}(q_s, q_r) > x − 1/j).

Thus, we get

    ℒ_{<x}(ℳ) = Σ^ω ⇔ ∀α. ∃j > 0. ∀i > 0. (δ_{α[1:i]}(q_s, q_r) ≤ x − 1/j).

From this, it is easy to see that the problem of checking the universality of ℒ_{<x}(ℳ) is in Π¹₁.
6.4 Universality Problem: Lower Bounds

In this section, we will show that the upper bounds proved in Section 6.3 for the various classes of monitors are tight.

M_SA: The NL-hardness of the universality problem for M_SA is proved by a reduction from the graph reachability problem.
THEOREM 6.16. For a RatFPM ℳ, the problem of checking the universality of ℒ_{≤0}(ℳ) is NL-hard.

PROOF. We will reduce the reachability problem in directed graphs to the non-universality problem for M_SA. Since reachability is NL-complete, and NL = co-NL [Szelepcsenyi 1987; Immerman 1988], the hardness of universality follows.

Let G = (V, E) be a directed graph with V as the set of vertices and E as the set of edges. Given v_1, v_2 ∈ V, the reachability problem is the problem of determining whether there is a directed path from v_1 to v_2. For the reachability problem, we can assume that for any v ∈ V, the set E_v = {v′ ∈ V | (v, v′) ∈ E} is non-empty.

We reduce (in log-space) the reachability problem to the universality problem of M_SA as follows. We pick a symbol b and let Σ = {b}. We construct a monitor ℳ = (V, v_1, v_2, δ) where δ is defined as follows.

    δ(v, b, v′) = 1          if v = v′ = v_2
    δ(v, b, v′) = 1/|E_v|    if (v, v′) ∈ E
    δ(v, b, v′) = 0          otherwise

It is easy to see that ℒ_{≤0}(ℳ) = Σ^ω iff there is no directed path from v_1 to v_2 in G.
M_WA: The PSPACE-hardness of the universality problem for M_WA is proved by a reduction from the universality problem of Finite State Machines.

THEOREM 6.17. For a RatFPM ℳ, the problem of checking the universality of ℒ_{<1}(ℳ) is PSPACE-hard.
PROOF. We reduce the problem of universality of Finite State Machines to the universality of ℒ_{<1}(ℳ). Given a finite state machine A = (Q, Δ, q_s, F) over the alphabet Σ, for q ∈ Q and a ∈ Σ, let Δ_{q,a} = {q′ | (q, a, q′) ∈ Δ}. Please note that for the universality problem, we can assume that Δ_{q,a} is non-empty, i.e., |Δ_{q,a}| > 0.

We reduce (in polynomial time) the universality problem of A to the universality problem of M_WA as follows. Pick a new symbol # ∉ Σ and two new states q_a, q_r ∉ Q, and construct a RatFPM ℳ = (Q ∪ {q_a, q_r}, q_s, q_r, δ) over Σ ∪ {#} where δ is defined as follows—

    δ(q, a, q′) = 1            if q = q′, q ∈ {q_a, q_r} and a ∈ Σ ∪ {#}
    δ(q, a, q′) = 1            if q ∈ F, q′ = q_a and a = #
    δ(q, a, q′) = 1            if q ∈ Q − F, q′ = q_r and a = #
    δ(q, a, q′) = 1/|Δ_{q,a}|  if q, q′ ∈ Q and (q, a, q′) ∈ Δ
    δ(q, a, q′) = 0            otherwise

For any word α over the alphabet Σ ∪ {#}, the following are easy to check—

(1) If α does not contain # then prob^rej_{ℳ, α} = 0.
(2) If α = u # β and u does not contain #, then prob^rej_{ℳ, α} = 1 iff u is not accepted by A.

Thus A is universal iff ℒ_{<1}(ℳ) is universal. The result now follows.
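The reduction itself is mechanical; a sketch (our helper names) building δ from an FSM given as a transition relation:

```python
from fractions import Fraction

def fsm_to_fpm(states, sigma, Delta, F):
    """Theorem 6.17 reduction (sketch). Delta is a set of (q, a, q2) triples,
    assumed non-blocking (every (q, a) has at least one successor).
    Returns delta of the RatFPM over sigma + ['#']."""
    d = {}
    for a in list(sigma) + ["#"]:                  # new absorbing states
        d[("q_a", a, "q_a")] = Fraction(1)
        d[("q_r", a, "q_r")] = Fraction(1)
    for q in states:                               # '#' tests membership in F
        d[(q, "#", "q_a" if q in F else "q_r")] = Fraction(1)
    for q in states:                               # uniform choice among FSM moves
        for a in sigma:
            succs = [q2 for (p, b, q2) in Delta if p == q and b == a]
            for q2 in succs:
                d[(q, a, q2)] = Fraction(1, len(succs))
    return d

d = fsm_to_fpm(["s", "f"], ["a"], {("s", "a", "f"), ("f", "a", "f")}, {"f"})
assert d[("s", "a", "f")] == 1 and d[("f", "#", "q_a")] == 1
```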
M_NC: The co-R.E.-hardness of determining the emptiness of ℒ_{>1/2}(ℳ) yields the co-R.E.-hardness of the universality problem for M_NC.
THEOREM 6.18. For a RatFPM ℳ on an alphabet Σ and x ∈ (0, 1), the problem of checking the universality of ℒ_{≤x}(ℳ) is co-R.E.-hard.

PROOF. We can assume without loss of generality that x = 1/2. Given a PFA ℳ′ = (Q, q_s, F, δ) over the alphabet Σ, pick two new states q_a, q_r ∉ Q and a new symbol # ∉ Σ. Now, construct a monitor ℳ = (Q ∪ {q_a, q_r}, q_s, q_r, δ′) over Σ ∪ {#} where δ′ is defined as follows—

    δ′(q, a, q′) = 1            if q = q′, q ∈ {q_a, q_r} and a ∈ Σ ∪ {#}
    δ′(q, a, q′) = 1            if q ∈ F, q′ = q_r and a = #
    δ′(q, a, q′) = 1            if q ∈ Q − F, q′ = q_a and a = #
    δ′(q, a, q′) = δ(q, a, q′)  otherwise

For any word α over the alphabet Σ ∪ {#}, the following are easy to check—

(1) If α does not contain # then prob^rej_{ℳ, α} = 0.
(2) If α = u # β and u does not contain #, then prob^rej_{ℳ, α} = Σ_{q′∈F} δ_u(q_s, q′).

Thus, the language ℒ_{>1/2}(ℳ′) is empty iff ℒ_{≤1/2}(ℳ) is universal. The result follows.
M SC: We now show that the universality problem for M SC is Ξ 11 -hard.
T HEOREM 6.19. For a RatFPM β³ on alphabet Ξ£ and π₯ β (0, 1), the problem of
checking the universality of β<π₯ (β³) is Ξ 11 -hard.
P ROOF. The following problem is known to be Ξ 11 -complete.
Problem: Given a non-deterministic two-counter machine π and a control state π of π,
check whether all computations of π visit π only a finite number of times.
Given a non-deterministic two-counter machine π and a state π of π, we will construct a
RatFPM such that β< 21 (β³) is universal iff computations of π visit π only a finite number
of times. This monitor is constructed in two steps. For the first step, we construct a monitor
which is a small modification of the monitor β³1 constructed in the proof of co-R.E.hardness of emptiness of ββ€ 12 (see Theorem 6.9). Observe that in that construction, given
a deterministic two-counter machine π 1 we constructed a monitor β³1 = (π1 , ππ , ππ , πΏ 1 )
with the following propertiesβ
(1) The alphabet Ξ£ of β³1 consists of (, ), the states of π 1 , 0, 1, π and π.
(2) β³1 has two absorbing states β an absorbing accept state ππ and an absorbing reject
state ππ .
(3) For all finite strings π’ on Ξ£, πΏπ’1 (ππ , ππ ) < 21 .
1
(4) For any infinite word πΌ, limπββ πΏπΌ[1:π]
(ππ , ππ ) = 12 iff πΌ represents an infinite valid
1
computation of π . The infinite computation is represented as a sequence of configurations (π0 , 0, 0)(π1 , π1 , π1 )(π2 , π2 , π2 ) . . . .
Now, given a non-deterministic two-counter machine M, let t₁, t₂, …, t_n be the set of all possible transitions of M. We choose a new symbol a_i for each t_i. The modified monitor ℳ₂ has as its alphabet Σ_new = Σ ∪ {a₁, a₂, …, a_n}. ℳ₂ works almost exactly as ℳ₁ except that it has to learn which transition is being used to go from (s_i, c_i, d_i) to (s_{i+1}, c_{i+1}, d_{i+1}). This is done by assuming that the input computation is given in the form (s₀, 0, 0)b₀(s₁, c₁, d₁)b₁(s₂, c₂, d₂)…. Here b_i ∈ {a₁, a₂, …, a_n} and "informs" ℳ₂ which transition to check while processing the new configuration. Of course, if in between configurations ℳ₂ does not receive such a symbol, ℳ₂ rejects the rest of the input with probability 1. It is easy to see that ℳ₂ = (Q₂, q_s, q_r, δ²) satisfies the following properties:
(1) ℳ₂ has two absorbing states: an absorbing accept state q_a and an absorbing reject state q_r.
(2) For all finite strings u over Σ_new, δ²_u(q_s, q_a) < 1/2.
(3) For any infinite word α, lim_{n→∞} δ²_{α[1:n]}(q_s, q_a) = 1/2 iff α represents an infinite valid computation of M.
Now, we construct ℳ as follows. The alphabet of ℳ will be Σ_new. For the states, we pick a new state q_new ∉ Q₂ and set Q = Q₂ ∪ {q_new}. The state q_new will be the reject state of ℳ. The state q_s is the start state of ℳ. The transition function δ of ℳ is obtained by modifying δ² as follows. The accept state q_a of ℳ₂ is no longer absorbing. From q_a we make a transition to q_new with probability 1/2 whenever we see the input symbol f (which is the given state in the Π¹₁-hard problem). With probability 1/2 we remain in q_a on the input symbol f. In other words, δ(q_a, f, q_new) = δ(q_a, f, q_a) = 1/2. Further, δ(q₂, a, q₂′) = δ²(q₂, a, q₂′) for all q₂, q₂′ ∈ Q₂ and a ∈ Σ_new, except when q₂ is q_a and a is the input symbol f. Also, δ(q_new, a, q_new) = 1 for all a ∈ Σ_new. For all other pairs of states and symbols, the transition probability is 0.
It is easy to see that for any word α, lim_{n→∞} δ_{α[1:n]}(q_s, q_new) ≤ 1/2. Furthermore, lim_{n→∞} δ_{α[1:n]}(q_s, q_new) = 1/2 iff α represents a valid infinite computation of M that visits f infinitely often. Thus ℒ_{<1/2}(ℳ) is universal iff all computations of M visit f only finitely many times.
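The halving argument behind this limit can be checked with a few lines (a simplified sketch with hypothetical names: we fix the mass p that has reached q_a, whereas in ℳ this mass itself only builds up toward 1/2 along a valid computation):

```python
from fractions import Fraction

# Once the run is in the accept state q_a with probability p, each
# occurrence of the distinguished symbol f moves half of the remaining
# q_a-mass to the new reject state q_new.
def reject_mass(p, k):
    """Probability of being in q_new after k occurrences of f,
    starting with mass p in q_a and no mass in q_new."""
    in_qa, in_new = Fraction(p), Fraction(0)
    for _ in range(k):
        in_new += in_qa / 2   # delta(q_a, f, q_new) = 1/2
        in_qa /= 2            # delta(q_a, f, q_a)   = 1/2
    return in_new

p = Fraction(1, 2)  # mass a valid computation pushes into q_a in the limit
# After k occurrences of f the reject mass is p * (1 - 2^(-k)): it
# approaches p = 1/2 from below and reaches 1/2 only in the limit,
# i.e., only when f occurs infinitely often along a valid computation.
assert reject_mass(p, 10) == p * (1 - Fraction(1, 2**10))
```

Exact rational arithmetic is used so that the geometric series p·(1/2 + 1/4 + … + 2^(−k)) = p·(1 − 2^(−k)) is verified without floating-point error.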
7. CONCLUSIONS
In this paper, we investigated the power of randomization in finite state monitors. We have classified the languages defined by FPMs based on the rejection probability and proved a number of results characterizing these classes. Interestingly, some of these classes allow us to go beyond safety and ω-regularity, yet remain within almost safety. We have also presented complexity results on the problems of checking emptiness and universality of languages defined by FPMs. In the future, we would like to explore applying the techniques developed here for practical monitoring and verification needs.
7.1 Acknowledgments
Rohit Chadha was supported in part by NSF grants CCF04-29639 and NSF CCF04-48178.
A. Prasad Sistla was supported in part by NSF CCF-0742686. Mahesh Viswanathan was
supported in part by NSF CCF04-48178 and NSF CCF05-09321.
REFERENCES
1997. Time rover. http://www.time-rover.com.
1999β2008. Proceedings of the Workshop in Runtime Verification. Electronic Notes in Theoretical Computer
Science.
A LPERN , B. AND S CHNEIDER , F. 1985. Defining liveness. Information Processing Letters 21, 181β185.
A MORIUM , M. AND ROSU , G. 2005. Efficient monitoring of omega-languages. In Proceedings of the International Conference on Computer Aided Verificationβ. 364β378.
ACM Journal Name, Vol. V, No. N, Month 20YY.
β
41
BAIER , C., B ERTRAND , N., AND G R OΜSSER , M. 2008. On decision problems for probabilistic buΜchi automata.
In 11th International Conference on Foundations of Software Science and Computational Structures, FoSSaCS. 287β301.
BAIER , C. AND G R OΜπ½ ER , M. 2005. Recognizing π-regular languages with probabilistic automata. In Proceedings of the IEEE Symposium on Login in Computer Science. 137β146.
BAUER , A., L EUCKER , M., AND S CHALLHART, C. 2006. Monitoring of real-time properties. In Proceedings
of the 26th Conference on Foundations of Software Technology and Theoretical Computer Science. 260β272.
C HADHA , R., S ISTLA , A. P., AND V ISWANATHAN , M. 2008. On the expressiveness and complexity of randomization in finite state monitors. In LICS 2008- 23rd Annual IEEE Symposium on Logic in Computer Science.
IEEE Computer Society, 18β29.
C HEN , F. AND ROSU , G. 2007. MOP: An efficient and generic runtime verification framework. In Proceedings
of the ACM International Conference on Object-Oriented Programming, Systems, Languages, and Applications. 569β588.
C ONDON , A. AND L IPTON , R. J. 1989. On the complexity of space bounded interactive proofs (extended
abstract). In 30th Annual Symposium on Foundations of Computer Science. 462β467.
D IAZ , M., J UANOLE , G., AND C OURTIAT, J. P. 1994. Observer β a concept for formal on-line validation of
distributed systems. IEEE Transactions on Software Engineering 20, 12, 900β913.
F REIVALDS , R. 1981. Probabilistic two-way machines. In Mathematical Foundations of Computer Science.
33β45.
G ATES , A. AND ROACH , S. 2001. Dynamics: Comprehensive support for run-time monitoring. Vol. 55. Elsevier
Publishing.
G ONDI , K., PATEL , Y., AND S ISTLA , A. P. 2009. Monitoring the full range of omega-regular properties of
stochastic systems. In Proceedings of International Conference on Verification, Model Checking and Abstract
Interpretation. Lecture Notes in Computer Science, vol. 5403. Springer, 105β119.
G ROUP, N. 2001. Intrusion detection systems β Group Test.
H AMLEN , K. W., M ORRISETT, G., AND S CHNEIDER , F. 2006. Computability classes for enforcement mechanisms. ACM Trans. Program. Lang. Syst. 28, 1, 175β205.
H AVELUND , K. AND ROSU , G. 2001. Monitoring Java Programs with JavaPathExplorer. Vol. 55. Elsevier
Publishing.
I MMERMAN , N. 1988. Nondeterministic Space is Closed under Complementation. SIAM Journal of Computing 17, 935β938.
J EFFERY, C., Z HOU , W., T EMPLER , K., AND B RAZELL , M. 1998. A lightweight architechture for program
execution monitoring. In ACM Workshop on Program Analysis for Software Tools and Engineering.
K EMENY, J. AND S NELL , J. 1976. Denumerable Markov Chains. Springer-Verlag.
K IM , M., V ISWANATHAN , M., B.-A BDULLAH , H., K ANNAN , S., L EE , I., AND S OKOLSKY, O. 1999. Formally
specified monitoring of temporal properties. In Proceedings of the IEEE Euromicro Conference on Real Time
Systems. 114β122.
KORTENKAMP, D., M ILAM , T., S IMMONS , R., AND F ERNANDEZ , J. L. 2001. Collecting and Analyzing Data
from Distributed Control Programs. Vol. 55. Elsevier Publishing.
L AMPORT, L. 1985. Logical foundation, distributed systems- methods and tools for specification. SpringerVerlag Lecture Notes in Computer Science 190.
L IGATTI , J. 2006. Policy enforcement via program monitoring. Ph.D. thesis, Princeton University.
L IGATTI , J., BAUER , L., AND WALKER , D. 2005. Edit automata: Enforcement mechanisms for run-time security policies. International Journal of Information Security 4, 1β2 (Feb.), 2β16.
L IGATTI , J., BAUER , L., AND WALKER , D. 2009. Run-Time Enforcement of Nonsafety Policy. ACM Transactions on Information and System Security 12, 3, 1β41.
M ANSOURI -S AMANI , M. AND S LOMAN , M. 1992. Monitoring Distributed Systems (A Survey). Research
Report DOC92/93, Imperial College, London.
M ARGARIA , T., S ISTLA , A. P., S TEFFEN , B., AND Z UCK , L. D. 2005. Taming interface specifications. In
Proceedings of 16th International Conference on Concurrency Theory (CONCUR). 548β561.
NASU , M. AND H ONDA , N. 1968. Fuzzy events realized by finite probabilistic automata. Information and
Control 12, 4, 284β303.
ACM Journal Name, Vol. V, No. N, Month 20YY.
42
β
PAPOULIS , A. AND P ILLAI , S. U. 2002. Probability, Random Variables and Stochastic Processes. McGraw
Hill, New York.
PAXSON , V. 1997. Automated packet trace analysis of TCPimplementations. Computer Communication Review 27, 4.
PAZ , A. 1971. Introduction to Probabilistic Automata. Academic Press.
P ERRIN , D. AND P IN , J.-E. 2004. Infinite Words:Automata, Semigroups, Logic and Games. Elseiver.
P LATTNER , B. AND N IEVERGELT, J. 1981. Monitoring program execution: A survey. IEEE Computer, 76β93.
P NUELI , A. AND Z AKS , A. 2006. Psl model checking and run-time verification via testers. In Proceedings of
the 14th International Symposium on Formal Methods. 573β586.
R ABIN , M. O. 1963. Probabilitic automata. Information and Control 6, 3, 230β245.
R ANUM , M. J., L ANDFIELD , K., S TOLARCHUK , M., S IENKIEWICZ , M., L AMBETH , A., AND WALL , E.
1997. Implementing a generalized tool for network monitoring. In Proceedings of the Systems Administration
Conference. 1β8.
S ALOMAA , A. 1973. Formal Languages. Academic Press.
S AMMAPUN , U., L EE , I., S OKOLSKY, O., AND R EGEHR , J. 2007. Statistical runtime checking of probabilistic
properties. In Proceedings of the 7th International Workshop on Runtime Verification. 164β175.
S CHNEIDER , F. B. 2000. Enforceable security policies. ACM Transactions on Information Systems Security 3, 1,
30β50.
S CHROEDER , B. A. 1995. On-line monitoring: A Tutorial. IEEE Computer, 72β78.
S ISTLA , A. P. 1985. On characterization of safety and liveness properties in temporal logic. In Proceedings of
the ACM Symposium on Principle of Distributed Computing.
S ISTLA , A. P. AND S RINIVAS , A. R. 2008. Monitoring temporal properties of stochastic systems. In Proceedings of International Conference on Verification, Model Checking and Abstract Interpretation. Lecture Notes
in Computer Science, vol. 4905. Springer, 294β308.
S ZELEPCSENYI , R. 1987. The method of forcing for nondeterministic automata. Bulletin of the EATCS 33,
96β100.
T HOMAS , W. 1990. Automata on infinite objects. In Handbook of Theoretical Computer Science. Vol. B.
133β192.
VARDI , M. 1985. Automatic verification of probabilistic concurrent systems. In 26th annual Symposium on
Foundations of Computer Science. IEEE Computer Society Press, 327β338.
V ISWANATHAN , M. 2000. Foundations for the run-time analysis of software systems. Ph.D. thesis, University
of Pennsylvania.