
J Ambient Intell Human Comput
DOI 10.1007/s12652-012-0137-8
ORIGINAL RESEARCH
An inference engine toolkit for computing with words
Elham S. Khorasani • Purvag Patel • Shahram Rahimi • Daniel Houle
Received: 25 December 2011 / Accepted: 11 May 2012
© Springer-Verlag 2012
Abstract Computing with Words is an emerging paradigm
in knowledge representation and information processing. It
provides a mathematical model to represent the meaning of
imprecise words and phrases in natural language and introduces advanced techniques to perform reasoning on inexact
knowledge. Since its introduction, there have been many studies on computing with words, but mostly from a theoretical point of view; the paradigm still lacks sufficient support on the software side. This paper is an attempt to
fill this gap by presenting an enhanced inference engine
toolkit for supporting computing with words. The scope of
the presented toolkit, as opposed to many available fuzzy
logic tools, goes beyond simple fuzzy-if-then rules and
performs a chain of inferences on complex fuzzy propositions containing fuzzy arithmetic, fuzzy quantifiers, and
fuzzy probabilities. The toolkit may be appealing to
researchers, practitioners, and educators in knowledge-based
applications and soft computing as it implements a powerful
declarative language which allows users to express their
knowledge in a more natural and convenient way and performs a chain of reasoning on imprecise propositions.
Keywords Computing with words · Fuzzy logic · Knowledge-based applications · Expert systems
E. S. Khorasani (✉) · P. Patel · S. Rahimi · D. Houle
Southern Illinois University Carbondale,
Carbondale, IL 62901, USA
e-mail: [email protected]
P. Patel
e-mail: [email protected]
S. Rahimi
e-mail: [email protected]
D. Houle
e-mail: [email protected]
1 Introduction
The human mind has a limited capacity to process the huge amount of detailed information in its environment; thus, to compensate, the brain groups the information it perceives by similarity, proximity, or functionality and assigns to each group a name, or a "word", in natural language. This classification of information allows humans to perform complex tasks and make intelligent decisions in an inherently vague and imprecise environment without any measurements or computation. Inspired by this human
capability, Zadeh (1999) introduced the machinery of
computing with words (CW) as a tool to formulate human
reasoning with imprecise words drawn from natural language and argued that the addition of CW theory to the
existing tools gives rise to theories with enhanced capabilities for dealing with real-world problems and makes it possible to design systems with a higher level of machine
intelligence. For example, suppose that a knowledge base
contains the fact: ‘‘the life insurance premium is usually
more than average for obese people’’. Based on this
information, one would like to know the probability that a tall person who weighs about 210 lbs is paying more than average for his life insurance. Information of this kind is simply too imprecise for traditional reasoning
methods to deal with. CW provides a reasoning framework
to deal with inexact words and quantifiers such as:
‘‘obese’’, ‘‘usually’’, ‘‘tall’’, etc. More examples of this kind
are cited in (Zadeh 2005).
CW offers two principal components: (1) a language for representing the meaning of words, called
the Generalized Constraint Language (GCL), and (2) a set
of deduction rules for computing and reasoning with words
instead of numbers. CW is rooted in fuzzy logic; however,
it offers a much more powerful framework for fusion of
natural language propositions and computation with fuzzy
variables. Fuzzy if-then rules, used widely in machine control, are just a special case of representing information in
GCL but the scope of GCL goes far beyond simple if-then
rules and includes complex statements with fuzzy arithmetic operations, fuzzy quantifiers, fuzzy probability,
fuzzy relations, and their combinations.
Computing with words has been the subject of intensive study in the soft computing community over the last decade. Indeed, there are various interpretations of computing with words, resulting in the development of different tools and techniques that fit each particular interpretation (Mendel et al. 2010a). Some examples of related work regarding different views of CW are listed below:

• the articles that focused on capturing the uncertainty
underlying the meaning of words in CW. The most important work in this area has been promoted by Mendel and his team (Mendel and Wu 2010; Liu and Mendel 2008;
Wu and Mendel 2010). These articles mostly used interval
type-2 fuzzy sets to model the uncertainty embedded in the
membership function of words. Other related publications
include (Türksen 2002, 2007) and (Lawry 2001).
• the works which employed a symbolic approach to CW. In the symbolic approach, the computation is performed directly on the labels of words distributed on an ordinal scale. The prominent works in this area are (Yager
1995; Delgado et al. 1993; Herrera and Martinez 2000;
Herrera et al. 2008; Wang and Hao 2007). The
application of this approach to decision making and
information retrieval is studied in (Herrera et al. 2009;
Herrera-Viedma et al. 2009; Herrera-Viedma 2001,
2006; Herrera-Viedma and López-Herrera 2010;
López-Herrera et al. 2009; Morales-del Castillo et al.
2010; Porcel and Herrera-Viedma 2010).
• the works which centered on developing reasoning methods for CW. These include the articles that studied CW in the framework of approximate reasoning (Yager 1999,
2006, 2011), as well as other works which formulated
methodologies for systematic application of CW inference
rules to a linguistic knowledge base (Khorasani and
Rahimi 2010, 2011; Khorasani et al. 2009).
• the articles that focused on the application of CW in
database management systems. These articles (Zadrozny
and Kacprzyk 1996; Kacprzyk and Zadrozny 2001, 2010a,
b) used fuzzy quantifiers and fuzzy relations along with
ordered weighted average aggregation to handle fuzzy
queries of the form "select all the records for which most of the attributes are satisfied to some degree" and implemented an extension to MS Access for supporting such queries.
• the articles which studied the linguistic aspects of CW. These articles studied the relation of CW to cognitive sciences (Juliano 2001), computational linguistics (Gudwin and Gomide 1999), and ontology (Reformat and Ly 2009; Raskin and Taylor 2009).
• the articles that developed a formal model of computation for CW. These articles (Ying 2002; Cao and Ying
2007; Cao and Chen 2010) interpreted the word ‘‘computing’’ in CW as a formal computational model rather
than a ‘‘reasoning’’ framework. They extended the
classical methods of computation, such as finite-state and push-down automata and Turing machines, to accept "words" as fuzzy subsets over the input alphabet, hence implementing computation with words.
This paper is oriented towards the knowledge representation and reasoning aspects of CW and its main apparatus, GCL. More specifically, we seek to develop a software tool which enables users to express their knowledge in terms of GCL, perform reasoning, and pose queries to a GCL knowledge base. From this perspective, a big gap still remains between the theory and application of CW and, to our knowledge, no working inference engine has yet been implemented to automate reasoning in CW. The reasoning tools commonly used by the fuzzy community, such as FuzzyClips (Surhone et al. 2010), FuzzyJess (Orchard 2001), the Matlab Fuzzy Logic Toolbox, and Fool and Fox (Hartwig et al. 1996), are mostly devoted to implementing the Mamdani inference system. The fuzzy information in such systems may only be represented in the form of fuzzy if-then rules, and one cannot express, or perform a chain of reasoning on, more complex forms of fuzzy information including fuzzy relations, fuzzy arithmetic, fuzzy quantifiers, fuzzy probabilities, and their combinations.
In a recent paper (Mendel et al. 2010b), Zadeh distinguished two levels of complexity for CW: level 1 (or basic CW) and level 2 (or advanced CW). In level 1, the objects of
computation are numbers or words and it consists of simple
assignment or fuzzy if-then rule statements, such as: ‘‘x = 5’’,
‘‘x is small’’, or ‘‘if x is small then y is big’’. Current fuzzy
logic toolboxes mostly lie in this category. Level 2 consists of
more complex assignments or statements where the
objects of computation may be propositions as well as words
and numbers, for example statements with fuzzy quantifiers
and fuzzy probabilities. For ambitious applications of CW,
such as advanced question/answering systems and natural
language understanding, there is a need for a tool to support
not only basic but also advanced CW knowledge representation and reasoning.
This paper is a first attempt to implement a working CW
inference engine toolkit to support basic and advanced CW
reasoning. A GCL declarative programming language is
developed to allow users to express their knowledge in
the form of generalized constraints and pose queries to a GCL
knowledge base. The tool can benefit researchers in the fuzzy community as it allows them to automatically perform a chain of reasoning on a complex fuzzy knowledge base. It can also provide an opportunity to add CW to the syllabus of courses in soft computing and AI, as it helps students explore the applications of CW. The CW inference engine is implemented in Java and utilizes the Jess engine for pattern matching. We are working to make the toolkit available for download at http://cwjess.cs.siu.edu/.
The rest of the article is organized as follows: Sect. 2 provides a short background on CW and GCL. Section 3 depicts the overall architecture of the CW inference engine toolkit. In Sect. 4 the syntax of a GCL program is defined. The fusion of GCL knowledge and the implementation of the CW inference rules are discussed in Sect. 5. A case study is provided in Sect. 6 to illustrate GCL programming and the working of the CW inference engine. Finally, conclusions and future directions are presented in the last section.
2 Preliminary: computing with words and the generalized constraint language

This section provides a very brief introduction to the generalized constraint language and the CW inference rules. For more detailed information and examples see (Zadeh 2006). The core of CW is to represent the meaning of a proposition in the form of a generalized constraint (GC). The idea is that the majority of propositions and phrases used in natural language can be viewed as imposing a constraint on the values of some linguistic variables, such as time, price, taste, age, relation, size, and appearance. For example, the sentence "most Indian foods are spicy" constrains two variables: (1) the taste of Indian food, and (2) the portion of Indian foods that are spicy. In general, a generalized constraint has the form:

GC : X isr R

where X is a linguistic (or constrained) variable whose values are constrained by the linguistic value R. A linguistic variable can take various forms: it can be a relation [such as (X, Y)], a crisp function of another variable [such as f(Y)], or another GC. The small r denotes the semantic modality of the constraint, that is, how X is related to R. Various modalities are characterized by Zadeh, among them:

• possibility (r = blank): where R denotes the possibility distribution of X, e.g., "X is large".
• identity (r = "="): where X and R are identical variables.
• fuzzy graph (r = "isfg"): where R is a fuzzy estimation of a function. This modality corresponds to a collection of fuzzy if-then rules that share the same variables in their premises and consequences.
• probability (r = "p"): where R is the fuzzy probability distribution of a fuzzy (or crisp) event, e.g., "(X is large) isp likely".

A collection of GCs together with a set of logical connectives (such as and, or, implication, and negation) and a set of inference rules forms the generalized constraint language (GCL). The inference rules regulate the propagation of GCs. Table 1 lists instances of GCL inference rules formulated by Zadeh. As shown in this table, each rule has a symbolic part and a computational part. The symbolic part shows the general abstract form (also called the protoform) of the GCs of the premises and the conclusion of the rule, while the computational part determines the computation that must be performed on the premises to arrive at the conclusion. The inference rules of CW are adopted and formalized from various fields, such as fuzzy probability, fuzzy control, fuzzy relations, fuzzy quantifiers, and fuzzy arithmetic.

3 CW inference engine toolkit
Figure 1 illustrates the overall scheme of the CW inference engine toolkit. The toolkit is implemented in Java, but the Jess Rete network is utilized to implement the symbolic part of the CW inference rules and to perform pattern matching.

The toolkit allows users to state their knowledge in terms of GCL facts and pose queries to a GCL knowledge base. A query has the general form "X is ?", seeking the value of the linguistic variable X. A set of GCL facts along with declarations and queries constitutes a GCL program. Declarations include introducing the linguistic variables, the fuzzy values associated with linguistic terms, fuzzy quantifiers, and the truth-value system. The facts and the queries are passed to the GCL compiler for lexical analysis and parsing according to the GCL grammar defined in the next section. The GCL compiler parses the facts, creates the corresponding Java objects, and adds them to the Rete network. When a fact is added to the Rete network, the CW inference rules whose left-hand sides match the facts are fired, and their right-hand sides are computed and placed into the fact base. After all the facts are processed, the queries are read and, for each query, the Rete network is searched to find the matching fact. If found, the answer to a query is returned in three forms: (1) a fuzzy set on the domain of the query variable, (2) a linguistic term that best describes the output fuzzy set, and (3) a numeric value which is the defuzzification of the output fuzzy set. In what follows we describe the syntax of a GCL program and its grammar. The implementation of the CW inference rules is discussed in the subsequent section.
4 Syntax of a GCL program
The GCL language, characterized by Zadeh, lacks a formal syntax and is described in a rather ad-hoc manner through examples. Consequently, before implementing a CW inference engine, one needs to define a formal grammar for GCL. In what follows, the syntax of GCL is defined by an EBNF grammar. A GCL program consists of three programming elements: declarations, facts, and queries:

⟨GCL Program⟩ ::= 'GCL' 'Program' ⟨Ident⟩ 'begin' [⟨Declarations⟩] [⟨Fact Base⟩] [⟨Query Base⟩] 'end' '.'

where ⟨Ident⟩ stands for an identifier (i.e., a sequence which begins with a letter and contains letters, numbers, and special characters, including underscore and dot).

Table 1 Instances of CW inference rules. u and v are the universes of discourse of the linguistic variables X and Y; ∧ and ∨ denote t-norm and t-conorm operations, respectively

Fig. 1 Overall sketch of the CW inference engine toolkit

4.1 Declarations
The declaration part defines the truth-value system as well as the linguistic variables, fuzzy quantifiers, and fuzzy probabilities used throughout the program. The truth-value system determines the t-norm, t-conorm, and negation operations (Galatos 2007). The most common truth-value systems are listed in Table 2; Zadeh's system is used by default if not specified otherwise.
Table 2 The most common truth-value systems. a and b are truth (membership) values

Truth-value system | T-norm | T-conorm | Negation
Łukasiewicz | max(0, a + b − 1) | min(1, a + b) | 1 − a
Gödel | min(a, b) | max(a, b) | 1 if a = 0, 0 if a > 0
Product | a·b | a + b − a·b | 1 if a = 0, 0 if a > 0
Zadeh | min(a, b) | max(a, b) | 1 − a
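As an illustration of Table 2, the following Python sketch (a reader's illustration, not part of the Java toolkit) implements each truth-value system as a triple of t-norm, t-conorm, and negation functions:

```python
# Each truth-value system from Table 2 as a (t-norm, t-conorm, negation) triple.
TRUTH_SYSTEMS = {
    "lukasiewicz": (lambda a, b: max(0.0, a + b - 1.0),
                    lambda a, b: min(1.0, a + b),
                    lambda a: 1.0 - a),
    "godel":       (lambda a, b: min(a, b),
                    lambda a, b: max(a, b),
                    lambda a: 1.0 if a == 0.0 else 0.0),
    "product":     (lambda a, b: a * b,
                    lambda a, b: a + b - a * b,
                    lambda a: 1.0 if a == 0.0 else 0.0),
    "zadeh":       (lambda a, b: min(a, b),   # the default system
                    lambda a, b: max(a, b),
                    lambda a: 1.0 - a),
}

tnorm, tconorm, neg = TRUTH_SYSTEMS["zadeh"]
print(tnorm(0.4, 0.7), tconorm(0.4, 0.7), neg(0.4))  # 0.4 0.7 0.6
```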
The EBNF grammar of the declaration part is as follows. Two types of linguistic variables are considered: atomic and composite. An atomic linguistic variable is specified by a name, an optional unit, a range of valid values, and an optional set of linguistic terms. Each linguistic term is defined by its name, the type of its membership function, and a set of numbers which form the parameters of the membership function. The membership function may be of a standard type, including triangle, trapezoid, Gaussian, Pi, S, Z, left-shoulder, right-shoulder, gbell, sigmoid, and crisp interval, or a custom type specified by a set of points.
Users may not define two linguistic variables with the same name; however, as the grammar suggests, it is possible to assign multiple names to the same linguistic variable. This is used when we wish to have multiple atomic linguistic variables with the same range, unit, and terms, but different names. The number of parameters must agree with the type of the membership function, and each value must be within the defined range. Violation of either condition results in a syntax error and termination of the program. The following code segment declares a linguistic variable "oil_price" with three terms "cheap", "average", and "expensive".
A composite linguistic variable denotes a fuzzy relation
between two or more linguistic variables and is defined by
its name, the atomic linguistic variables of which it is
composed, and an optional set of terms. To reduce the
complexity of computation, the membership function of
terms is assumed to be discrete, i.e. it is specified by a set
of discrete points. Composite linguistic variables with
continuous membership functions will be included in the
future versions of the inference toolkit.
The following grammar specifies the declaration of a composite linguistic variable. The non-terminal ⟨Atomic LV⟩ specifies the name of an atomic linguistic variable. At the syntax level it is just an identifier, but at the semantics level it must be the name of an atomic linguistic variable which has already been declared.
The following code segment defines a composite linguistic variable ‘‘size’’ with a linguistic term ‘‘petite’’. The
variable ‘‘size’’ is composed of two atomic linguistic
variables ‘‘weight’’ and ‘‘height’’.
A crisp predicate P with arguments a1, …, an has the general form "P[a1, …, an]", where a1, …, an are constant arguments or object variables. An object variable begins with a $ sign followed by an identifier denoting the name of the variable. Object variables may be substituted by any constant in the inference process.

Fuzzy quantifiers and probabilities are defined by their names and membership functions, for example:
We are only concerned with relative fuzzy quantifiers (Zadeh 1983), i.e., the quantifiers that lie between the classical existential and universal quantifiers and denote a proportion, such as "most", "many", "about_half", "more_than_60%", etc. The membership function of a fuzzy quantifier or a fuzzy probability is, by default, defined on the unit interval. The following GCL code defines the fuzzy quantifier "about_half" as a Gaussian membership function with standard deviation σ = 0.1 and mean m = 0.5.
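For illustration, such a quantifier could be evaluated as in the following hypothetical Python sketch of the Gaussian membership function (not the toolkit's own code):

```python
# "about_half": a Gaussian membership function on [0, 1], mean 0.5, sigma 0.1.
import math

def about_half(u):
    if not 0.0 <= u <= 1.0:
        return 0.0
    return math.exp(-((u - 0.5) ** 2) / (2 * 0.1 ** 2))

print(round(about_half(0.5), 3), round(about_half(0.3), 3))  # 1.0 0.135
```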
4.2 GCL facts
The fact base consists of a set of generalized constraint
assertions. A generalized constraint assertion may be
atomic, quantified, probabilistic, or a fuzzy graph.
4.2.1 Atomic GC assertion
An atomic GC may be a crisp predicate or a generalized
constraint.
⟨Atomic GC⟩ ::= ⟨Crisp Predicate⟩ | ⟨GC⟩.
A GC has the general form "X[a1, …, an] is R", where X is a linguistic variable, a1, …, an are constant arguments or object variables, and R is a linguistic expression. A linguistic expression may have a simple form, i.e., a single numeric value or a linguistic term defined for X, or it may be a more complex expression consisting of logical connectives ("and", "or"), linguistic modifiers ("more_or_less", "a_little", "slightly", "somewhat", "relatively", "very", "extremely", "indeed"), and fuzzy arithmetic operations ("+", "−", "*", "/", "min", "max"). Such operations may be performed on numeric values, linguistic terms associated with X, or even other linguistic variables, as illustrated in the following grammar. The grammar is simplified, and details about the hierarchy of the operations are excluded to increase readability. The non-terminal symbols ⟨LV⟩ and ⟨LT⟩ represent the name of a linguistic variable (composite or atomic) and the name of a linguistic term, respectively. At the syntax level these non-terminal symbols are just identifiers, but at the semantics level they must refer to a valid linguistic variable and a linguistic term already defined in the declaration part.
The following GCL code segment shows instances of various forms of GC. The linguistic variables "weight", "height", and "size" and the associated linguistic terms "few_years", "few_kilos", "tall", "few_inches", "small", and "petite" must be previously defined in the declaration part, or the program produces error messages.
4.2.2 Quantified and probabilistic GC assertions

Quantifiers are variable-binding operators. A quantified GC assertion has the general form "m Q $x (A, B)" (read as: m Q A's are B), where m is an (optional) linguistic modifier, Q is a relative fuzzy quantifier, $x is an object variable, and A and B are sets of GC assertions containing the object variable $x. The quantifier Q binds the free occurrences of the object variable $x in A and B. In the rest of the paper we refer to A and B as the "minor" and "major" terms, respectively. Several atomic GCs in the major and minor terms are connected by "&". The nesting of fuzzy quantifiers is not allowed in the current version of the inference engine, and each quantifier may bind only one variable. In the following grammar, the non-terminal symbol ⟨FQ⟩ denotes a fuzzy quantifier which has already been defined in the declaration.

For example, the following GCL code segment asserts two quantified GCs stating "most diabetics are overweight or obese" and "very few young women are at high risk of developing breast cancer". As before, the linguistic variables "weight", "age", and "riskbc" and their corresponding linguistic terms "overweight", "obese", "young", and "high", as well as the fuzzy quantifiers "most" and "few", must be previously declared.

A probabilistic GC assertion has the general form "A isp m P", where A is an atomic GC, m is a linguistic modifier, and P is a fuzzy probability:

⟨Probabilistic GC⟩ ::= '(' ⟨Atomic GC⟩ ')' 'isp' ⟨Modifier⟩ ⟨FP⟩.

where the non-terminal ⟨FP⟩ represents a fuzzy probability defined in the declaration section. For example:

4.2.3 Fuzzygraph assertion
A fuzzy graph has traditionally been represented as a set of fuzzy implications (or fuzzy if-then rules), all sharing the same variables in their premises and conclusion, i.e.,

∑_{i=1..m} if (X_1 is A_{i1}, X_2 is A_{i2}, …, X_n is A_{in}) then Y is B_i

However, this representation is misleading as, in practice, a fuzzy graph is not equivalent to the conjunction of fuzzy implications but rather defines a functional dependency between the input linguistic variables (X_1, …, X_n) and the output linguistic variable Y, as explained in (Hájek 1998; Novák et al. 1999). In a GCL program this dependency is expressed via a set of fuzzy points (A_{i1}, …, A_{in}, B_i), i = 1, …, m, and is interpreted as the disjunction of such points. Hence the following syntax is considered for a fuzzy graph assertion:
where, as before, the non-terminal symbols ⟨LV⟩ and ⟨LT⟩ represent a previously declared linguistic variable and a corresponding linguistic term. The following GCL code segment asserts a fuzzygraph which relates a person's risk of developing breast cancer to her age and weight.
Note that the use of the object variable $x in the fuzzygraph makes the assertion valid for all substitutions of this variable. As
before, all the linguistic variables and their corresponding
linguistic terms must be previously declared.
4.3 GCL queries

After specifying a set of GCL facts, the user can write a set of queries asking for the value of a linguistic variable, a fuzzy quantifier, or a fuzzy probability. A linguistic value query has the general form "X[a1, …, an] is ?;", seeking the value of the linguistic variable X with arguments (a1, …, an). A fuzzy quantifier query has the general form "? $x (A, B)", seeking the value of a binary fuzzy quantifier, where $x is an object variable and A and B are atomic generalized constraints forming the minor and major terms of the quantifier. Informally, this query asks for the proportion of objects in A that are also in B, keeping in mind that the objects can have partial memberships in A or B. A fuzzy probability query has the general form "(A isp ?)", seeking the fuzzy probability of an atomic generalized constraint A. The following GCL code segment shows examples of the various types of queries. These queries may be read as: "what is the risk that Mary develops breast cancer?", "how many young women with average weight have a low chance of developing breast cancer?", and "what is the probability that Mary is at high risk of developing breast cancer?".
5 Implementation of CW inference rules

Once a GCL program is compiled, the assertions in the fact base are translated into Java objects and added to the Jess Rete network. The Jess rule engine selects and executes the CW rules whose left-hand sides match the GCL assertions in the fact base, which, in turn, triggers fuzzy computations and updates the fact base. The cycle continues until no rules match the facts. We have implemented the computational part of all CW rules in Java and encoded their symbolic parts (or protoforms) as Jess production rules. In what follows we discuss the implementation of the CW inference rules and the circumstances under which they are fired.

5.1 Linguistic expressions and fuzzy arithmetic operations
Before discussing the CW inference rules, we need to describe the operations performed when asserting an atomic GC. As mentioned before, the value of a linguistic variable in a GC may be a complex linguistic expression consisting of a combination of logical connectives, linguistic modifiers, and fuzzy arithmetic operations. When the GCL compiler reads a GC assertion with a complex linguistic expression, it performs the operations, computes the resulting fuzzy value, and adds the fact to the Jess Rete network. The hierarchy of operations is: (1) parentheses, (2) modifiers, (3) arithmetic, and (4) logical operations. The logical operators ('and', 'or', and 'not') are implemented as t-norm, t-conorm, and negation operations according to the truth-value system defined in the declaration. If no truth-value system is specified, Zadeh's operations are used by default.
Various models exist for representing the meaning of fuzzy modifiers, such as (Di Lascio et al. 1996; Novák and Perfilieva 1999). The following are the standard built-in truth functions for the linguistic modifiers of Sect. 4.2.1. For any truth value a ∈ [0, 1]:
• very(a) = a²
• more_or_less(a) = a^(2/3)
• a_little(a) = a^(1/2)
• slightly(a) = a^(1/2)
• relatively(a) = a^(1/2)
• somewhat(a) = a^(1/3)
• extremely(a) = a³
• indeed(a) = 2a² if 0 ≤ a ≤ 0.5, and 1 − 2(1 − a)² if 0.5 < a ≤ 1
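The following Python sketch collects the modifier truth functions listed above and shows how one would be applied pointwise to a membership function (illustrative only; the toolkit implements these in Java, and the term "tall" below is a hypothetical example):

```python
# The built-in modifier truth functions, applied pointwise to membership degrees.
MODIFIERS = {
    "very":         lambda a: a ** 2,
    "more_or_less": lambda a: a ** (2 / 3),
    "a_little":     lambda a: a ** (1 / 2),
    "slightly":     lambda a: a ** (1 / 2),
    "relatively":   lambda a: a ** (1 / 2),
    "somewhat":     lambda a: a ** (1 / 3),
    "extremely":    lambda a: a ** 3,
    "indeed":       lambda a: 2 * a ** 2 if a <= 0.5 else 1 - 2 * (1 - a) ** 2,
}

def apply_modifier(name, membership, u):
    """Apply a modifier pointwise to a membership function at element u."""
    return MODIFIERS[name](membership(u))

tall = lambda h: max(0.0, min(1.0, (h - 160) / 30))  # hypothetical term "tall"
print(apply_modifier("very", tall, 178))              # 0.36
```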
An atomic GC with arithmetic operations activates the extension principle. The extension principle is used in theory to extend crisp functions, such as the arithmetic operations, to accept fuzzy arguments; in practice, however, the implementation of the extension principle is non-trivial and involves nonlinear optimization. Hence, approximation methods are usually used to obtain the membership function of the resulting fuzzy set. A common practice is to decompose the membership interval [0, 1] into a finite number of values (α_1, …, α_n) and, for each α_i, take the level set of all the operands. The level set, or α-cut, of a fuzzy value A is defined as follows:

A_{α_i} = { u | μ_A(u) ≥ α_i }

where u is an element in the domain of A. If A is a convex fuzzy set then, for each α_i, A_{α_i} is a single interval. The arithmetic operations on convex fuzzy sets may be performed on the α-cut intervals of the operands using interval arithmetic (Hanss 2010; Kaufmann and Gupta 1991). This gives the α-cut intervals of the output fuzzy set, and the membership function of the output (μ_O) is produced from its α-cut intervals as follows:

μ_O(u) = sup { α_i | u ∈ O_{α_i} }
This approach is efficiently implemented and provides a good approximation to the exact solution of the extension principle. For example, consider the following GC assertion, which states that the price of oil in January is about $3 more than about half of its price in December.
When the GCL compiler reads this line, it first checks for the declaration of the linguistic variable "oil_price" as well as the terms "about_3" and "about_half" and retrieves the fuzzy values defined for these terms. Second, it checks the Rete network for a fact containing the value of "oil_price[Dec]". Let us assume that this value is found to be "approximately_4". Next, the result of the fuzzy arithmetic expression "about_3 + approximately_4 * about_half" is computed using the α-cut method. The computation is illustrated in Fig. 2, where the linguistic terms "about_3", "approximately_4", and "about_half" are assumed to be declared as triangular membership functions with parameters "1, 3, 5", "3, 4, 5", and "0, 0.5, 1", respectively. Obviously, a higher resolution of α leads to a more accurate output. In the CW inference engine, the membership interval [0, 1] is decomposed into a thousand α values.
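The α-cut computation just described can be sketched as follows. This is an illustrative Python reconstruction, not the toolkit's code; the triangular parameters follow the Fig. 2 example, and reading the expression as about_3 + approximately_4 * about_half follows the stated meaning of the assertion:

```python
# Alpha-cut method for fuzzy arithmetic on convex (triangular) fuzzy numbers.
def tri_alpha_cut(a, b, c, alpha):
    """Alpha-cut interval of a triangular fuzzy number with parameters (a, b, c)."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def interval_add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def interval_mul(x, y):
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))

N = 1000  # the text states that [0, 1] is decomposed into a thousand alpha values
cuts = {}
for i in range(N + 1):
    alpha = i / N
    about_3 = tri_alpha_cut(1, 3, 5, alpha)
    approx_4 = tri_alpha_cut(3, 4, 5, alpha)
    about_half = tri_alpha_cut(0, 0.5, 1, alpha)
    cuts[alpha] = interval_add(about_3, interval_mul(approx_4, about_half))

def mu_output(u):
    """mu_O(u) = sup { alpha | u lies in the alpha-cut of the output }."""
    return max((a for a, (lo, hi) in cuts.items() if lo <= u <= hi), default=0.0)

print(mu_output(5.0))  # 1.0: the peak of the resulting fuzzy number is at 5
```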
The requirement that the operands of fuzzy arithmetic be convex is too strong an assumption. Indeed, it often happens in CWJess that the operands are obtained from other computations which do not preserve convexity. For instance, in the previous example, if the fact base included the GC "oil_price[Dec] is about_5 or approximately_4", then the value of "oil_price[Dec]" would be a non-convex fuzzy set obtained by taking the union of the fuzzy sets "about_5" and "approximately_4", as illustrated in Fig. 3. As can be seen in this figure, the α-cut of a non-convex fuzzy set may produce more than one interval for a given α. In this case, the interval operations are performed on all the intervals of each operand, which, in turn, results in multiple intervals for the output. If the resulting intervals for an α value overlap, we take the union of the overlapping intervals.
Fig. 2 An example of the α-cut implementation of the extension principle. The table shows the α-cuts of the operands and the output of the arithmetic expression

5.2 Fusion of facts and conjunction rules
When there are two pieces of information regarding the same linguistic variable, fuzzy quantifier, or fuzzy probability, the facts are combined in a conjunctive mode. Conjunctive operations are valid aggregation operations in the context of a GCL program, as all the facts are assumed to come from the same source (e.g., an expert) and, therefore, are equally reliable.

The conjunction rule is activated when there are two different atomic GCs in the fact base with the same linguistic variable and the same arguments. In this case, the two GCs are combined into a single GC whose value is the t-norm intersection of the linguistic values of the two GCs. The t-norm operation is by default the minimum operation, unless specified otherwise by the truth-value system in the declaration part. For example, if a GCL fact base contains the following GC assertions, then the two GCs are removed from the fact base and replaced with a new GC, "distance[Chicago, Carbondale] is C", where C is the fuzzy value obtained by taking the t-norm intersection of "near" and "about_300_miles".
Fig. 3 An example of the implementation of arithmetic operations on non-convex fuzzy values. The table shows the α-cuts of the operands and the output of the arithmetic expression
The quantifier conjunction rule is activated when there are two quantified GCs in the fact base with the same minor and major terms but different fuzzy quantifiers. For instance, the following two facts are combined and replaced with one fact containing the t-norm intersection of the fuzzy quantifiers "most" and "about_80%".
The probability conjunction rule is activated when there are two different facts in the fact base regarding the probability of the same event, as in the following example. The two facts in this example are replaced with a single fact containing the t-norm intersection of the fuzzy probabilities "very likely" and "probably".
The t-norm operation may result in a subnormal fuzzy value (i.e., a fuzzy set whose maximum membership value is less than one); in this case, the resulting fuzzy set is normalized before being added to the fact base. The conjunction rules have a higher priority than the other CW rules, as different constraints on the same variable, quantifier, or probability must be combined before being used as premises of other rules. It is worth noting that the current version of the inference engine does not support conflict detection and/or resolution. If the fact base contains inconsistent information, e.g., conflicting values for the same linguistic variable, then the conjunction might result in a fuzzy set with zero membership everywhere on its domain, which signals the conflict. Hence, users should ensure the consistency of the information they assert to the fact base.
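A minimal Python sketch of the conjunction rule on discretized fuzzy values follows; the domain and membership numbers for "near" and "about_300_miles" are hypothetical, and normalizing by the height of the combined set is one standard choice:

```python
# Combine two facts about the same variable pointwise with the t-norm (min by
# default), then normalize if the result is subnormal.
def conjunction(mu1, mu2, tnorm=min):
    combined = {u: tnorm(mu1[u], mu2[u]) for u in mu1}
    peak = max(combined.values())
    if peak == 0.0:
        raise ValueError("inconsistent facts: empty intersection")
    return {u: m / peak for u, m in combined.items()}  # normalize

near            = {250: 1.0, 275: 0.8, 300: 0.5, 325: 0.2, 350: 0.0}
about_300_miles = {250: 0.2, 275: 0.6, 300: 1.0, 325: 0.6, 350: 0.2}
print(conjunction(near, about_300_miles))
# -> {250: 0.33..., 275: 1.0, 300: 0.83..., 325: 0.33..., 350: 0.0}
```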
5.3 Compositional rule of inference and projection rule

The compositional rule of inference is activated when the fact base contains two GCs: one with an atomic linguistic variable X, and another with a composite linguistic variable Z, where Z is composed of X and some other linguistic variable Y. The max-min (or, more generally, max-t-norm) composition is then used to compute the value of Y, and a new GC carrying this information is added to the fact base. For example, consider the following code, where the linguistic variable "size" is composed of the atomic linguistic variables "weight" and "height", and the information regarding the size and height of Ellie is also
available in the fact base. The compositional rule of inference is activated and the weight of Ellie is computed as demonstrated in Fig. 4.
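A sketch of the max-min composition on hypothetical discretized sets (the domains and membership degrees below are invented for illustration):

```python
# Given the relation "size[Ellie] is petite" over (weight, height) and a fuzzy
# value for height[Ellie], compute mu_W(w) = max_h min(mu_petite(w,h), mu_H(h)).
weights = [45, 50, 55]       # kg, hypothetical domain
heights = [150, 155, 160]    # cm, hypothetical domain

petite = {(45, 150): 1.0, (45, 155): 0.8, (45, 160): 0.4,
          (50, 150): 0.8, (50, 155): 0.6, (50, 160): 0.3,
          (55, 150): 0.4, (55, 155): 0.3, (55, 160): 0.1}

height_ellie = {150: 0.2, 155: 1.0, 160: 0.5}

weight_ellie = {w: max(min(petite[(w, h)], height_ellie[h]) for h in heights)
                for w in weights}
print(weight_ellie)  # {45: 0.8, 50: 0.6, 55: 0.3}

# The projection rule, by contrast, ignores the height information entirely:
weight_proj = {w: max(petite[(w, h)] for h in heights) for w in weights}
```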
The projection rule computes the value of an atomic linguistic variable X from the value of a composite linguistic variable which is composed of X and some other linguistic variables. The projection rule is less informative than the compositional rule of inference. The conflict resolution strategy defined for the CW inference engine is based on the degree of specificity: if the conditions of two or more rules are satisfied and the rules assert or modify the values of the same linguistic variables in their right-hand sides, then the rule with the most specific left-hand side is fired. For instance, in the above example, the projection rule and the compositional rule of inference are both activated to give the value of "weight[Ellie]"; however, only the compositional rule of inference is fired to provide this value, as it uses more information in its left-hand side. If the fact base only includes the GC "size[Ellie] is petite" and there is no other information regarding weight[Ellie], then the projection rule is activated to compute the value of this variable, as illustrated in Fig. 5.
5.4 Fuzzygraph interpolation
The fuzzy graph interpolation rule is activated when there
is a fuzzygraph in the fact base along with a set of GCs
whose linguistic variables match the input variables of the fuzzy graph. In this case, Mamdani's inference method (which corresponds to max-min or, more generally, max-t-norm composition) is used to find the value of the output variable. For example, suppose that a fact base contains the following assertions. The fuzzygraph assertion relates a person's life insurance premium to their age and health condition, and the GC assertions carry information about Bob's age and his health condition. These assertions activate the fuzzygraph interpolation rule, and the Mamdani inference method is used to calculate Bob's life insurance premium, as illustrated in Fig. 6. The gray areas in the first and second columns show the membership functions of "about_55" and "very good" for the linguistic variables "age" and "health", respectively. First, the similarity of "about_55" to the input values for age, as well as the similarity of "very good" to the input values for "health_condition", is computed as the maximum degree of intersection of their corresponding fuzzy sets. Then the similarity value is used to crop the fuzzy set of the output value. The final value for "insurance_premium[Bob]" is the union of all the cropped values.

Fig. 4 The composition of size[Ellie] and height[Ellie] gives the value of weight[Ellie] using max-min composition

Fig. 5 The projection rule computes the value of weight[Ellie] by taking the maximum membership value of size[Ellie] for each weight value

Fig. 6 Example of the fuzzy graph interpolation rule. The gray areas in the first and second columns show the membership functions of "about_55" and "very good" for the linguistic variables "age" and "health", respectively
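The cropping-and-union procedure described above can be sketched as follows; the representation of fuzzy points and the example sets are assumptions made for illustration, not the toolkit's internals:

```python
# Mamdani-style fuzzygraph interpolation over shared discretized domains.
def height_of_intersection(mu_a, mu_b):
    """Maximum degree of intersection of two discretized fuzzy sets."""
    return max(min(mu_a[u], mu_b[u]) for u in mu_a)

def mamdani(fuzzy_points, inputs):
    """fuzzy_points: list of ({var: antecedent set}, consequent set) pairs.
    inputs: {var: fuzzy input set}. Returns the aggregated output set."""
    output = {}
    for antecedents, consequent in fuzzy_points:
        # Matching degree of the inputs with this fuzzy point (min over premises).
        w = min(height_of_intersection(antecedents[v], inputs[v])
                for v in antecedents)
        for u, m in consequent.items():
            output[u] = max(output.get(u, 0.0), min(w, m))  # crop, then union
    return output

# Hypothetical fuzzy point "if age is young then premium is low":
young = {20: 1.0, 30: 0.6, 40: 0.1}
low = {500: 1.0, 700: 0.5, 900: 0.1}
print(mamdani([({"age": young}, low)], {"age": {20: 0.3, 30: 1.0, 40: 0.4}}))
# -> {500: 0.6, 700: 0.5, 900: 0.1}
```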
It is worth noting that the representation of a fuzzy graph as a set of fuzzy points in a GCL program helps avoid a difficulty that arises in fuzzy expert systems when performing a chain of fuzzygraph interpolations. As pointed out in (Pan et al. 1998), fuzzy expert system shells such as FuzzyClips allow users to assert individual fuzzy rules about the same input/output variables. As a result, the output of one fuzzy rule may be used in a chain of inferences before being aggregated with other similar fuzzy rules with the same input/output variables. This problem is avoided altogether in the CW inference engine, as all the fuzzy points (or rules with the same input/output variables) are used together to activate the fuzzy graph interpolation rule, and hence the outputs are aggregated before being used as an input to another CW inference rule.
5.5 Fuzzy syllogism rules
Before proceeding to the fuzzy syllogism rules, some definitions are reviewed from (Zadeh 1983). A fuzzy quantifier Q is called monotone nondecreasing (nonincreasing) if and only if its membership function is monotone nondecreasing (nonincreasing). This means Q is nondecreasing iff

∀u_1, u_2 ∈ [0, 1] : u_1 ≤ u_2 → μ_Q(u_1) ≤ μ_Q(u_2)

and is nonincreasing iff

∀u_1, u_2 ∈ [0, 1] : u_1 ≤ u_2 → μ_Q(u_1) ≥ μ_Q(u_2)

The following properties hold for nondecreasing (nonincreasing) quantifiers:

• if Q is a nondecreasing fuzzy quantifier then (≥Q) = Q, read as "at least Q = Q";
• if Q is a nondecreasing fuzzy quantifier then "Q A's are B" implies "Q A's are B′", where B ⊆ B′;
• if Q is a nonincreasing fuzzy quantifier then (≤Q) = Q, read as "at most Q = Q";
• if Q is a nonincreasing fuzzy quantifier then "Q A's are B" implies "Q A's are B′", where B′ ⊆ B.
The product syllogism rule can be viewed as a fuzzy version of the classical chaining syllogism. For example, suppose that the fact base contains the following quantified GCs, stating that "about_30% of Americans are obese" and that "most obese Americans eat fast food". These facts activate the product syllogism rule which, in turn, adds the following facts to the fact base, stating that "most*about_30% of Americans are obese and eat fast food" and that "at least most*about_30% of Americans eat fast food". The fuzzy values "most*about_30%" and "at least most*about_30%" are computed using fuzzy arithmetic and the extension principle (Fig. 7).
The intersection syllogism rule is activated when there are two quantified GCs in the fact base with the same minor but different major terms. The right-hand side of this rule asserts a quantified GC whose major term is the intersection of the major terms of the premises. For example, the following two quantified GCs cause the intersection syllogism rule to activate and assert a fact in the fact base, where Q denotes the number of women who have cancer and drink frequently. Q is computed as follows:

max(0, few + about_20% − 1) ≤ Q ≤ min(few, about_20%)

The inference engine performs this computation using fuzzy arithmetic and the extension principle. The result is presented in Fig. 8.

The statistical syllogism is activated when a GC assertion in the fact base matches the minor premise of a quantified GC, as in the following example. The inference engine then asserts the following additional fact to the fact base:

6 A case study: auto insurance premium
This section presents a case study on implementing a small GCL program for computing an auto insurance premium based on imprecise information. The case study is intended to illustrate writing a GCL program and to examine the inference flow in the CW inference engine. The following information is taken from the Insurance Information Institute¹ and is slightly modified to fit the purpose of this study.

There are many factors that influence the price you pay for auto insurance. The average American driver spends about $850 a year for standard coverage (100/300/50). A person's premium may be higher or lower depending on the following factors:
• Age: In general, teenagers and drivers over about 75 years old are considered high risk; everybody else is considered low risk.
• Residency: The area a person lives in may affect his auto insurance premium. Residential areas with lower traffic congestion are considered lower risk. Most urban areas have heavy traffic.
• Price of the vehicle: The more expensive the vehicle, the more it costs to insure.
• Credit score: People with a better credit score benefit from a lower insurance rate.
• Commuting distance to work: People who drive a longer distance to work every day pay a higher insurance premium.
• Driving record: People with fewer traffic violations or accidents in their history are generally considered lower risk.

Fig. 7 The product syllogism. The fuzzy values "most", "about_30%", "most*about_30%", and "at least most*about_30%"

Fig. 8 The intersection syllogism. The fuzzy values "few", "about_20%", and Q, the result of the intersection syllogism

¹ http://www.iii.org
In addition, many insurance companies offer low-risk drivers up to about 30 % discount if they agree to install a snapshot device in their car to monitor their driving habits, and most low-risk drivers agree to have their driving habits monitored.
Now suppose that we have the following information about an individual, Sarah:

Sarah lives just outside of Chicago, drives a modest car, and commutes about 30 miles to work. She has never had an accident but has had a few minor traffic violations, which accumulated about 15 points on her driving record². Sarah's credit score is in the neighborhood of 700. We think Sarah is in her 50s, but we also know that she has a teenage son to whom she gave birth when she was young. As commonsense knowledge, we also know that urban areas such as Chicago generally have heavy traffic congestion.

Given this information, we are interested in estimating Sarah's insurance premium as well as her average insurance risk.
6.1 Constructing the GCL program
The first step in encoding all this knowledge into a GCL program is to identify and declare all the linguistic variables, fuzzy quantifiers, and fuzzy probabilities, as well as their corresponding linguistic terms. The next step is to identify the types of facts that need to be asserted into the fact base (i.e., a fuzzy graph, a generalized constraint, a crisp predicate, or a quantified or probabilistic GC). Finally, the queries should be identified and formulated for information retrieval. For this case study we choose Zadeh's standard truth-value system.
² The point values are based on the DMV point system in Illinois.
6.1.1 Declaring the linguistic variables and fuzzy quantifiers

The main linguistic variables in this example are "insurance_premium", "risk_factor", "overall_average_risk", "driving_record", "age", "traffic_congestion", "auto_price", "credit_score", "commute_distance", and "discount_rate"; all of them are atomic. The fuzzy quantifiers used are "many" and "most". One also needs to declare all the linguistic terms associated with the linguistic variables in the fact base. The following GCL code segment shows the declaration of the linguistic variables, terms, and fuzzy quantifiers.

6.1.2 Asserting facts and posing queries

To formulate our knowledge in terms of GCL facts, we first need to decide which type of fact best represents each piece of knowledge. The dependency of the risk factors on the variables "age", "traffic_congestion", "credit_score", "auto_price", "driving_record", and "commute_distance" is best modeled as a fuzzy graph. To represent the effect of the risk factors on the premium, it seems reasonable to first compute a person's overall average risk by taking the fuzzy arithmetic mean of all her risk factors, and then assume that her premium is increased (or decreased) by the amount that her overall average risk exceeds (or falls below) the average risk value. In other words, if a person's overall average risk is low (or high), then her insurance premium is less (or more) than about $850. In addition, the knowledge regarding the individual "Sarah" is modeled as atomic GCs. The following code segment shows the GCL fact base corresponding to the knowledge under study.
After defining all the linguistic variables and inputting the knowledge in terms of GCL facts, one can pose various queries to the system. The queries we are interested in for this case study are: how much would it cost Sarah to buy auto insurance, and what is her average risk? One can also ask more general questions, such as: how many safe (i.e., low-risk) drivers get about 30 % discount?

When the program is executed, the GCL compiler parses the facts, creates the corresponding Java objects, and adds them to the Jess Rete network. Then the right-hand sides of all CW rules activated by the facts are computed, and the derived facts are placed into the Rete network. In effect, the CW inference toolkit performs forward reasoning on GCL facts. Table 3 shows the Jess fact base and the activation of the rules in the agenda as each GCL fact is added to the program. Recall that the GCL formulas are added as Java objects (or shadow facts) to the Jess fact base.
As shown in the table, the facts regarding Sarah's driving_record, credit_score, commute_distance, auto_price, and residency traffic, along with the related fuzzy graph facts, activate the fuzzygraph interpolation rule which, in turn, computes the values of the corresponding risk factors. There are two pieces of information regarding Sarah's age in the fact base: one that explicitly states that Sarah is in her 50s, and another that computes Sarah's age as her son's age plus the age at which she gave birth to him. The latter activates the extension principle, and as a result Sarah's age is computed using fuzzy addition. The two separate values obtained for age[Sarah] are combined into a single value using the conjunction rule, and the result is then used in fuzzy graph interpolation to compute risk_factor[age, Sarah]. The values obtained for the different risk factors are used in the calculation of Sarah's overall_average_risk which, in turn, is used to calculate her final insurance premium. In addition, the two quantified GCs activate the product syllogism rule, which computes the number of safe drivers who agree to use a snapshot device in their car and receive about 30 % discount; since the resulting fuzzy value (most * many) is monotonic, this is equal to the number of safe drivers that get about_30% discount. The length of the longest inference chain in this case study is five, beginning with computing the value of age[Sarah] and ending with computing her insurance_premium.
Table 3 The Jess fact base and agenda as each GCL fact is added to the GCL program
The results of executing the above GCL program are shown in Fig. 9. The output displays the graph of the fuzzy value obtained for each query. Alongside each graph are also given the numeric value for the output (using the center-of-gravity defuzzification method) as well as the linguistic term which best describes the output. Such a linguistic term is chosen from the vocabulary of terms defined in the declaration part and is determined using the Jaccard similarity measure (Cross and Sudkamp 2002). Note that the results are consistent with a human evaluation of the situation: since Sarah's risk factors are mostly low, we would expect her overall average risk to be low as well and her insurance premium to be less than the average of $850, and the output of the GCL program in Fig. 9 confirms this intuition.
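The two post-processing steps mentioned above can be sketched as follows (an illustrative Python sketch with a discretized fuzzy-set representation; the toolkit's Java implementation may differ):

```python
# Center-of-gravity defuzzification and Jaccard-similarity label selection.
def center_of_gravity(mu):
    """mu: dict mapping domain element -> membership degree."""
    total = sum(mu.values())
    return sum(u * m for u, m in mu.items()) / total if total else None

def jaccard(mu_a, mu_b):
    """|A intersect B| / |A union B| with min/max for intersection/union."""
    universe = set(mu_a) | set(mu_b)
    inter = sum(min(mu_a.get(u, 0.0), mu_b.get(u, 0.0)) for u in universe)
    union = sum(max(mu_a.get(u, 0.0), mu_b.get(u, 0.0)) for u in universe)
    return inter / union if union else 0.0

def best_label(mu_out, vocabulary):
    """vocabulary: {term name: membership dict}; returns the closest term."""
    return max(vocabulary, key=lambda term: jaccard(mu_out, vocabulary[term]))
```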
7 Discussion and summary
This paper reports the implementation of an inference engine toolkit for supporting CW reasoning. A GCL language is developed for the inference engine to allow users to express their knowledge in the form of generalized constraints and pose queries to a GCL knowledge base. The scope of the inference engine, as opposed to many available fuzzy logic toolboxes, goes beyond simple fuzzy if-then rules by performing a chain of inferences on complex fuzzy propositions composed of fuzzy arithmetic operations, fuzzy quantifiers, and fuzzy probabilities.
The paper is a continuation of our earlier work (Khorasani et al. 2011), in which we used Jess syntax to express generalized constraints. The GCL language presented in this paper allows users to express their knowledge in a more convenient way and hides the complexity of the underlying Jess program. In addition, many more CW rules have been implemented and added to the inference engine.

Fig. 9 The output of the case study
For future directions of this work we will primarily focus on the fusion of possibilistic and probabilistic information, as well as the implementation of a probabilistic extension principle. For example, suppose that, in the case study of the previous section, we pose an additional query to the knowledge base asking for the probability that Sarah would be eligible for a 30 % discount. But the inference engine cannot provide an answer to this query. If the value obtained for overall_average_risk[Sarah] were exactly equal to "low", then the following statistical syllogism rule would have been activated to compute the inquired probability. However, this is not the case, as this value is close but not equal to "low". Hence, to compute the probability in question, we need to develop a modified version of the statistical syllogism rule that computes the probability when the premises only partially match, i.e.,

Q1 A's are B
X is A′
――――――――――――
(X is B) isp ?
Once this probability is obtained, another difficulty is to incorporate the probabilistic information about the discount rate into the calculation of the insurance_premium, i.e., to make the following inference. This inference is an instance of the probabilistic extension principle. Zadeh (2011), in a recent article, proposed to represent a probabilistic generalized constraint "(X is A) isp P" as an ordered pair of fuzzy numbers (A, P), referred to as a Z-number, where A is the possibility distribution of the linguistic variable X, and P is a restriction on the underlying probability distribution of X. He then outlined a methodology to perform arithmetic operations on Z-numbers, but this methodology, as it stands, is too complex to implement, and appropriate approximation methods need to be developed before it can be applied in practice.
References
Cao Y, Chen G (2010) A fuzzy petri-nets model for computing with
words. IEEE Trans Fuzzy Syst 18(3):486–499
Cao Y, Ying M (2007) Retraction and generalized extension of
computing with words. IEEE Trans Fuzzy Syst 15(6):1238–1250
Cross V, Sudkamp T (2002) Similarity and compatibility in fuzzy set
theory: assessment and applications. Studies in Fuzziness and
Soft Computing, vol 93. Springer, Berlin
Delgado M, Verdegay JL, Vila MA (1993) On aggregation operations
of linguistic labels. Int J Intell Syst 8(3):351–370
Di Lascio L, Gisolfi A, Loia V (1996) A new model for linguistic
modifiers. Int J Approx Reason 15(1):25–47
Galatos N (2007) Residuated lattices: an algebraic glimpse at
substructural logics. In: Studies in logic and the foundations of
mathematics, 1st edn. Elsevier, Amsterdam
Gudwin RR, Gomide F (1999) Object networks: a computational
framework to compute with words, chap 6. Studies in Fuzziness
and Soft Computing, vol 33. Springer, New York, pp 443–478
Hájek P (1998) Metamathematics of fuzzy logic. Kluwer, Dordrecht
Hanss M (2010) Applied fuzzy arithmetic: an introduction with
engineering applications, 1st edn. Springer, Berlin
Hartwig R, Labinsky C, Nordhoff S, Landorff B, Jensch P, Schwanke
J (1996) Free fuzzy logic system design tool: Fool. In:
Proceeding of 4th European congress on intelligent techniques
and soft computing (EUFIT 96), vol 3, pp 2274–2278
Herrera F, Martinez L (2000) A 2-tuple fuzzy linguistic representation
model for computing with words. IEEE Trans Fuzzy Syst
8(6):746–752
Herrera F, Herrera-Viedma E, Martinez L (2008) A fuzzy linguistic
methodology to deal with unbalanced linguistic term sets. IEEE
Trans Fuzzy Syst 16(2):354–370
Herrera F, Alonso S, Chiclana F, Herrera-Viedma E (2009) Computing with words in decision making: foundations, trends and
prospects. Fuzzy Optimization and Decision Making
8(4):337–364
Herrera-Viedma E (2001) Modeling the retrieval process for an
information retrieval system using an ordinal fuzzy linguistic
approach. J Am Soc Inf Sci Technol 52(6):460–475
Herrera-Viedma E, López-Herrera A (2010) A review on information
accessing systems based on fuzzy linguistic modelling. Int J
Comput Intell Syst 3(4):420–437
Herrera-Viedma E, Pasi G, Lopez-Herrera AG, Porcel C (2006)
Evaluating the information quality of web sites: a methodology
based on fuzzy computing with words: special topic section on
soft approaches to information retrieval and information access
on the web. J Am Soc Inf Sci Technol 57(4):538–549
Herrera-Viedma E, López-Herrera AG, Alonso S, Moreno JM,
Cabrerizo FJ, Porcel C (2009) A computer-supported learning
system to help teachers to teach fuzzy information retrieval
systems. Inf Retr 12(2):179–200
Juliano B (2001) Cognitive sciences and computing with words, chap 7.
Wiley Series on Intelligent Systems. Wiley, New York, pp 235–250
Kacprzyk J, Zadrozny S (2001) Computing with words in intelligent
database querying: standalone and internet-based applications.
Inf Sci 134(1-4):71–109
Kacprzyk J, Zadrozny S (2010a) Computing with words is an implementable paradigm: fuzzy queries, linguistic data summaries, and
natural-language generation. IEEE Trans Fuzzy Syst 18(3):461–472
Kacprzyk J, Zadrozny S (2010b) Modern data-driven decision support
systems: the role of computing with words and computational
linguistics. Int J General Syst 39(4):379–393
Kaufmann A, Gupta M (1991) Introduction to fuzzy arithmetic:
theory and applications. Electrical-Computer Science and Engineering Series, Van Nostrand Reinhold Co.
Khorasani E, Rahimi S (2010) Towards an automated reasoning for
computing with words. In: 2010 IEEE international conference
on fuzzy systems (FUZZ), pp 1–8
Khorasani E, Rahimi S (2011) Constraint propagation tree for the
realization of a cw question answering system. Int J Comput
Intell Theory Pract 6(2):75–87
Khorasani ES, Rahimi S, Gupta B (2009) A reasoning methodology
for cw-based question answering systems. In: Proceedings of the
8th international workshop on fuzzy logic and applications,
WILF’09. Springer, Berlin, pp 328–335
Khorasani ES, Rahimi S, Patel P, Houle D (2011) Cwjess: an expert
system shell for computing with words. In: 2011 IEEE international conference on information reuse and integration (IRI),
pp 396–399. doi:10.1109/IRI.2011.6009580
Lawry J (2001) An alternative approach to computing with words. Int
J Uncertain Fuzziness Knowl Based Syst 9(Supplement):3–16
Liu F, Mendel JM (2008) Encoding words into interval type-2 fuzzy
sets using an interval approach. IEEE Trans Fuzzy Syst
16(6):1503–1521
López-Herrera AG, Herrera-Viedma E, Herrera F (2009) Applying
multi-objective evolutionary algorithms to the automatic learning
of extended boolean queries in fuzzy ordinal linguistic information
retrieval systems. Fuzzy Sets Syst 160(15):2192–2205
Mendel J, Wu D (2010) Perceptual computing: aiding people in
making subjective judgments. Wiley, New York
Mendel J, Zadeh L, Trillas E, Yager R, Lawry J, Hagras H,
Guadarrama S (2010a) What computing with words means to me
(discussion forum). IEEE Comput Intell Mag 5(1):20–26
Mendel JM, Lawry J, Zadeh LA (2010b) Foreword to the special
section on computing with words. IEEE Trans Fuzzy Syst
18(3):437–440. doi:10.1109/TFUZZ.2010.2047961
Morales-del Castillo JM, Peis E, Ruiz AA, Herrera-Viedma E (2010)
Recommending biomedical resources: a fuzzy linguistic
approach based on semantic web. Int J Intell Syst 25(12):
1143–1157. doi:10.1002/int.20447
Novák V, Perfilieva I (1999) Evaluating linguistic expressions and
functional fuzzy theories in fuzzy logic. In: Zadeh L, Kacprzyk J
(eds) Computing with words in information/intelligent systems
1. Foundations. Physica Verlag, New York
Novák V, Perfilieva I, Mockor J (1999) Mathematical principles of
fuzzy logic. The Kluwer international series in engineering and
computer science. Kluwer Academic, Boston
Orchard R (2001) Fuzzy reasoning in jess: the fuzzy j toolkit and
fuzzy jess. In: Proceedings of the third international conference
on enterprise information systems (ICEIS 2001)
Pan J, Desouza GN, Kak AC (1998) Fuzzyshell: a large-scale expert
system shell using fuzzy logic for uncertainty reasoning. IEEE
Trans Fuzzy Syst 6:563–581
Porcel C, Herrera-Viedma E (2010) Dealing with incomplete
information in a fuzzy linguistic recommender system to
disseminate information in university digital libraries. Knowl
Based Syst 23(1):32–39
Raskin V, Taylor JM (2009) The (not so) unbearable fuzziness of
natural language: the ontological semantic way of computing with words. In: Annual Meeting of the North American
Fuzzy Information Processing Society, 2009. NAFIPS 2009,
pp 1–6
Reformat M, Ly C (2009) Ontological approach to development of
computing with words based systems. Int J Approx Reason
50(1):72–91
Surhone L, Tennoe M, Henssonow S (2010) Fuzzyclips. VDM Verlag
Dr. Mueller AG & Co. http://books.google.com/books?id=4Y8
fkgAACAAJ
Türksen IB (2002) Type 2 representation and reasoning for cww.
Fuzzy Sets Syst 127(1):17–36
Türksen IB (2007) Meta-linguistic axioms as a foundation for
computing with words. Inf Sci 177(2):332–359
Wang J, Hao J (2007) An approach to computing with words based on
canonical characteristic values of linguistic labels. IEEE Trans
Fuzzy Syst 15(4):593–604
Wu D, Mendel JM (2010) Computing with words for hierarchical
decision making applied to evaluating a weapon system. IEEE
Trans Fuzzy Syst 18(3):441–460
Yager R (1995) An approach to ordinal decision making. Int J Approx
Reason 12(3–4):237–261
Yager RR (1999) Approximate reasoning as a basis for computing
with words. Studies in Fuzziness and Soft Computing. Springer,
Berlin, pp 50–77
Yager RR (2006) Knowledge trees and protoforms in question-answering systems: special topic section on soft approaches to
information retrieval and information access on the web. J Am
Soc Inf Sci Technol 57(4):550–563
Yager RR (2011) Reasoning with doubly uncertain soft constraints.
Int J Approx Reasoning 52(4):554–561
Ying M (2002) A formal model of computing with words. IEEE Trans
Fuzzy Syst 10(5):640–652
Zadeh LA (1983) A computational approach to fuzzy quantifiers in
natural languages. Comput Math Appl 9(1):149–184. doi:
10.1016/0898-1221(83)90013-5
Zadeh LA (1999) Fuzzy logic = computing with words. IEEE Trans
Fuzzy Syst 4(2):103–111
Zadeh LA (2005) Toward a generalized theory of uncertainty (gtu):
an outline. Inf Sci Inf Comput Sci 172(1–2):1–40
Zadeh LA (2006) From search engines to question answering systems.
The problems of world knowledge, relevance, deduction and
precisiation. Elsevier, Amsterdam
Zadeh LA (2011) A note on z-numbers. Inf Sci 181:2923–2932
Zadrozny S, Kacprzyk J (1996) Fquery for access: towards human
consistent querying user interface. In: Proceedings of the 1996
ACM symposium on applied computing, SAC ’96. ACM, New
York, pp 532–536