
From Feature Models to Decision Models and Back Again: An
Analysis Based on Formal Transformations
Sascha El-Sharkawy, Stephan Dederichs, Klaus Schmid
Software Systems Engineering, Institute of Computer Science,
University of Hildesheim, Germany
{elscha, dederichs, schmid}@sse.uni-hildesheim.de
Please cite this publication as follows:
Sascha El-Sharkawy, Stephan Dederichs, and Klaus Schmid. “From Feature Models to Decision Models and Back Again: An Analysis Based on Formal Transformations”. In: Proceedings of the 16th
International Software Product Line Conference (SPLC'12). Vol. 1. ACM, 2012, pp. 126–135.
doi: 10.1145/2362536.2362555.
The corresponding BibTeX entry is:
@INPROCEEDINGS{El-SharkawyDederichsSchmid12a,
  author = {Sascha El-Sharkawy and Stephan Dederichs and Klaus Schmid},
  title = {From Feature Models to Decision Models and Back Again:
           An Analysis Based on Formal Transformations},
  booktitle = {Proceedings of the 16th International Software Product
               Line Conference (SPLC'12)},
  publisher = {ACM},
  year = {2012},
  volume = {1},
  pages = {126--135},
  doi = {10.1145/2362536.2362555}
}
© ACM, 2012. This is the authors' version of the work. It is posted here by permission of the ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 16th International Software Product Line Conference (SPLC'12), doi: 10.1145/2362536.2362555.
From Feature Models to Decision Models and Back Again
An Analysis Based on Formal Transformations
Sascha El-Sharkawy, Stephan Dederichs, Klaus Schmid
University of Hildesheim, Institute of Computer Science,
Marienburger Platz 22, 31141 Hildesheim, Germany
+49 (0)5121 – 833 {768, 767, 761}
{elscha, dederichs, schmid}@sse.uni-hildesheim.de
ABSTRACT

Software Product Line Engineering (SPLE) has been established to minimize costs and efforts, while maximizing the quality of products in a family of software products [5]. In SPLE, variability models play a crucial role, as they are used to model all valid products in the product line. Two main paradigms of variability modeling exist: Feature Modeling and Decision Modeling. Both families consist of a plethora of different approaches and extensions to enable appropriate expressiveness. Within each of the families significant research exists on comparing the different approaches, but so far very little research has addressed comparisons across the different families.

In this paper, we introduce a formal semantics for both modeling paradigms. Since no formal definition of basic decision modeling exists, we develop such a formal semantics, called Basic Decision Modeling. Based on these semantics, formal transformations between the two modeling concepts are offered. These transformations enable us to show constructively the relation between the two paradigms. More precisely, we will show that the two approaches are in general not equivalent (i.e., cannot be transformed into each other), but we also show that with a more powerful constraint language they are actually equivalent. This enables the transfer of solutions for already solved problems to the other family of approaches and the creation of an integrated framework. We expect that this work will also provide a conceptually sound and formal base for extending feature modeling tools to decision models and vice versa. The EASy-Producer tool already supports both views [1].

Categories and Subject Descriptors

D.2.9 [Software Engineering]: Management—Software configuration management; D.2.13 [Software Engineering]: Reusable Software—Reuse models

General Terms

Algorithms, Management, Theory

Keywords

Variability Modeling, Decision Modeling, Feature Diagrams, Software Product Line Engineering

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SPLC '12 September 02 – 07 2012, Salvador, Brazil.
Copyright 2012 ACM 978-1-4503-1094-9/12/09 ...$15.00.

1. INTRODUCTION

In Software Product Line Engineering, variability modeling plays a crucial role. Over the years, a couple of different modeling paradigms with a plethora of different approaches have been proposed. However, little attention has been paid to comparing these concepts. In this paper, we compare the capabilities and expressiveness of basic feature modeling with basic decision modeling.

In this paper, we also present a formalization of basic decision modeling and show that in combination with a powerful constraint language both approaches are equivalent, while in their very basic forms they are not. These results can be used to transfer existing research results between the two paradigms.

This paper is structured as follows: In Section 2, we discuss related work in the area of formal semantics for variability modeling and comparisons of different approaches. Section 3 defines formal semantics for minimal feature and decision modeling. In Sections 4 and 5, these formal definitions are used to describe formal transformations from decision models to feature models and vice versa. The outcome of these transformations is discussed in Section 6.

2. RELATED WORK
Variability modeling is a core part of product line engineering. Thus, it has received considerable attention over
the years. In this paper, our focus is on feature modeling
and decision modeling as the two main families of variability modeling approaches; thus we focus on these exclusively. So far, no other work has dealt with a formal comparison or transformation between the two categories of approaches. We identify two main categories of papers that
are relevant to our work: comparisons of approaches and
formal work on variability modeling concepts. The remaining section is organized according to these two categories.
Comparison of approaches. Schobbens et al. [16,
17] provide an overview of different feature modeling approaches. Based on this, they provide a generalized syntax and provide a common semantics, called free feature
diagrams. Their survey provides a good analysis of the
commonalities and differences of the various approaches.
Classen et al. [4] developed a formal definition of "feature" based on a comparison of existing work. Altogether, these works provide a good overview of different feature modeling approaches.
Schmid et al. [15] present a comparative analysis of representative decision modeling approaches. Based on this analysis, they identify commonalities and variabilities of decision
modeling approaches. This work provides a comprehensive
overview of existing decision modeling approaches and inspired the formal semantics of Basic Decision Modeling in
Section 3.2. It was also the basis to identify the KobrA
approach as a minimal form of decision modeling.
A comparison of feature modeling with decision modeling already exists: Czarnecki et al. [6] compare these two paradigms based on a number of different aspects. The considered aspects cover various topics like historical origins and rationales, syntactic and semantic richness, and tool support. However, this comparison is completely informal, as opposed to the work we present here. Thus, it mostly focused on improving the understanding of the considered approaches. While we restrict ourselves to the modeling aspects of the two approaches, we provide a formal comparison of commonalities and differences of the modeling paradigms.
Formal work. The semantics of feature models in general as well as the semantics of individual approaches are
often discussed in literature. Schobbens et al. [17] define
a semantics for feature models. They compare the modeling concepts of different approaches like FODA [11], FORM
[12] or FeatureRSEB [10] and generalize the various syntaxes
through a formal definition, called Free Feature Diagrams.
Czarnecki et al. [7] show a formal option to perform staged
configuration through feature models. In addition to this,
they show how cardinality-based feature models can be specified through a precise and formal semantics. We restrict our
work to basic feature modeling without extensions like cardinalities and compare this with a basic version of decision
modeling.
Czarnecki and Wasowski [8] describe an automated and effective method for turning feature models into propositional
formulas and back again. This leads on the one hand to new
possibilities for applications in the area of reverse engineering as well as refactoring of feature models. On the other
hand this facilitates statements about the expressiveness of
feature models, which help to understand the semantics of
feature models. We use some results from this work in the
context of our transformations of feature constraints.
A lot of work exists that discusses feature model verification [3, 2, 19, 20]. The purpose of this work is to detect
over-constrained feature models or dead features in order to
avoid them. Based on our work, it is possible to apply this
work also in the context of decision modeling.
In contrast to feature models, the semantics of decision models has received only little discussion. Formal [9] and semi-formal [14] definitions exist for specific decision-oriented approaches. However, this work refers only to specific approaches and does not provide a generalization of existing approaches.
In summary, several works on feature and decision modeling have been published. Despite this, so far no formal comparison of feature modeling with decision modeling has been done. This is the main contribution offered in this paper.
3. VARIABILITY MODELS
In this section we introduce formal definitions for feature models and decision models. As far as possible, we use existing work as a basis. In Sections 4 and 5, these formal definitions are used to describe formal transformations from decision models to feature models and vice versa.
3.1 Free Feature Diagrams
Schobbens et al. [17] introduced Free Feature Diagrams
(FFD) as a formal definition for describing and comparing the syntax and semantics of different Feature Diagrams
(FDs) (e.g. FODA, FORM or FeatureRSEB). Below we will
use this formal definition for our formal transformations as
a solid basis to start from.
FFDs are used to represent the different approaches of FDs. Schobbens et al. define FDs as directed graphs which have exactly one root node. The root node describes the concept of the FD and, in some feature modeling approaches such as FODA, cannot be deselected. Schobbens et al. do not give any information about the selectability of the root node, so we will assume that the root node cannot be deselected. All other nodes of the graph represent features, which can be restricted through graphical or textual constraints between two different features. The edges in the graph represent the relationships between individual features.
Schobbens et al. introduce node types and name only examples for them. These examples are root, compound, prim, and, or, xor, opt, vp(i,j). Compound nodes are intermediate nodes which can be used for decomposing features. Primitive nodes are leaf nodes which directly influence the product. Below we will subsume primitive and compound nodes under primitive nodes, because the difference does not influence our transformations. We call nodes of the type and, or, xor, opt, or vp(i,j) operator nodes. These nodes represent Boolean functions (operators). Operator nodes are always between two nodes of type root or prim. All operator nodes have an arity index s (e.g., and3). Below, if no arity is specified, we speak of the superset containing all individual operators of the same type, e.g., and = {and1, and2, . . .}. We will not consider the operator nodes vp(i,j) and or, because they are not present in all basic feature diagrams.
The edges of the graph can be further divided into decomposition edges and constraint edges, where both kinds of edges connect exactly two nodes. Decomposition edges are used to connect an operator node with another node, whereas constraint edges represent graphical constraints. Graphical constraints are binary Boolean operators; the authors list only requires (rq) and mutual exclusion (mx) as examples.
Definition 1. Based on [17] we define Free Feature Diagrams (FFD) as a 7-tuple:
F F D = (N, P, r, λ, DE, CE, Φ)
The individual components of a FFD are:
• N is the set of all nodes in the graph. We further
introduce N as the set of all possible nodes in FFDs,
with N ⊆ N.
• P ⊆ N is the set of all primitive nodes in the graph.
• r ∈ N is the root node in the graph, also called concept.
The root node is unique in the graph.
• λ : N → NT labels each node with an operator from NT = and ∪ opt ∪ xor ∪ {prim, root}. Schobbens et al. [17] did not completely define NT. We thus extend the original definition by the two values prim and root in order to ensure that λ is a total function on N. We did not consider the node type compound, because we do not further distinguish between primitive and compound nodes in our transformations. Further, we do not include vp and or, as there are many basic feature modeling approaches which do not support this kind of feature decomposition [17].
• DE ⊆ N × N is the set of decomposition edges in the graph, where (n, n′) ∈ DE means n → n′.

• CE ⊆ N × {rq, mx} × N is the set of constraint edges in the graph, which represent the graphical constraints. Requires (rq) and mutex (mx) are binary Boolean operators. Graphical constraints are not available in all variants of FDs. We do not distinguish between different representations of constraints, as this also does not have a semantic impact on our transformations.

• Φ are textual constraints. While [17] does not place any stronger restrictions on Boolean formulas and names only rq and mx as examples for textual constraints, we also consider arbitrary propositional formulas as textual constraints, like [8]. Without loss of generality, we assume that the formulas are in conjunctive normal form. This will simplify the transformations, as we can draw benefits from the fixed structure.

Figure 1: Example of a free feature diagram.

Figure 1 shows an example of a FFD. At the top of the FFD is the root node r, bearing two operator nodes opt1 and and3. The index of the operator node opt1 indicates that the node refers to exactly one feature; opt means that A is an optional feature. In addition, the optional feature A contains the operator node and1. As a consequence, the mandatory feature E is dependent on feature A. The operator node and3 on the first level contains the mandatory features B, C and D. B and D are primitive features. Feature C is further divided and builds up a feature group. It contains the operator node xor3. This operator node indicates that exactly one of the features F, G and H must be chosen. Features I and J have the same behavior as A and E, respectively.

3.2 Basic Decision Modeling

Schmid et al. compared and analyzed representative decision modeling approaches in [15]. This work provides a good overview of the capabilities of well-established decision modeling approaches. We use the outcome of this informal comparison as a basis for a formal definition of a decision modeling approach, called Basic Decision Modeling (BDM). In Section 4, we introduce formal transformations, which are able to transform arbitrary decision models into free feature diagrams.

Considering only the modeling aspects of [15], each decision model DM ∈ BDM consists only of a subset of all possible decisions D. Further, a subset of all possible constraints C defines interactions among the decisions. In general, these constraints are a subset of propositional logic. This is detailed in Definition 5. Decision modeling approaches, starting with the very earliest, Synthesis [18], also support hierarchies. However, they are not a necessary part of the paradigm as in feature modeling. Thus, we decided not to take them into account in decision modeling, in order to use a more simplistic approach to decision modeling as a basis. In feature modeling, it is not possible to ignore hierarchies, thus we include them in our analysis of feature modeling.
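As an illustration only (not part of the original formalism), the 7-tuple of Definition 1 can be written down as a small data structure. The sketch below encodes a fragment of Figure 1; the names (FFD, lam, the node identifiers) are our own choices, and for simplicity a grouped feature carries its operator label directly via λ instead of being modeled as a separate operator node.

```python
from typing import NamedTuple

class FFD(NamedTuple):
    """Free Feature Diagram as the 7-tuple (N, P, r, lambda, DE, CE, Phi)."""
    N: frozenset    # all nodes of the graph
    P: frozenset    # primitive nodes, P <= N
    r: str          # unique root node (the concept)
    lam: dict       # labeling N -> NT, e.g. 'root', 'prim', 'opt1', 'xor3'
    DE: frozenset   # decomposition edges (n, n')
    CE: frozenset   # graphical constraint edges (n, 'rq' or 'mx', n')
    Phi: frozenset  # textual constraints (propositional formulas in CNF)

# Fragment of Figure 1: root r with features B, C, D; C groups F, G, H via xor3.
fragment = FFD(
    N=frozenset({'r', 'B', 'C', 'D', 'F', 'G', 'H'}),
    P=frozenset({'B', 'D', 'F', 'G', 'H'}),
    r='r',
    lam={'r': 'root', 'B': 'prim', 'C': 'xor3', 'D': 'prim',
         'F': 'prim', 'G': 'prim', 'H': 'prim'},
    DE=frozenset({('r', 'B'), ('r', 'C'), ('r', 'D'),
                  ('C', 'F'), ('C', 'G'), ('C', 'H')}),
    CE=frozenset(),
    Phi=frozenset(),
)

assert fragment.P <= fragment.N and fragment.r not in fragment.P
```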
Definition 2. We define a decision model DM ∈ BDM as a 2-tuple:

DM = ⟨D, C⟩  with D ⊆ D, C ⊆ C
Each decision d can be described by at least a unique identifier, a description, and a data type θ. Since the identifier and the description are only for users and have no semantic impact on the model, we will not consider them below. All investigated decision modeling approaches support at least the use of Boolean B as well as enumeration E as data types. Since any range of values can be freely defined for decisions, a further attribute is needed to describe decisions: the range ρ. For the description of the data type and the range, we use a similar description as in [9]. However, we decided to use a simpler formalization that does not differentiate between different enumeration types to simplify the formalism. As a consequence, the data type and the range are needed for the definition of different enumeration decisions.
Definition 3. We define a decision d using a 2-tuple:

d = ⟨θd, ρd⟩

In order to access the various elements of a decision, we use the following shorthand notations:

• θ(d) = θd ∈ {B, E} and ρ(d) = ρd

• We write |S| for the cardinality of S

• T, F are shorthand notations for true and false if θ(d) = B
In decision modeling it is possible to differentiate between decisions (or variables) and their possible values. We introduce a further symbol, which cannot be used as a possible value of any decision, to facilitate this distinction while transforming decisions and values into features (cf. Definitions 6 and 7).
Definition 4. We define ⋆ as an arbitrary but fixed value, which is not part of the range of any decision:

⋆  with ∀d ∈ D : ⋆ ∉ ρ(d)
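Definitions 3 and 4 amount to a pair of a data type and a range, plus a sentinel ⋆ outside every range. A minimal sketch of this encoding (identifiers are our own, not from the paper):

```python
# Definition 3: a decision is a 2-tuple (theta, rho) of data type and range.
d1 = ('B', frozenset({True, False}))    # Boolean decision
d2 = ('E', frozenset({'a', 'b', 'c'}))  # enumeration decision

def theta(d):  # data type accessor, theta(d) in {'B', 'E'}
    return d[0]

def rho(d):    # range accessor
    return d[1]

# Definition 4: a fixed sentinel that belongs to no decision's range.
STAR = object()
assert all(STAR not in rho(d) for d in (d1, d2))
```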
As discussed in [15], different decision modeling approaches support different degrees of expressiveness in their constraints. The most restricted form of constraints is supported by KobrA, which uses tables to express constraints of the form: "if d1 = x then d2 must be equal to y". This can be interpreted as an implication (below we write such implications as "d1 = x → d2 = y"). Even in older approaches, arbitrary propositional formulas are allowed; thus KobrA is not representative. For this reason, we also allow arbitrary constraints in conjunctive normal form. However, we will also consider the limitations of KobrA in the later transformations and comparison. For better differentiation in case-by-case analyses, we use CK for all possible KobrA constraints, with CK ⊂ C.
Definition 5. A constraint c ∈ C is a propositional formula in conjunctive normal form, where the Boolean literals have one of the forms shown in Table 1. Since implications can be translated to propositional formulas, CK is the subset of C in which only Type I literals are allowed.
Table 1: Allowed literal types in a constraint of a decision model.

Type     | Literal  | Explanation of Elements
Type I   | d = a    | d ∈ D ∧ a ∈ ρ(d)
Type II  | d ≠ a    | d ∈ D ∧ a ∈ ρ(d)
Type III | d1 = d2  | d1, d2 ∈ D ∧ ρ(d1) = ρ(d2)
Type IV  | d1 ≠ d2  | d1, d2 ∈ D ∧ ρ(d1) = ρ(d2)
Table 2 shows a fictitious example of a decision model with three decisions and two constraints. d1 is a Boolean decision, whereas d2 and d3 are enumeration decisions with the same range. Consequently, it is possible to model constraints demanding that d2 be equal or not equal to d3, as is the case in c1.
Table 2: Example of a basic decision model.

D ⊂ D:
di | θ | ρ
d1 | B | {T, F}
d2 | E | {a, b, c}
d3 | E | {a, b, c}

C ⊂ C:
ci | constraint
c1 | d2 = d3 ∨ d1 = F
c2 | d2 ≠ a ∨ d1 = T
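To make Definition 5 and Table 2 concrete, the sketch below encodes the three decisions and the two CNF constraints and counts the valid configurations by brute force. The encoding (tuples for literals, lists of clauses for CNF) is our own illustration:

```python
from itertools import product

# Ranges of the three decisions of Table 2.
rho = {'d1': [True, False], 'd2': ['a', 'b', 'c'], 'd3': ['a', 'b', 'c']}

# Literals of Table 1, encoded as tuples:
# ('eq', d, a), ('neq', d, a), ('deq', d1, d2), ('dneq', d1, d2)
def lit_holds(lit, cfg):
    kind = lit[0]
    if kind == 'eq':   return cfg[lit[1]] == lit[2]       # Type I:   d = a
    if kind == 'neq':  return cfg[lit[1]] != lit[2]       # Type II:  d != a
    if kind == 'deq':  return cfg[lit[1]] == cfg[lit[2]]  # Type III: d1 = d2
    return cfg[lit[1]] != cfg[lit[2]]                     # Type IV:  d1 != d2

# A constraint is a CNF formula: a list of clauses, each a list of literals.
c1 = [[('deq', 'd2', 'd3'), ('eq', 'd1', False)]]  # d2 = d3  or  d1 = F
c2 = [[('neq', 'd2', 'a'), ('eq', 'd1', True)]]    # d2 != a  or  d1 = T

def valid(cfg, constraints):
    return all(any(lit_holds(l, cfg) for l in clause)
               for c in constraints for clause in c)

configs = [dict(zip(rho, vals)) for vals in product(*rho.values())]
n_valid = sum(valid(cfg, [c1, c2]) for cfg in configs)  # 9 of 18 are valid
```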
4. TURNING BASIC DECISION MODELS
TO FREE FEATURE DIAGRAMS
In this section, we introduce our transformation function TFFD, which maps an arbitrary decision model DM ∈ BDM onto a feature model F ∈ FFD. Together with the transformation function TBDM of Section 5, a discussion of the equivalence of the specific variability modeling paradigms is possible. This is done in our conclusion.
The transformation works in two steps:

1. First, all decisions are translated separately into distinct features. This process must also consider all possible values of the range of each decision. This is done in the next subsection.

2. Subsequently, all decision constraints are mapped onto feature constraints. This step must ensure that the literals of a decision constraint are mapped onto the corresponding images of the first step. This transformation is explained and discussed in Section 4.2.

The end of this section describes how the individual parts of the transformation are fitted together.
4.1 Mapping Decisions to Feature Nodes
For the mapping of decisions onto nodes of a free feature diagram, several possibilities are conceivable. One option is to map Boolean decisions onto optional features and enumeration decisions onto feature group alternatives. Considering the possibility that KobrA allows constructs like d1 = F → d2 = x, this would not be sufficient. For this reason, we map all decisions to feature groups and all decision values to corresponding alternatives. This also has the advantage that the following constructions are simpler, because a case-by-case analysis relating to θd is not necessary.

For the transformation of decisions and decision values into distinct features, we create 2-tuples of decisions with their values. Further, we also combine each decision with ⋆ to facilitate the differentiation between decision values and the decision itself.
Definition 6. ED is the set of all combinations of decisions with their possible values or ⋆, for a set of decisions D in a decision model:

ED = {⟨d, w⟩ | d ∈ D, w ∈ ρ(d) ∪ {⋆}}
There exists an injective function from ED to N, as now all decisions and their values can be distinguished. A combination of the unique identifiers of the decisions can be used to give the images of the translation function unique labels. As this has no semantic impact on the translations, we do not consider this any further.
Definition 7. Let σ be an injective function:
σ : ED → N
There exist elements in N which are not images of σ, as |N| = ∞ and |ED| < ∞. We pick one of these elements and use it as the root for the newly created feature model.
Definition 8. A root node r ∈ N can be chosen arbitrarily, such that:

∀ε ∈ ED : σ(ε) ≠ r
By means of σ it is possible to map the elements of ED onto nodes of a feature diagram. However, Schobbens et al. distinguish between primitive nodes and operator nodes grouping primitive nodes. This applies figuratively to decisions, which group possible decision values. For this reason, we map decision values onto primitive nodes and the decisions themselves onto operator nodes.
Definition 9. We form the set of all value mappings P as follows:

Pd = ⋃_{w ∈ ρ(d)} {σ(⟨d, w⟩)}   and   P = ⋃_{d ∈ D} Pd
Analogously, we form a set of operator nodes gathering all primitive nodes.

Definition 10. We call Nop the set of all decision mappings:

Nop = ⋃_{d ∈ D} {σ(⟨d, ⋆⟩)}
All created nodes must be labeled with Boolean operator functions. Because we only consider a very simple decision model, only xor is needed besides root and prim. Usual decision models are also capable of handling sets and, consequently, multiple selection of decision values. Such decision models need more operators from NT.
Definition 11. Let λ be a label function for the images of σ:

λ = ⟨σ(⟨d, w⟩), prim⟩        if σ(⟨d, w⟩) ∈ P
    ⟨σ(⟨d, w⟩), xor|ρ(d)|⟩   if σ(⟨d, w⟩) ∈ Nop
    ⟨r, root⟩                else
At this point, the decomposition edges describing the structure of the feature diagram are missing. We construct these edges in two steps: first, we connect all operator nodes with the root (DEr); afterwards, we connect all value mappings with the corresponding operator node (DEd).
Definition 12. We call DED the set of all decomposition edges of the newly created feature diagram:

DED = DEr ∪ ⋃_{d ∈ D} DEd

with:

DEr = {⟨r, n⟩ | n ∈ Nop}
DEd = {⟨k, l⟩ | k = σ(⟨d, ⋆⟩), l ∈ Pd}
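Applied to the decision model of Table 2, Definitions 6 to 12 can be sketched in a few lines. σ is realized here as a simple naming scheme (d for ⟨d, ⋆⟩ and d.w otherwise); all identifiers are our own choices, not from the paper:

```python
# Decisions of Table 2, as name -> range.
rho = {'d1': ['T', 'F'], 'd2': ['a', 'b', 'c'], 'd3': ['a', 'b', 'c']}
STAR = '*'  # the sentinel of Definition 4, outside every range

# Definition 6: E_D, all combinations of a decision with a value or the star.
E_D = {(d, w) for d in rho for w in list(rho[d]) + [STAR]}

# Definition 7: sigma, an injective map from E_D into node names.
def sigma(d, w):
    return d if w == STAR else f'{d}.{w}'

# Definition 8: a root that is not an image of sigma.
r = 'r'
assert all(sigma(d, w) != r for (d, w) in E_D)

# Definitions 9 and 10: primitive nodes (value mappings) and operator nodes.
P = {sigma(d, w) for d in rho for w in rho[d]}
N_op = {sigma(d, STAR) for d in rho}

# Definition 11: values become prim, decisions become xor_{|rho(d)|}.
lam = {r: 'root'}
lam.update({sigma(d, w): 'prim' for d in rho for w in rho[d]})
lam.update({sigma(d, STAR): f'xor{len(rho[d])}' for d in rho})

# Definition 12: edges root -> operator nodes and operator node -> values.
DE = ({(r, n) for n in N_op}
      | {(sigma(d, STAR), sigma(d, w)) for d in rho for w in rho[d]})
```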
Figure 2: Transformed decisions of the decision model from Table 2.

In Figure 2, we give an example of how decisions are mapped onto nodes in a feature diagram. In the case of operator nodes, we only draw the Boolean functions instead of a label to keep the example clear and simple. It is readily apparent that each decision is mapped to an alternative. Thus, the three decisions of Table 2 are mapped onto three feature groups.

4.2 Translation of Decision Constraints

In the transformation of the constraints, the different kinds of constraints discussed in Section 3 must be considered. First we show how KobrA constraints can be translated, before we show how arbitrary propositional formulas can be translated. Within each transformation, we discuss which constraint types must be supported in the image set. Because KobrA constraints can be regarded as implications, they can be translated directly into requires constraints. Therefore, first the literals must be translated into features.

Definition 13. We define τK : CK → CE as a translation function, which maps KobrA constraints onto graphical constraints:

τK : (di = a → dj = b) ↦ σ(di, a) rq σ(dj, b)

with: di, dj ∈ D, a ∈ ρ(di), b ∈ ρ(dj). The implication on the left-hand side is an element of C, its image on the right-hand side is an element of CE.

This translation works in all kinds of feature diagrams, because every feature modeling approach we know supports requires-constraints at least textually or graphically. Thus, if only textual constraints are supported, τK must map into Φ instead of CE. It is also possible to convert such implications into conjunctive normal form, which makes the following transformation τC feasible, too. For complex formulas, rq and mx constraints are not sufficient. This is because a requires constraint can be regarded as an implication, while a constraint of the form A mx B can be regarded as two constraints: A → ¬B and B → ¬A. In turn, these kinds of constraints are not sufficient to express arbitrary propositional formulas.

If propositional formulas are allowed on both sides of the transformation, only the literals must be translated into features. We assume without loss of generality that the decision constraints are in conjunctive normal form. For reasons of clarity, we do not convert the images into conjunctive normal form.

Definition 14. Let c ∈ C be of the form

c = ⋀_i ⋁_j lij

then τC : C → Φ translates constraints with:

τC(c) = ⋀_i ⋁_j τL(lij)

Definition 15. We define τL(l) as a translation function mapping literals of a decision constraint onto sub-constraints in a feature constraint. These mapping rules are shown in Table 3.

Table 3: Translations of τL(l), with d1, d2 ∈ D, a ∈ ρ(d1) = ρ(d2).

Type     | lij      | Constraint
Type I   | d1 = a   | σ(⟨d1, a⟩)
Type II  | d1 ≠ a   | ¬σ(⟨d1, a⟩)
Type III | d1 = d2  | ⋀_{a ∈ ρ(d1)} (σ(⟨d1, a⟩) → σ(⟨d2, a⟩))
Type IV  | d1 ≠ d2  | ⋀_{a ∈ ρ(d1)} (σ(⟨d1, a⟩) → ¬σ(⟨d2, a⟩))

It should be noted that, while we assume that all c ∈ C are in CNF, τC(c) ∈ Φ is not necessarily in CNF. However, this is not a problem, as we use the structure of CNF only for the translation. Although the results could also be presented in CNF, we decided to use the representation of Table 3 to keep it more readable.
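The literal translation of Table 3 and the clause-wise τC of Definition 14 can be sketched as follows; formulas are represented as nested tuples, and the σ naming scheme is our own stand-in for the injective function of Definition 7:

```python
rho = {'d1': ['T', 'F'], 'd2': ['a', 'b', 'c'], 'd3': ['a', 'b', 'c']}

def sigma(d, w):  # injective naming for the function of Definition 7
    return f'{d}.{w}'

def tau_L(lit):
    """Table 3: map one decision literal onto a feature sub-constraint."""
    kind = lit[0]
    if kind == 'eq':    # Type I: d1 = a  ->  the feature itself
        return ('var', sigma(lit[1], lit[2]))
    if kind == 'neq':   # Type II: d1 != a  ->  its negation
        return ('not', ('var', sigma(lit[1], lit[2])))
    d1, d2 = lit[1], lit[2]
    if kind == 'deq':   # Type III: d1 = d2  ->  conjunction of implications
        return ('and', [('imp', ('var', sigma(d1, a)), ('var', sigma(d2, a)))
                        for a in rho[d1]])
    # Type IV: d1 != d2  ->  conjunction of negated implications
    return ('and', [('imp', ('var', sigma(d1, a)), ('not', ('var', sigma(d2, a))))
                    for a in rho[d1]])

def tau_C(cnf):
    """Definition 14: translate clause by clause, literal by literal."""
    return ('and', [('or', [tau_L(l) for l in clause]) for clause in cnf])

# c2 of Table 2: d2 != a  or  d1 = T
phi = tau_C([[('neq', 'd2', 'a'), ('eq', 'd1', 'T')]])
```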
4.3 Translation of the whole Decision Model
Different translation functions can be used for the transformation of decision models into feature diagrams, under different preconditions: decision models with KobrA constraints (CK) or with constraints in propositional logic (C). The translation of decisions is based on the same principles in both situations; only for the transformation of decision constraints are different functions used, depending on the expressiveness of the respective kind of constraints.
Definition 16. Depending on the used decision modeling approach, we use different translation functions. For KobrA we use:

T^K_FFD(DM) = ⟨P ∪ Nop ∪ {r}, P, r, λ, DED, ⋃_{c ∈ C} τK(c), ∅⟩

where P ∪ Nop ∪ {r} forms the node set N and ⋃_{c ∈ C} τK(c) forms the constraint edges CE. For BDM we use TFFD:

TFFD(DM) = ⟨P ∪ Nop ∪ {r}, P, r, λ, DED, ∅, ⋃_{c ∈ C} τC(c)⟩

where ⋃_{c ∈ C} τC(c) forms the textual constraints Φ.
Theorem 1. In both cases the respective translation function transforms a decision model into a semantically equivalent feature model (FFD) with propositional logic as constraint language.
Proof 1. For each decision, exactly one alternative feature group with one sub-feature per decision value is created. This establishes a one-to-one correspondence between possible values (and their combinations) in the decision model and the translated decision model (feature model), if constraints are not taken into account. It should be noted that the cases of no feature being selected, or of features being selected simultaneously that correspond to multiple values of a variable, are prohibited, as the corresponding features are by construction part of a single alternative. Thus, the decisive factor is whether the translation of the constraints causes the same configurations to be valid or not. Ideally, such a proof would be performed by structural induction over the set of possible constraints. Due to space limitations, we do not show the full proofs here, but focus on a discussion of the main proof ideas.
In the case that propositional logic is allowed on both sides, the main issue is whether the mapping of literals is correct, as the constraint structure of the original and the translation is identical. For this reason, we consider five cases: are KobrA constraints translated correctly, and are the four kinds of literal types translated correctly?
KobrA constraints
In KobrA, the selection of one decision value causes the selection of other decision values. This has the same semantics as the requires constraints in feature diagrams. However, even if a decision is set to false, this can lead to setting another decision value. A false value in a Boolean decision is also translated into a feature. Thus, the corresponding implication also maps to a requires constraint and has the same semantics in a feature model. However, we note that the translation of KobrA models to features hinges upon the translation of the false value to an independent feature. Apart from this restriction, the use of the requires relation would be sufficient to translate these models. Thus, KobrA can also be translated to feature modeling approaches which restrict the expressiveness of the constraint language to requires and excludes.
Type I literals
The translation of Type I literals is correct, as in this
case τL (l) is identical to the definition of σ (cf. Definition 7).
Type II literals
This is like the previous case. In addition, we point out that d1 = a and d1 ≠ a cannot hold at the same time, as they are represented by two different features connected by an xor operator node.
Type III literals
Here, we need to ensure that a) this is true if a value in
d1 corresponds to the same value in d2 and that b) no
other value for d2 is selected. The first part is ensured
by the implication and by quantifying over all values in
the value range. The second part is again ensured by
our encoding of variables as alternative feature groups.
Type IV literals
Type IV literals express that the same values of two
decisions are mutually exclusive. This is ensured by
the negated implication and can be proven by induction over the size of the range ρ.
The next step would be to show that constraint formulas
for decisions, composed of these elementary building blocks,
are translated into equivalent formulas over decision
models. As the structure is identical (without loss of generality, we can
assume both to be in conjunctive normal form), we refrain
from a detailed proof.
5. TURNING FREE FEATURE DIAGRAMS TO BASIC DECISION MODELS
In this section we introduce the translation function
TBDM, which maps an arbitrary feature model F ∈ FFD
onto a decision model DM ∈ BDM. This translation function
uses a decomposition of the feature model: each feature
group is translated separately into a set of Boolean
decisions. For this operation, the feature diagram is
separated into subgraphs sn. After the translation of feature
groups into decisions, constraints representing the conditions
of the feature diagram are generated.
Below we explain how our translation works. We start
with the decomposition of the feature diagram into subgraphs,
which leads directly to the creation of decisions. We
also use these subgraphs to translate the hierarchy and structure of
the feature diagram into constraints. In Section 5.3, we show
how graphical and textual constraints can also be mapped
into decision constraints. Afterwards, we put the pieces together and
show how the overall process works. Finally, at the end of
this section, we give a short example.
5.1 Decomposition
Each feature group will be translated into a set of Boolean
decisions. For this step, subgraphs consisting of an operator
node and its direct successor nodes are defined.

Definition 17. A subgraph sn is a 3-tuple of the following
structure:

sn = ⟨Nn, Pn, n⟩

with:
• n is the root of the subtree and is an operator node
(λ(n) ∉ {prim, root}).
• Pn = {n′ | ⟨n, n′⟩ ∈ DE} is the set of all direct successor nodes of n. Pn consists only of primitive nodes.
• Nn = Pn ∪ {n} is the set of all nodes in sn.
Figure 3 shows the feature diagram from Figure 1 with all
identified subgraphs sn illustrated by dashed circles.
Definition 18. We call SF the set of all subgraphs of F ∈
FFD which are relevant for the transformation. This set is
defined by:

SF = {sn | n ∈ N ∧ λ(n) ∉ {prim, root}}
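Definitions 17 and 18 can be sketched in a few lines. This is a hedged illustration under our own assumptions: the diagram below (node names, edges, and labels) is an arbitrary example, not the diagram of Figure 1.

```python
# Illustrative feature diagram: parent -> children (decomposition edges DE).
edges = {
    "root": ["and1"],
    "and1": ["A", "B"],
    "A": ["xor1"],
    "xor1": ["C", "D"],
}
labels = {"root": "root", "and1": "and", "xor1": "xor",
          "A": "prim", "B": "prim", "C": "prim", "D": "prim"}

def subgraph(n):
    """Definition 17: s_n = (N_n, P_n, n) for an operator node n."""
    P = set(edges.get(n, []))
    return (P | {n}, P, n)

# Definition 18: S_F contains one subgraph per operator node
# (nodes labelled neither prim nor root).
S_F = {n: subgraph(n) for n in labels if labels[n] not in ("prim", "root")}

assert set(S_F) == {"and1", "xor1"}
assert S_F["xor1"][1] == {"C", "D"}   # P_n: direct successors of xor1
```

Each entry of `S_F` corresponds to exactly one feature group and is the unit that the following decomposition steps operate on.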
With SF it is now possible to map each feature group (=
subgraph) to a set of decisions. It would be possible to map
feature group alternatives (λ(n) ∈ xor) onto enum decisions.
However, this kind of translation would lead to unnecessary
complexity for the general case with full propositional logic.
In particular, the case that all of the sub-features need to
be deselected if the predecessor feature is deselected would
make the transformation unnecessarily complex. Thus, we
decided to translate each feature into a Boolean decision.
Constraints ensure that no more than one feature can be
selected.

Figure 3: Separated free feature diagram of Figure 1. [Figure omitted: the diagram of Figure 1 with the subgraphs s1–s6 marked by dashed circles.]

Definition 19. We define δ : SF → P(D), a function which
maps each subgraph onto a set of Boolean decisions:

δ(sn) = {υ(n′) | n′ ∈ Pn}

The function υ(n) maps each primitive node of a subgraph to a Boolean decision. It must be noted that the root
node is not part of SF. However, this node must also be
translated to a decision, as it results in constraints in combination
with its direct successor nodes. For this reason, we
also define υ(n) for the root node. In most feature diagram
approaches, like FODA [11, 17], the root node (also called
the concept) cannot be discarded, because this would lead
to an empty configuration. In these cases, the translation of
the feature diagram must ensure that. We decided to map
the root node to an enumeration decision with the range
{T}.

Definition 20. We define υ, which maps primitive nodes
and the root node to decisions:

υ(n) = dn = ⟨B, {T, F}⟩ if λ(n) = prim
υ(n) = dn = ⟨E, ρn⟩ if λ(n) = root

Normally, for the root node ρn is only {T}. If a deselection
of the root node and consequently an empty configuration
is valid, like in [13], ρn would become {T, F}.

5.2 Mapping the Structure to Constraints

So far we only generated a decision for each feature, but
did not yet transform the semantics of the individual operators. This will be done in this subsection. The translation of
the feature diagram structure and the operators to decision
constraints works in two steps:

1. For each operator node a corresponding decision constraint will be created. The optional operator (opt)
can always be evaluated to true, regardless of the assignment of the sub-features. For this reason, we do
not translate opt to a decision constraint, as this constraint would not influence the overall decision model.

2. The hierarchy of the feature diagram must also be
translated into decision constraints to ensure that if
a feature is deselected, all successor nodes are also deselected.
We use implications as in [13, Table 1] to create the
Boolean constraints representing the hierarchy and the functionality of the operator nodes. In feature diagrams, hierarchy and operator nodes have the same function as constraints in decision modeling: each sub-feature will be deselected if the parent node is deselected. For this reason,
the predecessor node of a subgraph is used for the creation
of a set of constraints. Although it is possible that a feature
has several predecessors, by introducing separate operator nodes for each predecessor feature [17], each operator
node gets exactly one predecessor. We introduce a predecessor function p(sn), which returns the predecessor node of
sn.
Definition 21. We define p(sn ) as the predecessor node of
sn .
With p(sn ) it is now possible to create the aforementioned
constraints. First, we create the constraints representing
the Boolean functions of the operator nodes. In this step,
we must consider that the Boolean function of the operator node has only impact if the predecessor node was not
deselected. We use implications to cover this aspect.
Definition 22. We define χop : SF → C, which maps the
Boolean function of an operator node to a decision constraint:

χop(sn) = χop,and(sn) if λ(n) ∈ and
χop(sn) = χop,xor(sn) if λ(n) ∈ xor

with

χop,and : sn ↦ υ(p(n)) = T → ⋀_{n′ ∈ Pn} υ(n′) = T

and

χop,xor : sn ↦ υ(p(n)) = T → ⋁_{o ∈ Pn} ( υ(o) = T ∧ ⋀_{p ∈ Pn, p ≠ o} υ(p) = F )
The function χop translates only the Boolean operators of
the operator nodes into appropriate constraints. What is
still missing is the inverse direction: a selected feature means
that the corresponding predecessor node is not deselected.
We introduce χH for the creation of such hierarchical constraints.
The predecessor must be true if at least one of the successors
is true. This rule holds for all kinds of operator nodes;
consequently, no case-by-case analysis is needed.
Definition 23. We define χH : SF → C, which maps the
hierarchical structure of a feature diagram to a decision constraint:

χH : sn ↦ ⋁_{n′ ∈ Pn} υ(n′) = T → υ(p(n)) = T
We gather all structural constraints in CS to make the
next steps clearer and shorter. The introduction of CS has
no semantic impact on the transformation.

Definition 24. We define CS as the set of all structural
constraints of F ∈ FFD:

CS = ⋃_{sn ∈ SF} {χop(sn)} ∪ {χH(sn)}
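Definitions 23 and 24 can be sketched in the same string-based style. Again, this is an illustrative assumption of ours (the subgraphs and names below are invented; only and-groups are included, for brevity).

```python
def chi_H(parent, children):
    """Definition 23: if any child is selected, the parent must be selected."""
    body = " OR ".join(f"{c} = T" for c in children)
    return f"{body} -> {parent} = T"

def chi_op_and(parent, children):
    """and-groups only, for brevity (see the previous sketch for xor)."""
    return f"{parent} = T -> " + " AND ".join(f"{c} = T" for c in children)

# Two hypothetical subgraphs: (predecessor, direct successors).
subgraphs = {"s2": ("r", ["B", "C", "D"]), "s3": ("A", ["E"])}

# Definition 24: C_S collects one operator and one hierarchy constraint
# per subgraph.
C_S = {chi_op_and(p, ch) for p, ch in subgraphs.values()} | \
      {chi_H(p, ch) for p, ch in subgraphs.values()}

assert "B = T OR C = T OR D = T -> r = T" in C_S
assert len(C_S) == 4
```

Collecting both constraint families into one set mirrors the role of CS in the text: it changes nothing semantically, but keeps the final assembly in Definition 28 compact.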
With the offered mapping of structural constraints, it is
not possible to map feature models to KobrA constraints.
The mapping of feature group alternatives (xor nodes) is not
possible because KobrA lacks disjunctions, which
are needed for χop,xor. We expect this can be solved with the
introduction of enum decisions holding all successor features
as possible values and a further value representing the
absence of the whole feature group.
5.3 Mapping of Feature Constraints to Decision Constraints
Above, we only provided a mapping of the structure of the
feature diagram to decision constraints. However, a free
feature diagram can have further constraints, both textual
and graphical. In this section we offer a mapping of these
constraints.
In Section 3.1 we already discussed different kinds of feature constraints. While some feature modeling approaches
support propositional logic, most basic feature modeling approaches
support only requires (rq) and mutual exclusion
(mx) as binary constraint types. We introduce two alternative
translation functions for feature constraints (Definitions 25 and 27) to allow independent statements regarding
the translatability of feature models.
First, we discuss the translation of requires (rq) and mutual
exclusion (mx) constraints. While these constraints can
be represented both graphically and textually, they are semantically identical in both forms. Without loss of generality, we assume these constraints to be graphical constraints
(∈ CE). Thus χg is a mapping from CE to C, which would
also work from Φ to C as long as Φ also allows only expressions
containing rq and mx. In addition, χΦ could also translate
mutex and requires constraints into decision constraints, as
χΦ fully supports propositional logic.
→ ←→
Definition 25. Let c = hn, g, n′ i ∈ CE with g ∈ {rq, mx}
be a graphical constraint. We define χg : CE → C as a
translation function:
(
→
υ(n) = T → υ(n′ ) = T if g =rq
′
χg (hn, g, n i) =
←
→
υ(n) = T → υ(n′ ) = F if g =mx
In feature diagrams which allow the use of arbitrary
propositional formulas, features are regarded as Booleans.
Consequently, literals in constraints can contain either positive (fi) or negative features (¬fi). For this reason, we
introduce a translation function which maps such feature
literals to Type I decision literals; thereby we use l as a shorthand notation for positive or negative features.
Definition 26. We define τL as a translation function for
literals in a feature constraint as follows:

τL(l) = υ(l) = T if l is a positive literal (f)
τL(l) = υ(l) = F if l is a negative literal (¬f)
The translation of literals τL can now be used for the
translation of complete constraints. For this step, we use
the restriction that the constraints are in conjunctive normal form as an auxiliary construct. This does not influence the
generality of this translation.
Definition 27. We define χΦ : Φ → C as a translation function for arbitrary textual constraints as follows:

χΦ(c) = ⋀_i ⋁_j τL(lij)

with:

c = ⋀_i ⋁_j lij
Definitions 25 and 27 can be used for translating arbitrary
constraints into decision constraints. Definition 25 makes use
of the fact that CE is a subset of propositional logic with binary constraints. In this special case, the constraints can be
mapped to KobrA constraints (CK). In general, not all basic feature modeling approaches make such hard restrictions
that Definition 25 is sufficient to cover all constraints. In
these cases Definition 27 can be used, which also covers all
constraint types already covered by Definition 25. However,
since Definition 27 allows arbitrary propositional formulas,
KobrA is not able to handle the images of Definition 27.
5.4 Translation of the Whole FFD
So far, we only described how to map the parts of a free
feature diagram to individual pieces of basic decision models.
In this section we put these pieces together to form a consistent decision model DM. This process is rather simple: we
only need to ensure that the constraints reference the
appropriate decisions. This is already the case, as the constraint
functions make use of υ.
Definition 28. We define TBDM as a translation function for
arbitrary free feature diagrams F ∈ FFD as follows:

TBDM(F) = ⟨ ⋃_{s ∈ SF} δ(s) , CS ∪ ⋃_{c ∈ CE} χg(c) ∪ ⋃_{c ∈ Φ} χΦ(c) ⟩

where the first component is the set of decisions D and the
second component is the set of constraints C.
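The assembly of Definition 28 can be sketched as a single function that unions the pieces. This is a minimal sketch under our own simplifications: decisions and constraints are plain strings, and the two translation functions are stand-in defaults.

```python
def T_BDM(subgraph_decisions, C_S, graphical, textual,
          chi_g=lambda c: f"g({c})", chi_Phi=lambda c: f"phi({c})"):
    """Definition 28 sketch: collect the decisions of every subgraph and the
    union of structural, graphical, and textual constraints into (D, C)."""
    D = set().union(*subgraph_decisions)           # union over delta(s)
    C = set(C_S) \
        | {chi_g(c) for c in graphical} \
        | {chi_Phi(c) for c in textual}
    return D, C

D, C = T_BDM([{"A", "B"}, {"C"}], {"s"}, ["c1"], [])
assert D == {"A", "B", "C"}
assert C == {"s", "g(c1)"}
```

Because every constraint function goes through υ on the real decisions, the resulting pair (D, C) is wired consistently without any extra bookkeeping step.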
The result of these translations is obviously a decision
model. The questions are:
1. Is the resulting decision model valid, i.e., does it not
contain conflicting constraints?
2. Does the resulting decision model allow exactly the
same configurations as the original feature model?
For correct feature models the second question subsumes the
first question. Thus, we only consider the second one.
Theorem 2. Configurations of TBDM(F) are valid if and
only if the corresponding configurations of F are valid.
Proof 2. For this proof, we assume that the original feature model is consistent. Thus C of Definition 28 is also
consistent if the individual transformations are correct:

CS These transformation rules correspond to the Boolean
formulas of feature models in [13, Table 1] (cf. Section 5.2). We introduced only two small changes:

χop – We introduced χop,xor, which allows the selection
of exactly one feature in a feature group. Thus,
the resulting formula has exactly the same semantics as the “1-of-n”-function in [13].

χH – In χH we collect all children of an operator node
in one ⋁-composition. The implication to the
predecessor feature is formed over this composition. The resulting formula is equivalent to the
conjunction over the individual constraints given
in [13].

χg As these constraints are only binary constraints, it is
possible to prove the correctness of the transformation
rules with truth tables. We omit this here for brevity.

χΦ The transformation of textual constraints makes use
of the fact that ∨ and ∧ have the same semantics in
Φ as well as in C, as both sets are subsets of the same
superset: propositional logic. What remains open is
the question whether both constraints influence the
same configurations. This is the case since χΦ uses υ,
which per definition maps features to correct decisions.

5.5 Example

At this point, we present an illustrative example for the
transformation of the free feature diagram from Figure 1
and Figure 3, respectively. For clarity we do not use the di
notation for describing decisions; instead we use for decisions
the same symbols as in the original diagram in Figure 1.
Table 4 shows the resulting decisions of δ. It is obvious
that the image of the root node is a single enum decision
holding only T as possible value. All other decisions are
Boolean decisions. This is done as normally the root node
cannot be discarded (cf. Definition 20).

Table 4: Image of the features from the diagram in Figure 3.

d:  r    A      B      C      D      E      F      G      H      I      J
θ:  E    B      B      B      B      B      B      B      B      B      B
ρ:  {T}  {T,F}  {T,F}  {T,F}  {T,F}  {T,F}  {T,F}  {T,F}  {T,F}  {T,F}  {T,F}

After the translation of features into decisions, constraints
are created. Table 5 shows the structural constraints. The
combination of the constraints together with a root node
yields the result that at least r and one of the decisions B,
C, and D must be present in every configuration.
Since we did not specify any textual or graphical constraints in Section 3.1, no further constraints will be generated for this decision model. Thus, the result of the transformation is DM = ⟨D, CS⟩.

Table 5: Structural constraints CS of the feature diagram in Figure 3.

χop  s2: r = T → B = T ∧ C = T ∧ D = T
     s3: A = T → E = T
     s4: C = T → (F = T ∧ G = F ∧ H = F) ∨ (G = T ∧ F = F ∧ H = F) ∨ (H = T ∧ F = F ∧ G = F)
     s5: F = T → I = T
χH   s1: A = T → r = T
     s2: B = T ∨ C = T ∨ D = T → r = T
     s3: E = T → A = T
     s4: F = T ∨ G = T ∨ H = T → C = T
     s5: I = T → F = T
     s6: J = T → H = T

6. CONCLUSION AND FUTURE WORK

In this paper, we analyzed the relationship between feature modeling and decision modeling. As a basis for our
analysis we used two different, but rather basic variants of
feature modeling and decision modeling. For feature modeling we compared basic feature modeling with a constraint
language restricted to requires and mutex vs. basic feature
modeling with full propositional logic as constraint language.
For decision modeling, we took as a minimal approach the
KobrA-method and as an alternative extension, we again extended it to constraints defined based on full propositional
logic (with equality and inequality constraints).
This leads to four different combinations and thus eight
different possible translations (two per direction). We identified that only some of these translations can be performed
in a semantics-preserving way. This is shown in Table 6. The
arrows denote the direction in which a loss-less translation
is possible. The entries must be read: row entry, relation,
column entry.

Table 6: Convertibility of discussed variability modeling concepts.

                                 Decision Modeling
                                 KobrA     BDM
FFDs with rq and mx              ← (→)     →
FFDs with propositional logic    ←         ↔
As shown in Table 6, it is always possible to transform
KobrA models into feature diagrams. The opposite direction is in general not possible with our transformation approach. The reason lies in the restriction that we cannot
represent alternatives adequately (without full propositional
logic). In future work, we will extend our approach to show
equivalence of KobrA and basic feature modeling. However,
it must be considered that KobrA makes very hard restrictions, as even older decision modeling approaches allow a
broader range of constraint possibilities [15].
Transformations of basic decision models to feature
diagrams are also not always applicable in general, as feature
modeling approaches in their very basic form usually operate only on a subset of propositional logic [17]. Therefore,
this kind of feature diagram can be regarded as an intermediate step.
As soon as we allow the full expressiveness of propositional
formulas for both approaches (including equality and inequality for decision modeling), both approaches are equivalent.
Thus, in terms of expressiveness (not in terms of the representation), the main question is about the expressiveness
of the constraint language, not about the basic modeling
concepts. However, even while we can translate in both directions in this situation, this does not mean that the two
transformations are inverse. If we translate from a feature
model to a decision model and back again using our approach, we will usually not arrive at the same model again.
A main contribution of this paper is that, due to the
equivalence of both modeling approaches, the large body
of knowledge of feature modeling results (e.g., consistency
checking methods, various proofs of properties, etc.) can be
transferred rather easily to decision modeling approaches.
(Of course also vice versa for decision modeling results.)
However, for the various results it also needs to be checked
whether they rely on the same foundations as our model
translation. This is a major reason why we used FFDs as a
basis, which were shown to subsume many different feature
modeling approaches.
In this paper, we discussed rather basic modeling approaches and their equivalence. Both in feature modeling
and in decision modeling significantly more sophisticated
modeling concepts like feature group cardinalities, feature
cardinalities, sets of decisions, decision groups, and references exist. The analysis of possible translations among
these is open future work.
7. ACKNOWLEDGMENTS

We would like to thank Patrick Heymans and Krzysztof
Czarnecki for answering some questions on the formal foundations of feature modeling.
This work is partially supported by the INDENICA project, funded by the European Commission grant 257483,
area Internet of Services, Software & Virtualisation (ICT-2009.1.2) in the 7th Framework Programme.

8. REFERENCES

[1] EASy-Producer project site. Available at
http://www.uni-hildesheim.de/index.php?id=8035&L=1
[Online; May 2012].
[2] D. Batory. Feature Models, Grammars, and
Propositional Formulas. In Proc. of the 9th
International Software Product Lines Conference
(SPLC'05), pages 7–20, 2005.
[3] D. Benavides, S. Segura, and A. Ruiz-Cortés.
Automated Analysis of Feature Models 20 Years
Later: A Literature Review. Information Systems,
35(6):615–636, 2010.
[4] A. Classen, P. Heymans, and P.-Y. Schobbens. What's
in a Feature: A Requirements Engineering
Perspective. In Proc. of the 11th International
Conference on Fundamental Approaches to Software
Engineering (FASE'08/ETAPS'08), pages 16–30,
2008.
[5] P. Clements and L. Northrop. Software Product Lines:
Practices and Patterns. 2002.
[6] K. Czarnecki, P. Grünbacher, R. Rabiser, K. Schmid,
and A. Wasowski. Cool Features and Tough Decisions:
Two Decades of Variability Modeling. In Proc. of the
6th International Workshop on Variability Modeling of
Software-Intensive Systems (VaMoS'12), pages
173–182, 2012.
[7] K. Czarnecki, S. Helsen, and U. Eisenecker. Staged
Configuration Using Feature Models. In Proc. of the
3rd Software Product Line Conference (SPLC'04),
pages 266–283, 2004.
[8] K. Czarnecki and A. Wasowski. Feature Diagrams and
Logics: There and Back Again. In Proc. of the 11th
International Software Product Lines Conference
(SPLC'07), pages 23–34, 2007.
[9] D. Dhungana, P. Heymans, and R. Rabiser. A Formal
Semantics for Decision-oriented Variability Modeling
with DOPLER. In Proc. of the 4th International
Workshop on Variability Modelling of Software-Intensive
Systems (VaMoS'10), pages 29–35, 2010.
[10] M. Griss, J. Favaro, and M. d'Alessandro. Integrating
Feature Modelling with the RSEB. In International
Conference on Software Reuse, pages 76–85, 1998.
[11] K. C. Kang, S. G. Cohen, J. A. Hess, W. E. Novak,
and A. S. Peterson. Feature-Oriented Domain Analysis
(FODA) Feasibility Study. Software Engineering
Institute, Carnegie Mellon University, Tech. Rep.
CMU/SEI-90-TR-21 ESD-90-TR-222, 1990.
[12] K. C. Kang, S. Kim, J. Lee, K. Kim, E. Shin, and
M. Huh. FORM: A Feature-Oriented Reuse Method
with Domain-Specific Reference Architectures. Annals
of Software Engineering, 5:143–168, 1998.
[13] M. Mendonça, A. Wasowski, and K. Czarnecki.
SAT-based Analysis of Feature Models is Easy. In Proc.
of the 13th International Software Product Lines
Conference (SPLC'09), pages 231–240, 2009.
[14] K. Schmid and I. John. A Customizable Approach to
Full-Life Cycle Variability Management. Science of
Computer Programming, 53(3):259–284, 2004.
[15] K. Schmid, R. Rabiser, and P. Grünbacher. A
Comparison of Decision Modeling Approaches in
Product Lines. In Proc. of the 5th Workshop on
Variability Modeling of Software-Intensive Systems
(VaMoS'11), pages 119–126, 2011.
[16] P.-Y. Schobbens, P. Heymans, J.-C. Trigaux, and
Y. Bontemps. Feature Diagrams: A Survey and a
Formal Semantics. In Proc. of the 14th IEEE
International Requirements Engineering Conference
(RE'06), pages 139–148, 2006.
[17] P.-Y. Schobbens, P. Heymans, J.-C. Trigaux, and
Y. Bontemps. Generic Semantics of Feature Diagrams.
Computer Networks: The International Journal of
Computer and Telecommunications Networking,
51(2):456–479, 2007.
[18] Software Productivity Consortium Services Corp.
Reuse-Driven Software Processes Guidebook,
Version 02.00.03. Tech. Rep. SPC-92019-CMC, 1993.
[19] W. Zhang, H. Zhao, and H. Mei. A Propositional
Logic-Based Method for Verification of Feature
Models. In Proc. of the 6th International Conference
on Formal Engineering Methods (ICFEM'04), pages
115–130, 2004.
[20] W. Zhang, H. Zhao, and H. Mei. Binary-Search Based
Verification of Feature Models. In Proc. of the 12th
International Conference on Top Productivity through
Software Reuse (ICSR'11), pages 4–19, 2011.