Efficient Propaganda
Strategies to promote your opinion in a social network
Michael Franke
Opinion Dynamics
Opinion Manipulation
Efficient Propaganda
Experimental Setup
Results
Overview
1. introduce DeGroot's (1974) model of opinion dynamics:
   basic results on sufficient conditions for convergence and consensus
2. generalize DeGroot's model:
   (some) agents may distribute their influence strategically
3. formulate the propaganda problem:
   the most efficient way of having one's opinion spread in society;
   theoretical vs. practical propaganda problem
4. discuss results of a numerical simulation
DeGroot's Model
(DeGroot, 1974)
• population of n agents
• opinions at time t given by (column) vector x(t) ∈ R^n
• n × n influence matrix P with p_ij ≥ 0 and Σ_j p_ij = 1
  p_ij: how much i takes j's opinion into account
• linear discrete update rule:

    x(t + 1) = P x(t)    (1)
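A minimal numerical sketch of the update rule (in Python; the matrix is the 3-agent example used on the following slides):

```python
import numpy as np

# Row-stochastic influence matrix: p_ij is how much agent i takes
# agent j's opinion into account (rows sum to 1).
P = np.array([[.7, .3, .0],
              [.2, .5, .3],
              [.4, .5, .1]])

x = np.array([.6, .2, .9])   # opinions at time t

# Linear discrete update rule (1): x(t+1) = P x(t)
x_next = P @ x
print(x_next)                # [0.48 0.49 0.43]
```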
DeGroot's Model
Influence Matrix P → Weighted Directed Graph G(P)

    [ .7  .3   0 ]
P = [ .2  .5  .3 ]
    [ .4  .5  .1 ]

[figure: the weighted directed graph G(P) on nodes 1, 2, 3, with an edge i → j of weight p_ij for every positive entry, including self-loops .7, .5, .1]
DeGroot's Model

Stubbornness
p_ii: how much i holds on to her opinion
diag(P): stubbornness vector

Power of Influence
i influences j to extent p_ji
P_i^T: i's power vector (transposed influence)

    [ .7  .3   0 ]
P = [ .2  .5  .3 ]
    [ .4  .5  .1 ]

Neighbors
N(i) = {j | p_ij > 0 ∧ i ≠ j}
N^T(i) = {j | p_ji > 0 ∧ i ≠ j}
DeGroot's Model
Example (t = 0)

    [ .7  .3   0 ]           [ .6 ]
P = [ .2  .5  .3 ]    x(0) = [ .2 ]
    [ .4  .5  .1 ]           [ .9 ]
Example (t = 1)

x(1) = P x(0) = (.48, .49, .43)^T
Example (t = 2)

x(2) = P x(1) ≈ (.48, .47, .48)^T
Convergence and Consensus

Convergence
Opinions converge if x(∞) = lim_{t→∞} x(t) exists.

Consensus
Converging opinions reach a consensus if x(∞)_i = x(∞)_j for all i, j.

Main Question
What are sufficient and necessary conditions on P for convergence & consensus?
Reformulation

x(t + 1) = P x(t)             [definition]
         = P (P x(t − 1))     [definition; t ≥ 1]
         = (P P) x(t − 1)     [associativity]
         = P² x(t − 1)
         = ...
         = P^{t+1} x(0)

Convergence
Opinions converge if P^∞ = lim_{t→∞} P^t exists.

Consensus
Converging opinions reach a consensus if P^∞_i = P^∞_j for all i, j, i.e., if all rows of P^∞ are identical.
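The limit can be inspected numerically; a sketch for the example P used earlier (the exponent 50 is just an arbitrary "large t"):

```python
import numpy as np

P = np.array([[.7, .3, .0],
              [.2, .5, .3],
              [.4, .5, .1]])

# If the limit exists, x(infinity) = P^infinity x(0); a consensus is
# reached when all rows of P^infinity coincide.
P_inf = np.linalg.matrix_power(P, 50)
print(np.round(P_inf, 4))        # every row is (approximately) the same

x0 = np.array([.6, .2, .9])
print(np.round(P_inf @ x0, 2))   # consensus opinion: [0.48 0.48 0.48]
```

This matches the example above, where the opinions were already nearly equal after two rounds.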
Results

Example: Non-Convergence

P = [ 0 1 ]    P² = [ 1 0 ]    P³ = [ 0 1 ]
    [ 1 0 ]         [ 0 1 ]         [ 1 0 ]

Example: Convergence without Consensus

P = [ 1 0 ]    P² = [ 1 0 ]    P³ = [ 1 0 ]
    [ 0 1 ]         [ 0 1 ]         [ 0 1 ]
Results: Convergence

Convergence Theorem (Jackson, 2008, Theorem 8.1)
Opinions converge iff every set of nodes of G(P) that is strongly connected and closed is aperiodic.

[details don't matter]

A set of nodes X is ...
strongly connected if every node in X reaches every other node in X following edges with positive weight;
closed if X contains all nodes of G(P) which can be reached from a node in X with an edge of positive weight;
aperiodic if the greatest common divisor of the lengths of all its cycles is 1.
Results: Consensus

Consensus Theorem (DeGroot, 1974)
Opinions reach a consensus if P has at least one column with only positive values.

[other sufficient conditions exist]

Consensus Theorem (Jackson, 2008, Corollary 8.2)
A consensus is reached for all initial opinions iff there is exactly one set of nodes in G(P) that is strongly connected and closed, and that set is aperiodic.
Propaganda Machine

i is a propaganda machine iff i is maximally stubborn: p_ii = 1
[(cf. Jackson, 2008, on mass media)]
with multiple propaganda machines, consensus is possible only if all of them have the same initial opinion
Generalizations of DeGroot's Model

Generalization 1 (Hegselmann and Krause, 2002)
influence matrix P(t, x(t)) depends on time and current opinions:

    x(t + 1) = P(t, x(t)) x(t)    (2)
Generalizations of DeGroot's Model

Generalization 2 (this talk)
P fixes potential influence; actual influence P(S) depends on the strategic choice S of influencers:

    x(t + 1) = P(S) x(t)    (3)

To do:
define S
define P(S)
Strategic Manipulation of Opinions

Strategy Matrix
strategies are allocations of "persuasion effort"
S: n × n matrix with s_ij ≥ 0, Σ_j s_ij = 1 and s_ii = 0
s_i: strategy of agent i
s_ij: how much i focuses on influencing j

Examples

     [ 0   .9  .1 ]        [ 0   .1  .9 ]        [ 0   .5  .5 ]
S1 = [ .4   0  .6 ]   S2 = [ 1    0   0 ]   S3 = [ .5   0  .5 ]
     [ .2  .3   0 ]        [ .5  .5   0 ]        [ .5  .5   0 ]

Neutral Strategies
s_i is neutral (for P) iff it is a flat distribution over N^T(i)
S is neutral (for P) iff s_i is neutral for all i
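The neutral strategy is fully determined by P; a small sketch (using the 3-agent P from the earlier slides):

```python
import numpy as np

P = np.array([[.7, .3, .0],
              [.2, .5, .3],
              [.4, .5, .1]])
n = len(P)

# N^T(i) = {j | p_ji > 0 and j != i}: the agents that i has power over.
# The neutral strategy s_i is the flat distribution over N^T(i).
S_neutral = np.zeros((n, n))
for i in range(n):
    targets = [j for j in range(n) if j != i and P[j, i] > 0]
    for j in targets:
        S_neutral[i, j] = 1 / len(targets)

print(S_neutral)   # rows: (0, .5, .5), (.5, 0, .5), (0, 1, 0)
```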
Strategic Manipulation of Opinions

Desiderata on P(S)
1. DeGroot model as special case
   ⇒ P(S) = P iff S is neutral
2. agents retain their stubbornness
   ⇒ diag(P(S)) = diag(P)
Strategic Manipulation of Opinions

Net Influence
• let S* be the column-normalized matrix derived from S
• s*_ij is i's net influence over j
• let S̄ be the neutral strategy (for an implicitly fixed P)
• s̄*_ij is i's neutral net influence over j
• then R = S*/S̄* (element-wise) is the relative net influence
  [convention: x/0 = 0]
Strategic Manipulation of Opinions

Actual Influence Matrix P(S)
P(S) = Q is an n × n matrix with:

    q_ij = p_ij                                      if i = j
    q_ij = (p_ij r_ji / Σ_k p_ik r_ki) (1 − p_ii)    otherwise

[convention: x/0 = 0]

Example

    [ 1   0   0 ]       [ 0  .9  .1 ]           [ 1    0    0  ]
P = [ .2  .5  .3 ]  S = [ 0   0   1 ]   P(S) =  [ .27  .5  .23 ]
    [ .4  .5  .1 ]      [ 0   1   0 ]           [ .12  .78  .1 ]
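The definition of P(S) is easy to check numerically. A sketch (the function and helper names are mine, not from the talk):

```python
import numpy as np

def col_normalize(M):
    """Column-normalize M, with the convention x/0 = 0."""
    sums = M.sum(axis=0)
    out = np.zeros_like(M)
    np.divide(M, sums, out=out, where=sums > 0)
    return out

def neutral_strategy(P):
    """Flat distribution over N^T(i) for every agent i."""
    n = len(P)
    S = np.zeros_like(P)
    for i in range(n):
        targets = [j for j in range(n) if j != i and P[j, i] > 0]
        for j in targets:
            S[i, j] = 1 / len(targets)
    return S

def actual_influence(P, S):
    """P(S): actual influence, given potential influence P and strategy S."""
    # relative net influence R = S*/S-bar*, element-wise, with x/0 = 0
    neutral_net = col_normalize(neutral_strategy(P))
    R = np.zeros_like(P)
    np.divide(col_normalize(S), neutral_net, out=R, where=neutral_net > 0)
    n = len(P)
    Q = np.zeros_like(P)
    for i in range(n):
        Q[i, i] = P[i, i]                  # stubbornness is retained
        norm = sum(P[i, k] * R[k, i] for k in range(n))
        if norm > 0:
            for j in range(n):
                if j != i:
                    Q[i, j] = P[i, j] * R[j, i] / norm * (1 - P[i, i])
    return Q

P = np.array([[1., 0., 0.],
              [.2, .5, .3],
              [.4, .5, .1]])
S = np.array([[0., .9, .1],
              [0., 0., 1.],
              [0., 1., 0.]])
print(np.round(actual_influence(P, S), 2))   # matches the P(S) above
```

Feeding in the neutral strategy returns P itself, which is the first desideratum.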
Example (continued)

    [ 1   0   0 ]       [ 0  .1  .9 ]           [ 1    0    0  ]
P = [ .2  .5  .3 ]  S = [ 0   0   1 ]   P(S) =  [ .06  .5  .44 ]
    [ .4  .5  .1 ]      [ 0   1   0 ]           [ .53  .37  .1 ]
Example (continued: S neutral)

    [ 1   0   0 ]       [ 0  .5  .5 ]
P = [ .2  .5  .3 ]  S = [ 0   0   1 ]   P(S) = P
    [ .4  .5  .1 ]      [ 0   1   0 ]
Conserving DeGroot's Model

Fact
If S̄ is the neutral strategy for P, then P(S̄) = P.

Proof.
Let Q = P(S̄). Look at an arbitrary q_ij. If i = j, then trivially q_ij = p_ij. If i ≠ j, then

    q_ij = (p_ij r_ji / Σ_k p_ik r_ki) (1 − p_ii),

where R = S̄*/S̄*. As r_ii = 0 by construction, we get:

    q_ij = (p_ij r_ji / Σ_{k≠i} p_ik r_ki) (1 − p_ii).

Moreover, for every k ≠ i, r_kl = 1 whenever p_lk > 0, and otherwise r_kl = 0. Therefore, since Σ_{k≠i} p_ik = 1 − p_ii:

    q_ij = (p_ij / Σ_{k≠i} p_ik) (1 − p_ii) = p_ij.    □
The Social Dimension of Persuasion

Propaganda Problem (full version; rough formulation)
If agents would like to actively promote their opinion in the population, which strategy is optimal?
The Social Dimension of Persuasion

Propaganda Problem (more tractable version; rough formulation)
Which strategy is optimal for a minority of strategic players (wolves) if the majority consists of unstrategic players (sheep) who play a neutral strategy as in DeGroot's original model?
Wolves & Sheep

                     wolf i          sheep i
  initial opinion    x_i(0) = 1      x_i(0) = −1
  stubbornness       p_ii = 1        p_ii ∈ [0, 1]
  strategy           variable        neutral
  proportion         minority        majority

[wolves ↔ propaganda machines]
Wolves & Sheep

Consensus
If every sheep reaches at least one wolf via a sequence of edges with positive weight, then the population reaches a consensus with consensual opinion 1.
Propaganda Problem

Average Population Opinion
x̄(t) = (1/n) Σ_{i=1}^n x_i(t)

Results of Strategy Sequences
For fixed P and x(0), say that x(k) results from a sequence of strategy matrices S^1, ..., S^k if for all 0 < i ≤ k: x(i) = P(S^i) x(i − 1).

Propaganda Problem (tractable "wolf" version)
For a fixed P, x(0) as described and a number of rounds k > 0, find a sequence of k strategy matrices S^1, ..., S^k such that x̄(k) is maximal for the x(k) that results from S^1, ..., S^k.
Greedy Propaganda

[maximize influence in one update step]

given x, find S that maximizes:

    (1/n) Σ_{i=1}^n Σ_{j=1}^n x_j q_ij        [with Q = P(S)]
    ∝ Σ_{i=1}^n Σ_{j=1}^n x_j q_ij            [since 1/n is constant]
    ∝ Σ_{i=1}^n Σ_{j≠i} x_j q_ij              [since x_i and q_ii = p_ii are constant]
    ∝ Σ_{i=1}^n Σ_{j∈N(i)} x_j q_ij           [since q_ij = 0 if j ∉ N(i)]
    ∝ Σ_{i=1}^n (1 − p_ii) (Σ_{j∈N(i)} x_j p_ij s_ji) / (Σ_{j∈N(i)} p_ij s_ji)    [definition of q_ij]
Greedy Propaganda

The Greedy Propaganda Problem for a Lone Wolf
suppose only i = 1 is a wolf; find S that maximizes:

    Σ_{i=2}^n (1 − p_ii) (Σ_{j∈N(i)} x_j p_ij s_ji) / (Σ_{j∈N(i)} p_ij s_ji)

    = Σ_{i=2}^n (1 − p_ii) (p_i1 s_1i + Σ_{j∈N(i)\{1}} x_j p_ij s_ji) / (p_i1 s_1i + Σ_{j∈N(i)\{1}} p_ij s_ji)    [since x_1 = 1]

    = Σ_{i=2}^n a_i (b_i s_1i + c_i) / (b_i s_1i + d_i) = Σ_{i=2}^n f_i(s_1i)

where:
• a_i = (1 − p_ii) is the inverse stubbornness of i
• b_i = p_i1 ∈ [0, 1] is the wolf's power over i
• c_i ∈ [−1, 1], d_i ∈ (0, 1] with |c_i| ≤ d_i
Greedy Propaganda

Example

    [ 1   0   0 ]
P = [ .2  .5  .3 ]      x = (1, −1, −1)^T
    [ .4  .5  .1 ]

f2(s12) = .5 (.2 s12 − 0.15) / (.2 s12 + 0.15)
f3(s13) = .9 (.4 s13 − 0.25) / (.4 s13 + 0.25)

[plot: f_i(s_1i) as a function of s_1i ∈ [0, 1] for i = 2, 3]
Example (continued)

f2(s12) = .5 (.2 s12 − 0.15) / (.2 s12 + 0.15)
f3(s13) = .9 (.4 s13 − 0.25) / (.4 s13 + 0.25)

[plot: the total Σ_i f_i(s_1i) as a function of s12, under the constraint s12 = 1 − s13]
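The one-step optimum for this example can be found by brute force; a sketch (grid search is my choice here, any one-dimensional optimizer would do):

```python
import numpy as np

# Wolf's one-step objective for the 3-agent example: split the effort
# s12 + s13 = 1 between the two sheep and maximize f2(s12) + f3(s13).
def f2(s12):
    return .5 * (.2 * s12 - 0.15) / (.2 * s12 + 0.15)

def f3(s13):
    return .9 * (.4 * s13 - 0.25) / (.4 * s13 + 0.25)

s12 = np.linspace(0, 1, 10001)
total = f2(s12) + f3(1 - s12)
best = s12[np.argmax(total)]
print(best)   # optimal split: s12 around 0.32, the rest on agent 3
```

Putting all effort on a single sheep is suboptimal here: the marginal return of effort on each target is diminishing, so the wolf splits its attention.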
Greedy Propaganda: Example

    [ 1   0   0 ]        [ 0   x  1−x ]         [ 0  .5  .5 ]
P = [ .2  .5  .3 ]   S = [ 0   0   1  ]    S̄ = [ 0   0   1 ]
    [ .4  .5  .1 ]       [ 0   1   0  ]         [ 0   1   0 ]

     [ 0  x/(x+1)  (1−x)/(2−x) ]          [ 0  1/3  1/3 ]
S* = [ 0     0       1/(2−x)   ]    S̄* = [ 0   0   2/3 ]
     [ 0  1/(x+1)       0      ]          [ 0  2/3   0  ]

    [ 0  3x/(x+1)  (3−3x)/(2−x) ]
R = [ 0     0        3/(4−2x)   ]
    [ 0  3/(2x+2)        0      ]

       [ 1                   0             0        ]
P(S) = [ 4x/(8x+6)          1/2        3/(8x+6)     ]
       [ (36−36x)/(65−40x)  9/(26−16x)    1/10      ]
Greedy Propaganda: Example

       [ 1                   0             0        ]
P(S) = [ 4x/(8x+6)          1/2        3/(8x+6)     ]
       [ (36−36x)/(65−40x)  9/(26−16x)    1/10      ]

x(t) = (1, −1, −1)^T

With s12 = x, the wolf's objective becomes

    Σ_i f_i(s_1i) = (−224x² + 136x − 57) / (−160x² + 140x + 195)

with a local maximum at x ≈ 0.3175.

[plot: the objective as a function of x ∈ [0, 1], peaking near x ≈ 0.32]
Optimal vs. Natural Propaganda

Added Complexity
• optimal after k rounds
• optimal in concert with other wolves
• ...
• uncertainty about (some aspects of) P

Propaganda Problem [Implementable Propaganda]
Which simple & uniform heuristics approximate the optimal solution of the propaganda problem for arbitrary k and arbitrary P?
Experimental Setup

Structure of a Random Trial
1. pick a random population size from {50, ..., 1000}
2. generate a random scale-free network
3. fix a random number of wolves (ca. 10% of the population)
4. allocate wolves randomly on the network
5. generate a random influence matrix P that respects the network topology
6. for each relevant wolf strategy, record the average opinion for 100 rounds
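A trial of this shape can be sketched as follows (a simple pure-Python preferential-attachment generator stands in for whatever generator was actually used; all parameter choices here are illustrative):

```python
import random
import numpy as np

rng = random.Random(0)

def scale_free_graph(n, m=2):
    """Barabasi-Albert-style preferential attachment (adjacency sets)."""
    adj = {i: set() for i in range(n)}
    repeated = list(range(m))            # node list, weighted by degree
    for v in range(m, n):
        targets = {rng.choice(repeated) for _ in range(m)}
        for t in targets:
            adj[v].add(t)
            adj[t].add(v)
            repeated += [t, v]
    return adj

n = rng.randrange(50, 1001)              # 1. random population size
adj = scale_free_graph(n)                # 2. random scale-free network
wolves = set(rng.sample(range(n), max(1, n // 10)))  # 3.-4. ca. 10% wolves

# 5. random influence matrix P respecting the topology: positive weight
#    only on edges and the diagonal, rows sum to 1; wolves have p_ii = 1.
P = np.zeros((n, n))
for i in range(n):
    if i in wolves:
        P[i, i] = 1.0
        continue
    nbrs = list(adj[i]) + [i]
    w = [rng.random() for _ in nbrs]
    P[i, nbrs] = np.array(w) / sum(w)

x = np.where([i in wolves for i in range(n)], 1.0, -1.0)  # wolves at +1
```

Step 6 would then repeatedly apply x = P(S) x for each wolf strategy under test and record the average opinion per round.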
Scale-Free Networks

Properties
1. scale-free: at least some part of the degree distribution has a power-law character
2. small-world:
   1. short characteristic path length: it takes relatively few steps to connect any two nodes of the network
   2. high clustering coefficient: if j and k are neighbors of i, then it is likely that j and k also interact with one another
Scale-Free Networks

Example

[figure: a random scale-free network with 50 nodes]
Wolf Strategies

Types of Strategies
• single vs. group: whether a strategy targets a group or focuses on persuading individuals
• socially-numb vs. socially-aware: whether a strategy is sensitive to social information in the network and/or the influence matrix
• lone-wolf vs. coalition: whether wolves take other wolves into account to coordinate behavior
Wolf Strategies

Base-Line Strategies
• sheep: place equal influence on all neighbors of the underlying unweighted network [including wolves! sheep ≠ neutral]
• avoid-convinced: place equal influence on all non-convinced neighbors, i.e., neighbors that do not (yet) hold the wolf opinion

Random Strategies
• single-random: each round, choose an arbitrary (non-convinced) neighbor and put all influence on her
• group-random: each round, choose a random probability distribution over all (non-convinced) neighbors
Wolf Strategies

Example: The "Opinion" Family
4 strategies that condition the allocation of influence on the current opinions held by neighbors:
• single-opinion-min: place all influence on the neighbor with the lowest opinion
• single-opinion-max: place all influence on the neighbor with the highest opinion
• group-opinion-min: place influence on all neighbors, anti-proportional to their current opinion
• group-opinion-max: place influence on all neighbors, proportional to their current opinion
Wolf Strategies

Strategy Families
• opinion: current opinion x_j(t)
• influence: influence p_ji that i has on j
• degree: number of neighbors |N(j)|
• betweenness: betweenness centrality of j
  (fraction of shortest paths that pass through j)
• closeness: closeness centrality of j
  (1 / average length of the shortest paths from j to all other nodes)
• clustering: clustering coefficient of j
  (number of actual triangles / number of possible triangles)
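As one concrete instance, the clustering coefficient of a node can be computed directly from the adjacency structure (a self-contained sketch; the graph is a made-up toy example):

```python
def clustering_coefficient(adj, j):
    """Fraction of pairs of j's neighbors that are themselves linked,
    i.e. actual triangles through j / possible triangles through j."""
    nbrs = list(adj[j])
    k = len(nbrs)
    if k < 2:
        return 0.0
    linked = sum(1 for a in range(k) for b in range(a + 1, k)
                 if nbrs[b] in adj[nbrs[a]])
    return linked / (k * (k - 1) / 2)

# toy graph: triangle 0-1-2 plus a pendant node 3 attached to 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(clustering_coefficient(adj, 0))   # 1.0 (0's neighbors are linked)
print(clustering_coefficient(adj, 2))   # 0.333... (1 of 3 pairs linked)
```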
Wolf Strategies

Coalition Strategies
• pack: influence proportional to the number of wolves in N(j)
• unpack: influence anti-proportional to the number of wolves in N(j)
• communication: like unpack, except that each round all wolves coordinate who focuses on whom:

    each round:
        for each non-convinced sheep i:
            look at WN(i) = {j | j ∈ N(i) ∧ j is a wolf}
            assign i as target to the j ∈ WN(i) with the highest power over i
    wolves assign 100 times more effort to assigned targets
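The coordination step of communication can be sketched directly from the pseudocode above (data structures and names are mine):

```python
def assign_targets(sheep, wolves, P, adj):
    """Assign each non-convinced sheep i to the wolf in WN(i) with the
    highest power p_ij over i; each wolf would then put (e.g.) 100 times
    more effort on its assigned targets than on its other neighbors."""
    assignment = {}
    for i in sheep:
        wn = [j for j in adj[i] if j in wolves]      # WN(i)
        if wn:
            assignment[i] = max(wn, key=lambda j: P[i][j])
    return assignment

# toy example: wolves 0 and 3, non-convinced sheep 1 and 2
adj = {1: {0, 2, 3}, 2: {1, 3}}
P = {1: {0: .4, 2: .3, 3: .2},   # p_1j: how much sheep 1 weighs j
     2: {1: .5, 3: .4}}          # p_2j: how much sheep 2 weighs j
print(assign_targets([1, 2], {0, 3}, P, adj))   # {1: 0, 2: 3}
```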
Results

[plot: average opinion difference to sheep over 100 rounds; legend: group-influence-max, communication, unpack, group-closeness-min, group-degree-min, avoid-convinced, group-clustering-max, sheep; differences range between 0 and about 0.15]
Results

[plot: average opinion difference to sheep over 100 rounds; legend: sheep, single-clustering-min, single-value-max, single-influence-min, single-random; differences range between 0 and about −1.5]
Rank Matrix after 10 Rounds

[heat map: distribution of ranks 1-33 per strategy after 10 rounds; strategies listed from best to worst: group-influence-max, communication, unpack, group-degree-min, group-clustering-max, group-closeness-min, group-value-max, avoid-convinced, group-value-min, group-closeness-max, group-betweenness-min, sheep, group-random, group-clustering-min, pack, group-degree-max, clique-max, clique-min, group-betweenness-max, group-influence-min, single-clustering-min, single-closeness-max, single-betweenness-max, single-degree-max, single-value-max, single-closeness-min, single-clustering-max, single-betweenness-min, single-degree-min, single-influence-min, single-value-min, single-influence-max, single-random]
Rank Matrix after 100 Rounds

[heat map: distribution of ranks 1-33 per strategy after 100 rounds; strategies listed from best to worst: communication, group-influence-max, unpack, group-degree-min, group-closeness-min, group-clustering-max, avoid-convinced, group-value-min, group-value-max, group-closeness-max, sheep, group-betweenness-min, group-random, group-clustering-min, pack, group-degree-max, clique-max, clique-min, group-betweenness-max, group-influence-min, single-clustering-min, single-closeness-max, single-betweenness-max, single-degree-max, single-value-max, single-closeness-min, single-clustering-max, single-betweenness-min, single-influence-min, single-degree-min, single-value-min, single-influence-max, single-random]
Conclusions

(Preliminary) Advice
if you want to spread your opinion in society ...
• spread your web of influence wide
• go for easy targets
• find partners in crime & divide the labor
References
DeGroot, Morris H. (1974). “Reaching a Consensus”. In: Journal of the
American Statistical Association 69.345, pp. 118–121.
Hegselmann, Rainer and Ulrich Krause (2002). “Opinion Dynamics and
Bounded Confidence: Models, Analysis, and Simulation”. In: Journal of
Artificial Societies and Social Simulation 5.3.
Jackson, Matthew O. (2008). Social and Economic Networks. Princeton
University Press.