Evolution of Stories
Rabajante, J.F. and Umali, R.E.DC. (2011). A Mathematical Model of Rumor Propagation for Disaster Management. Journal of Nature Studies, 10(2): 61-70.
ABSTRACT
This paper focuses on the study of rumor propagation for possible
application to disaster management. We introduce a conceptual mathematical
model that simulates rumor spread, capturing the mutation of information
during propagation by means of the Monte Carlo method. The model
reinforces some of the existing rumor theories and rumor control strategies.
INTRODUCTION
Various theories about rumors, especially those pertaining to disasters, have already been
formulated by social scientists. Some of these theories are based on intuition and qualitative
studies. Recently, the emerging field of Mathematical Sociology has tried to investigate social
phenomena, such as rumor propagation, using mathematical and computational models.
Experimental or statistical research on rumor propagation is impractical and often infeasible,
which is why mathematical models are used to mimic rumor spread. The main goal of
this paper is to mimic the evolution of stories during information propagation using Monte
Carlo simulation.
The basic law of rumor states that rumor strength is directly proportional to the
significance of the subject to the individual concerned and to the uncertainty of the
evidence at hand (Rosnow and Foster, 2005). A modified theory views rumor-mongering as a
way of handling anxieties and uncertainties during chaotic times: people create and pass on
stories that attempt to explain behavior and address confusion (Rosnow, 1991, 2001).
Specifically, rumors arise when no clear link exists between people and the
correct information, causing ambiguity (Bordia and DiFonzo, 2004). When people fail to find a
plausible answer to their queries, they begin to interpret the situation and use the
information at hand to come up with stories (Bordia and DiFonzo, 2004). Belief in a rumor
also depends on the degree of suggestibility and credulity of the rumormongers involved.
Two basic models of rumor propagation are the Daley-Kendall (Daley and Kendall, 1965)
and Maki-Thompson (Maki and Thompson, 1973) models. Serge Galam, the father of
sociophysics, studied the dynamics of a rumor, spread by minorities, claiming that no plane hit
the Pentagon on September 11 (Galam, 2003). Galam (2005) and Suo and Chen (2008)
investigated the dynamics of the formation of public opinion. Dodds and Watts (2005)
formulated a generalized model of social and biological contagion.
Yu and Singh (2003) developed models for detecting deception using Dempster-Shafer
theory. Matos (2004) studied the relationship between information flow in society and
volatility in financial markets. Lind et al. (2007), Rabajante and Otsuka (2010), and Salvania and
Pabico (2010) investigated the spread of information, such as gossip, in social networks. Other
research on rumor spread has been done by Moreno, Nekovee and Pacheco (2004), and
Nekovee et al. (2006).
There are various strategies for controlling the spread of information, such as (1) controlling
the entry and exit of people in the community, (2) regulating the media of communication,
(3) influencing the belief system of the people, and (4) introducing an antithesis of the circulating
information.
THE MODEL
Previous studies of information propagation, such as those by Galam (2003, 2005),
consider only a single pair of conflicting pieces of information. As an extension of this work, the
present model is formulated to observe the dynamics of multiple pairs of conflicting information.
A Monte Carlo simulation algorithm is formulated to mimic information propagation and
mutation. The results show the evolution of stories over a finite number of time periods with a
finite number of actors, and are summarized using descriptive statistics.
Deterministic rules are integrated into the formulated stochastic algorithm. Because the
algorithm uses random numbers, it is prone to statistical error. Perturbation analysis
should be done to check the stability of the algorithm with respect to the given initial values and
parameters.
The algorithm is implemented as a Scilab program (version 5.3.0). Initial values are
supplied by the user in order to run the program. A limitation of the algorithm is that a person
cannot hold conflicting stories at the same time.
For example, during the recent Typhoon Ondoy, comments circulated that the damage
could have been less severe if certain precautions had been taken. Hence, even after the typhoon
was over, it left the question “Who is to be blamed for the unexpected but controllable damage
brought about by Typhoon Ondoy?” With this issue in mind, consider the following basic
information (Mendoza, 2009; GMANews, 2009):
1) The Arroyo Administration is to be blamed;
2) The MMDA Chairman is responsible for the damages; and
3) Engineers of Angat Dam should be blamed.
In the given example, combinations of the three pieces of basic information compose the possible
stories. One story could be, ‘The Arroyo administration is to be blamed, the MMDA Chairman is not
responsible, and the engineers of Angat Dam should be blamed.’ Another could be, ‘The Arroyo
administration is to be blamed, together with the MMDA Chairman and the engineers of Angat
Dam.’
The axes of the hypercube represent the basic information, and each piece of basic
information is assigned a positive and a negative direction. For the example above, the three pieces
of basic information correspond to three axes, say the 𝑥, 𝑦, and 𝑧 axes, respectively. The positive
𝑥-axis represents the information that the Arroyo administration is to be blamed for the effects of
the typhoon, while the negative 𝑥-axis represents the information that the Arroyo administration
is not to be blamed. Similarly, the positive 𝑦-axis represents the information that the MMDA
Chairman is to be blamed, while the negative 𝑦-axis represents the information that the MMDA
Chairman is not to be blamed. Lastly, the positive 𝑧-axis represents the information that the
engineers of Angat Dam are to be blamed, and the negative 𝑧-axis represents the information that
the engineers of Angat Dam are not to be blamed.
The values assigned to the basic information are the degrees of belief of the actor. Each
degree of belief is given a value from −2 to 2. The 𝑛 pieces of basic information, with their
corresponding degrees of belief, are stored in the memory (hypercube) of the actor as an 𝑛-tuple
belief vector.
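For instance, the memory of an actor in the three-axis Ondoy example can be written as a 3-tuple. The following Scilab sketch illustrates one such belief vector (the values are hypothetical, not taken from the paper's simulations):

// illustrative belief vector for the Ondoy example (hypothetical values)
// component 1: Arroyo administration, 2: MMDA Chairman, 3: Angat Dam engineers
belief = [1.5, -0.3, 0.7];
// every component must stay inside the bounded memory (hypercube) [-2,2]
belief = max(min(belief, 2), -2);
disp(belief, 'belief vector of the actor')

Here the actor strongly believes the first piece of basic information, weakly disbelieves the second, and believes but still doubts the third.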
A positive degree of belief means that the actor is a believer, while a negative value
means that the actor is a non-believer. These states are further divided into two: a believer
can be classified as either a believer but doubting, or a believer and loyal; while a non-believer
can be classified as either a non-believer but doubting, or a non-believer and loyal.
Loyal believers have a greater strength of belief, from 1 to 2 (or, in the case of
loyal non-believers, from −1 to −2).
For example, if the value of a basic information is 0.7, the actor believes the basic
information with a degree of 70%, implying that the actor still doubts the information.
If the value is −0.3, the actor disbelieves the basic information with a degree of 30%, so
his/her unbelief is weak.
Notice that the range of belief of a believer but doubting has the same length as the
range of a believer and loyal. The two ranges are of equal length to give an actor equal
chances of being in either state.
Another group of actors is categorized as being in the neutral state. These are the actors
with a zero (0) degree of belief, either because they have no knowledge of the story or because
they are simply indifferent. In this paper, it is assumed that an actor cannot hold opposing beliefs
at the same time. The values 2 and −2 are the extreme values that can be assigned to an actor’s
degree of belief or unbelief, which implies that each hypercube (memory) is bounded.
Figure 3 shows a summary of the possible states of the actors in the system.
Figure 3: A Tree Diagram of all the Possible Actors in the System
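As a rough illustration, the classification in Figure 3 can be written as a small Scilab function. Assigning the boundary values 1 and −1 to the loyal states is our own convention, since the text does not specify to which class the endpoints belong:

// classify one degree of belief b in [-2,2] following the tree in Figure 3
// (placing the boundaries 1 and -1 in the loyal states is an assumed convention)
function s=state(b)
    if b==0 then
        s='neutral';
    elseif b>0 & b<1 then
        s='believer but doubting';
    elseif b>=1 then
        s='believer and loyal';
    elseif b>-1 then
        s='non-believer but doubting';
    else
        s='non-believer and loyal';
    end
endfunction

disp(state(0.7))    // believer but doubting
disp(state(-0.3))   // non-believer but doubting
disp(state(2))      // believer and loyal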
A probability of communication is assigned to every ordered pair of actors. This probability
lies in the interval [0, 1] and is denoted by 𝑃𝑖𝑗 , the probability that actor 𝑖 will communicate with
actor 𝑗 (that is, the probability that actor 𝑖 will share the story with actor 𝑗). For example, 𝑃13 = 0.8
means there is an 80% probability that actor 1 will communicate with actor 3.
A degree of influence is also assigned to each edge. It is denoted by 𝐼𝑖𝑗 , the degree of
influence actor 𝑗 has over actor 𝑖. For example, 𝐼21 = 0.3 means that actor 1 has 30% influence
over actor 2.
Since a two-way (directed) network is being considered, 𝑃𝑖𝑗 is not necessarily
equal to 𝑃𝑗𝑖 and, similarly, 𝐼𝑖𝑗 is not necessarily equal to 𝐼𝑗𝑖 . Also, 𝑃𝑖𝑖 = 1 and 𝐼𝑖𝑖 = 1, since a
person has complete probability of communication with, and complete influence over,
him/herself.
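A minimal Scilab sketch with hypothetical values for three actors may help fix ideas (𝑃13 = 0.8 and 𝐼21 = 0.3 match the examples above; the other entries are illustrative):

// P(i,j): probability that actor i communicates with actor j
// I(i,j): degree of influence that actor j has over actor i
// diagonal entries are 1; the matrices need not be symmetric (directed network)
P = [1.0 0.5 0.8;
     0.6 1.0 0.4;
     0.7 0.2 1.0];
I = [1.0 0.3 0.5;
     0.3 1.0 0.6;
     0.2 0.9 1.0];
disp(P(1,3), 'P13: probability that actor 1 communicates with actor 3')
disp(I(2,1), 'I21: influence of actor 1 over actor 2')

Note that the full listing below indexes communication the other way around: there, p(i,k) denotes the probability that actor k communicates with actor i.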
The stories within the system then propagate over 𝑡 time periods. In real
life, this represents the number of days or years that the information propagates within the
community; the concept of time is thus discrete. Making the step size of the time
periods smaller would let the algorithm approximate a continuous model.
Information evolves in the network through the mutation of the basic information,
carried out by the simulation algorithm. Mutation of the basic information in turn changes the
content of the story inside an actor’s memory (hypercube). The algorithm is shown in Table 2.
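Concretely, the core of one time period works as follows: whenever actor 𝑘 communicates with actor 𝑗, every component of 𝑗's belief vector is shifted by 𝑘's corresponding belief weighted by 𝑘's influence over 𝑗, and the result is clamped to the hypercube. A minimal Scilab sketch of a single period (assuming the memory matrix x, the dimensions n and m, and the matrices p and r in the listing's receiver-first indexing are already defined):

// one time period of the mutation step
// p(j,k), r(j,k): communication probability and influence from actor k to actor j
y = x;
for j = 1:n                                // receiving actor
    for k = 1:n                            // sending actor
        if j~=k & rand() <= p(j,k) then    // does actor k reach actor j?
            for l = 1:m                    // mutate each basic information
                y(j,l) = y(j,l) + x(k,l)*r(j,k);
                y(j,l) = max(min(y(j,l), 2), -2);   // clamp to [-2,2]
            end
        end
    end
end
x = y;   // the mutated memory seeds the next time period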
Going back to the Ondoy issue, consider only the first two pieces of basic information: 1) the
Arroyo administration is to be blamed; and 2) the MMDA Chairman is responsible for the damages.
Using the algorithm, the following results were obtained for five actors, six time periods, and 100
simulation runs:
Table 1. Evolved Stories after Six Time Periods (each pair of columns gives one actor's
degrees of belief in basic information 1 and 2; the first row is the initial memory)

Time     Actor 1        Actor 2        Actor 3        Actor 4        Actor 5
period   Info 1 Info 2  Info 1 Info 2  Info 1 Info 2  Info 1 Info 2  Info 1 Info 2
0         0.80   0.00   -0.50   1.00    0.00  -1.30    2.00   1.60   -2.00  -1.80
1         0.85  -1.22   -0.10   1.00   -0.40  -1.18    1.60   1.09   -0.80   2.00
2        -1.65  -0.80   -0.29   2.00   -0.08  -0.57    1.01   0.24   -0.47   1.45
3        -1.01  -0.18   -1.16   1.94    0.14   1.56   -0.49   0.94   -1.65   2.00
4        -2.00   2.00   -1.66   1.85   -2.00   2.00   -2.00   2.00   -2.00   2.00
5        -2.00   2.00   -2.00   2.00   -2.00   2.00   -2.00   2.00   -2.00   2.00
[Four plot panels, each with horizontal and vertical axes ranging from −2 to 2.]
It can be observed from the results that a loyal believer can eventually become a loyal
non-believer, and vice versa. Another observation is that, starting from the neutral state, an actor
can become a believer (or a non-believer).
The results were also not intuitively predictable. For example, Actor 1 is initially neutral
with respect to the second basic information, but after two time periods of interacting within the
system, he/she becomes a non-believer with respect to it. Then, unexpectedly, after two more
time periods, he/she becomes a loyal believer (i.e., he/she strongly believes that the MMDA
Chairman is to be blamed for Ondoy’s aftermath).
The results from this prototype example mimic certain real-life scenarios; for instance,
people are often indecisive about their stand on a particular issue. Modeling rumors can
be helpful in controlling the spread of unwanted stories or in strengthening favorable ones.
In particular, the model shows the behavior of information propagation within a finite
number of time periods for a specific social network with stochastic probabilities of
connection. The standard deviation of the simulation runs should be analyzed to check how
well the mean of the simulation represents the phenomenon.
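As one simple check (a sketch, not part of the algorithm itself), the coefficient of variation across runs at the final time period can be computed from the FMean and FStdev arrays produced by the listing below:

// coefficient of variation across simulation runs at the final time period t;
// large values flag cells where the mean poorly represents the individual runs
// (the floor 0.01 is an assumed guard against division by near-zero means)
CV = FStdev(t,:,:) ./ max(abs(FMean(t,:,:)), 0.01);
disp(CV, 'coefficient of variation at the final time period')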
//input
// scalar parameters: n actors, m basic information, t time periods, run simulation runs
// (these prompts are an assumed addition; the original listing uses the scalars without setting them)
n=input('enter number of actors: ');
m=input('enter number of basic information: ');
t=input('enter number of time periods: ');
run=input('enter number of simulation runs: ');
disp('---------------------------------------------------------------------')
disp('row i represents memory of actor i', 'elements should be in [-2,2]')
disp(' ')
// x(i,k): degree of belief of actor i in basic information k, in [-2,2]
for i=1:n
    for k=1:m
        x(i,k)=input('enter value of basic information: ');
    end
end
disp(x, 'the initial memory matrix is (at time period 0)')
A=x;    // keep a copy of the initial memory to restore before each simulation run
disp('---------------------------------------------------------------------')
disp('cell i,k represents communication from actor k to actor i', 'probability should be in [0,1]')
disp(' ')
// p(i,k): probability that actor k communicates with actor i
for i=1:n
    for k=1:n
        if i~=k then
            p(i,k)=input('enter probability of communication: ');
        else
            p(i,k)=1;   // an actor always communicates with him/herself
        end
    end
end
disp(p, 'probability matrix for communication')
disp('---------------------------------------------------------------------')
disp('cell i,k represents influence of actor k to actor i', 'value of influence should be in [0,1]')
disp(' ')
// r(i,k): degree of influence of actor k over actor i
for i=1:n
    for k=1:n
        if i~=k then
            r(i,k)=input('enter influence: ');
        else
            r(i,k)=1;   // an actor has complete influence over him/herself
        end
    end
end
disp(r, 'influence matrix')
//simulation
disp('---------------------------------------------------------------------')
disp('row i represents memory of actor i')
disp(' ')
for runi=1:run
    y=x;
    for i=1:t                               // time periods
        for j=1:n                           // receiving actor
            for k=1:n                       // sending actor
                if rand()<=p(j,k) then      // does actor k reach actor j this period?
                    if j~=k then
                        for l=1:m           // mutate each basic information
                            y(j,l)=y(j,l)+(x(k,l)*r(j,k));
                            if y(j,l)>=2 then       // clamp to the bounded memory [-2,2]
                                y(j,l)=2;
                            end
                            if y(j,l)<=-2 then
                                y(j,l)=-2;
                            end
                        end
                    end
                end
            end
        end
        disp(y, 'the memory matrix is', i, 'at time period', runi, 'for simulation run #', '------------------------')
        B(runi,i,:,:)=y(:,:);   // store this run's memory matrix at time period i
        x=y;                    // the updated memory seeds the next time period
    end
    x=A;                        // restore the initial memory before the next run
end
// descriptive statistics over the simulation runs
for l=1:t
    for i=1:n
        for j=1:m
            FMean(l,i,j)=mean(B(:,l,i,j));      // mean across runs
            FStdev(l,i,j)=stdev(B(:,l,i,j));    // standard deviation across runs
        end
    end
end
for l=1:t
    disp('---------------------------------------------------------------------')
    disp('the following matrices are presented as transpose')
    disp(FMean(l,:,:), l, 'the mean of simulation runs for the memory matrix at time period')
    disp(FStdev(l,:,:), l, 'the standard deviation of simulation runs for time period')
end
//distance
disp('---------------------------------------------------------------------')
// Euclidean distance between each actor's initial story and mean final story
for i=1:n
    temp2=0;    // reset the accumulator for each actor
    for k=1:m
        temp1(k)=(A(i,k)-FMean(t,i,k))^2;
        temp2=temp1(k)+temp2;
    end
    D(i)=sqrt(temp2);
end
disp(D, 'distance between initial and mean final story')
CONCLUDING REMARKS
REFERENCES
Bordia, P. and DiFonzo, N. (2004). Problem Solving in Social Interactions on the Internet: Rumor
as Social Cognition. Social Psychology Quarterly, 67(1): 33-49.
Daley, D. and Kendall, D. (1965). Stochastic Rumours. Journal of the Institute of Mathematics
and its Applications, 1: 42-55.
Dodds, P. and Watts, D. (2005). A Generalized Model of Social and Biological Contagion. Journal
of Theoretical Biology, 232: 587-604.
Galam, S. (2003). Modeling Rumors: The No Plane Pentagon French Hoax Case. Physica A, 320:
571-580.
Galam, S. (2005). Heterogeneous Beliefs, Segregation, and Extremism in the Making of Public
Opinions. Physical Review E, 71: 046123.
GMANEWS.TV (2009). Overflowing Angat Dam, 4 Others Continue Water Releasing amid
‘Pepeng.’ Retrieved March 4, 2011 from http://www.gmanews.tv/story/173729.
Lind, P. G., da Silva, L. R., Andrade, J. S. Jr., and Herrmann, H. J. (2007). Spreading Gossip in
Social Networks. Physical Review E, 76: 036117.
Maki, D. and Thompson, M. (1973). Mathematical Models and Applications, with Emphasis on
the Social, Life, and Management Sciences. Prentice-Hall, Englewood Cliffs, N.J.
Matos, J. (2004). Information Flow, Social Interactions and the Justification Provided by Legal
Evidence. Judgment and Decision Making, 2(5): 257-276.
Mendoza, G. M. (2009). Flooding in Metro: Who is to Blame? Retrieved February 29, 2011 from
http://www.abs-cbnnews.com/nation/10/02/09.
Moreno, Y., Nekovee, M., and Pacheco, A. F. (2004). Dynamics of Rumor Spreading in Complex
Networks. Physical Review E, 69: 066130.
Nekovee, M., Moreno, Y., Bianconi, G., and Marsili, M. (2006). Theory of Rumor Spreading in
Complex Social Networks. Physica A, 374: 457-470.
Rosnow, R. L. (2001). Rumor and Gossip in Interpersonal Interaction and Beyond: A Social
Exchange Perspective. American Psychological Association, Washington, DC, 203-232.
Rosnow, R. L. and Foster, E. K. (2005). Rumor and Gossip Research. Psychological Science
Agenda, 19(4): 1-2.
Suo, S. and Chen, Y. (2008). The Dynamics of Public Opinion in Complex Networks. Journal of
Artificial Societies and Social Simulation, 11(4): 2.
Tierney, K., Bevc, C. and Kuligowski, E. (2006). Metaphors Matter: Disaster Myths, Media
Frames, and Their Consequences in Hurricane Katrina. Annals of the American Academy of
Political and Social Science, 604: 57-81.