Computational Experiments: Past, Present and Future


Xiao Xue*, Xiang-Ning Yu, De-Yu Zhou, Xiao Wang, Zhang-Bin Zhou, Fei-Yue Wang

Abstract: Powered by advanced information technology, more and more complex systems are exhibiting characteristics of Cyber-Physical-Social Systems (CPSS). Understanding the mechanism of CPSS is essential to our ability to control their actions, reap their benefits, and minimize their harms. In consideration of the cost, legal, and institutional constraints on the study of CPSS in the real world, computational experiments have emerged as a new method for the quantitative analysis of CPSS. This paper outlines computational experiments from several key aspects, including origin, characteristics, methodological framework, key technologies, and some typical applications. Finally, this paper highlights some challenges of computational experiments to provide a roadmap for their rapid development and widespread application.

Index Terms- Cyber-Physical-Social Systems; computational experiments; artificial society; methodological framework; thought experiment; parallel optimization

Thanks for the support provided by the National Key Research and Development Program of China (No. 2021YFF0900800), the National Natural Science Foundation of China (No. 61972276, No. 61832014, No. 62032016), and the Shandong Key Laboratory of Intelligent Buildings Technology (No. SDIBT202001). (Corresponding author: Xiao Xue; e-mail: jzxuexiao@tju.edu.cn)

I. INTRODUCTION

There are two main purposes of system complexity research: law exploration and theoretical explanation. Here, the law can help us to clarify how system complexity occurs; the theory can help us to understand why system complexity occurs. The law is defined as the mapping between variables; the theory is defined as a causal explanation of the observed system complexity. However, powered by the rapid development of the Internet, the penetration of the Internet of Things, the emergence of big data, and the rise of social media, more and more complex systems are exhibiting the characteristics of social, physical, and information fusion, and are called Cyber-Physical-Social Systems (CPSS) [1, 2]. Because CPSS involve human and social factors, the design, analysis, management, control, and integration of CPSS face unprecedented challenges. In scientific research, experimental methods often play an irreplaceable role, which not only promotes the establishment of the relationship between theory and facts but also drives the development of science and technology through the exploration and discovery of numerous experiments. Given this, experimental methods are being incorporated into CPSS research.

The traditional experimental method is generally a physical entity experiment, which cannot be carried out for the actual social systems mentioned above. This is due to the following facts: 1) A social complex system cannot be studied by the reduction method, because the decomposed system would most likely lose the function of the original system, so it must be researched using the holism method [3]. 2) Repeated tests on the real system are not economically feasible due to the scale of social complex systems. 3) Many complex systems that involve social governance are constrained by legislation and often cannot be tested or reshaped, such as national security, military preparedness, and emergency response. 4) Social complex systems are highly human-related systems, which involve humans in many scenarios, and testing such systems may cause irreversible risks and losses, which are morally unacceptable [4].

Fig. 1 Schematic illustration of computational experiments (from unstructured reality scenarios to structured computing scenarios: the social complex system in the physical world is abstracted bottom-up into a conceptual model; in the thinking world, what-if analysis, scenario building, and abstract scenario analysis support evaluation; the results feed back top-down to the physical world as policy support and interventions by researchers)

Therefore, the research of social complex systems can only turn to “computational experiments”, which can realize the quantitative analysis of complex systems by means of the “algorithmization” of “counterfactuals” [5]. Figure 1 shows the workflow of computational experiments. Firstly, this method abstracts the conceptual model of the reference system by constructing autonomous individual models and their interaction rules from the micro-scale perspective. Then, by combining complex system theory with computer simulation technology, the “digital twin” [6] of the real system can be cultivated in the information world. Further, by modifying the rules, parameters, and external interventions of the system, various experiments can be carried out repeatedly in a computational way. Finally, according to the experimental results, the causal relationship between intervening variables and system emergence can be identified, which provides a new way to explain, illustrate, guide, and reshape macro phenomena in reality.

Computational experiments are the natural extension and improvement of computer simulation. The differences between them lie in the data-driven and emergence mechanisms, as well as the multi-world interpretation and guidance theory. Besides the traditional description and prediction functions of computer simulation, the computational experiment emphasizes the prescription function of the results, moving from “big law, small data” in Newton's era to “big data, small law” in Merton's era [7]. After more than ten years of development, the computational experiment method has become one of the mainstream methods to analyze complex systems. Besides multi-agent technology, its basic means include artificial systems and software-defined systems. In particular, the recent idea of the “digital twin” has been widely recognized in academic circles and even in society. It has become a new means of generating big data from small data and refining deep intelligence or exact knowledge from big data, which complements artificial intelligence methods [8].
Compared with traditional experimental methods, the computational experiment method is characterized by the following three features: 1) Precise controllability. By setting environmental parameters (such as geographical factors, agent distribution, etc.) and triggering events (such as time, location, type, scale, etc.), various scenarios can be accurately reproduced as the operating environment of the system. 2) Simple operation. In the simulation process, it is easy to realize various extreme environments to evaluate the different performance indexes of the system, such as accuracy, response rate, etc. 3) Repeatability. This advantage allows researchers to design different experimental scenarios and evaluate the effects of different factors (such as geographical environment, trigger event characteristics, etc.) on system performance [9].

As a scientific research methodology, computational experiments have been applied to studies that are risky, costly, or impossible to conduct as direct experiments in reality, including intelligent transportation systems [10, 11], war simulation systems [12], socioeconomic systems [13], ecosystems [14], physiological/pathological systems [15, 16], political ecosystems [17], etc. Once an “artificial laboratory” for complex systems has been established, extensive research on complex systems can be carried out in this laboratory. However, its development also faces a series of challenges, including the comparison and verification of computational models, the design method of computational experiments, the fusion of knowledge-driven and data-driven approaches, etc. The further study of these challenges will lay a solid theoretical foundation for the wider application of computational experiments. In addition, with the emergence of new technologies (such as reinforcement learning and the digital twin), the computational experiment also needs to be integrated with these technologies to further enhance its application value in problem analysis and resolution.

The remaining parts of this paper are organized as follows. Section II introduces the conceptual origins and application characteristics of the computational experiment method, and our research motivation. Section III presents the methodological framework of computational experiments, which consists of five main links: modeling of artificial society, construction of an experimental system, design of experiments, analysis of experiments, and verification of experiments. Section IV provides three types of application cases: thought experiment, mechanism exploration, and parallel optimization. Section V discusses the problems and challenges that the computational experiment method may encounter in the future. Section VI concludes the paper.

II. BACKGROUND AND MOTIVATION

Computational experiments are considered a standard interdisciplinary research field. Their development history is closely related to the research of complex systems and the development of computer simulation technology. This section mainly explains the origin, characteristics, and our research motivation.

A. Origin of computational experiments

With the deepening of scientific research, research objects of interest are becoming increasingly huge in scale and complex in function and structure. Thus, a discipline known as the “Science of the 21st Century”, Complexity Science, came into being [18]. Complexity theory, as an interdisciplinary subject of complex systems, involves natural phenomena, engineering, economics, management, military, political, and social fields; from the life phenomenon of a cell to the structure and mind of the brain; from the fluctuation of the stock market to the rise and fall of society, etc.

In terms of methods and paths, computational experiments have the potential to interrelate computer science, applied engineering, the humanities, social sciences, and economic research, and to provide new tools and means for researching complex systems. From the perspective of computer science, researchers of computational experiments extend the research scope to the social field and study numerous interesting and novel topics in collaboration with experts in other fields. Also, research depth is not limited to the description of objective things, but focuses on revealing the causes and evolution of objective things, and tries to predict their future development trajectories as accurately as possible. Fig. 2 shows how the research of complex systems interlinks with the development of computer simulation technology to finally form the source of the computational experiment method.

The study of complex systems originated in the early 20th century. In 1928, Austrian biologist L.V. Bertalanffy first proposed the concept of “complexity” [19], whose thoughts originated from the evolutionism of British biologist Darwin C.R. and the statistical physics of Austrian physicist Boltzmann L.E. Then, the research of complex systems evolved and underwent three stages:

1) 1950-1980: This stage focused on system science, which was represented by the old three theories (system theory [20], cybernetics [21], and information theory [22]) and the new three theories (dissipative structure theory [23], catastrophe theory [24], and synergetics [25]).

2) 1980-2000: This stage focused on the dynamics and adaptability of systems, which was represented by self-organization theory (e.g., chaos theory [3, 26], fractal theory [27], critical theory [28]) and complex adaptive system theory [29].

3) 2000-now: This stage began to focus on the combination with data science theory, which is represented by complex networks [30, 31] and CPSS [1, 2].

With the rapid development of complex system research, computer simulation technology as a tool has made great progress. The development of computer simulation technology makes it possible to map real systems into the information space. Traditional computer simulation holds the notion that the real system is the only one, and whether the simulation results are consistent with the real system is the only criterion to evaluate the experimental results. Based on this, the computational experiment takes the computer as the “artificial laboratory” to “cultivate” a possible macroscopic phenomenon of the real system and explore the law behind it. This provides a feasible way to analyze complex system behaviors and evaluate intervention effects. The representative results of computational simulation technology are as follows:
Fig. 2 Conceptual sources of computational experiments (a timeline from 1928 to the 2000s interlinking complex system research (system science; complex adaptive systems; complex systems and CPSS) with computer simulation technology (artificial science and cellular automata; multi-agent simulation and artificial society; digital twin and the ACP method))


1) Cellular automaton: In the early 1930s, Von Neumann posited the concept of cellular automata (CA) [32]. The CA model emphasizes the interaction between autonomous individuals, focuses on the extraction strategy of entities in the system, and concerns the emergent attributes of the simple behavior of micro-individuals at the macro level. It aims to provide a general framework for the simulation of complex systems. The CA model first emphasizes entity cells and topology and then focuses on attributes/variables.

2) Artificial life: John Conway wrote the “Game of Life” program [33] in 1970, which opened the prelude to artificial life research. Then, Langton C.G. proposed artificial life theory [34], intending to construct model systems with the behavioral characteristics of natural life systems by computer and other artificial media. Based on the concept of the “edge of chaos”, Langton, together with other scholars, established various models to explore the evolution of artificial life, such as self-propagating cellular automata [35], the Boids model [36], the ant colony model [37], the “Amoeba World” [38], etc.

3) Multi-Agent technology: In the mid-1990s, multi-agent simulation technology began to rise and became an important means to study complex systems. The Agent in a multi-agent simulation represents an individual with a certain autonomy, intelligence, and adaptability. The main idea of multi-agent simulation is to simulate complex systems by modeling the interactions between a number of individual agents. The most widely used multi-agent simulation tools include Swarm [39], Repast [40], Ascape [41], and Netlogo [42]. With different advantages and disadvantages, these tools need to be selected according to specific needs.

4) Artificial society: In 1991, a Rand report first introduced the concept of “artificial society”, where Agent technology was used to build social laboratories in computers and to test and evaluate different policies to ensure their effectiveness [43]. In 1995, Nigel Gilbert and Rosaria Conte published “Artificial Societies - The Computer Simulation of Social Life” [44], in which artificial society was formally proposed and became a relatively independent social science field. In 1998, the international academic journal Journal of Artificial Societies and Social Simulation, hosted by the University of Surrey, was launched, marking the maturity of artificial society research as Agent-based sociological simulation. Several classical artificial society models have emerged, including Epstein and Axtell's Sugarscape model [45], Arthur and Holland's artificial stock market model [46], the ASPEN model from the US Sandia National Laboratory [47], etc.

5) Digital twin: Michael Grieves from the University of Michigan first proposed the idea of the “digital twin” in 2002. The idea was to build digital twins equivalent to physical entities in the computer virtual space, establish a two-way dynamic feedback mechanism, and analyze and optimize the physical entities [48]. In the industrial field, the digital twin is used to monitor, diagnose, predict, and optimize the operation and maintenance of physical assets, production systems, and manufacturing processes [49]. A digital twin can also be used to optimize the sustainable development of cities by capturing temporal and spatial effects. As a virtual replica of a particular city, digital twin technology allows city operators to develop different strategies. Digital twins have been implemented in some countries and cities, such as Singapore and Jaipur [50].

With the development of complex system science and computer simulation technology, Wang Fei-Yue formally proposed the concept of the “computational experiment” in 2004 and formed the ACP method (Artificial Systems + Computational Experiments + Parallel Execution), emphasizing the circular feedback relationship between the artificial system and the real system [51-55]. In recent years, with the rise of software definition [56], digital twins [6], reinforcement learning [57], service ecosystems [58], and other technologies, the connotation of the computational experiment method keeps growing [8]. As a bridge between the virtual world and the real world, computational experiments are providing new and effective computing theories and means for researching complex systems.

B. Characteristics of computational experiments

Based on distributed thinking and the bottom-up method, the computational experiment method can simulate the microscopic behavior of various entities in the real world with decentralized micro-intelligence models. By designing the interactions between individuals in experiments, the complex phenomena that form can reflect the macro laws of system evolution. Through a well-designed computational model and simulation environment, computational experiments can be used as a powerful tool for reasoning about, testing, and understanding system complexity. Compared with traditional analysis methods, the computational experiment method can establish a variety of never-occurring virtual experimental scenes by changing the combination of internal and external parameters, possibly including several pressure experiments and extreme experiments. The role of different factors in system evolution can thus be analyzed comprehensively, accurately, timely, and quantitatively. This makes it easier for researchers to explore the operation laws of complex systems and find effective interventions.

Computational experiment methods adopt the “multi-world” view of complex system research. When modeling complex systems, the degree of approximation to the real system is no longer the only criterion; rather, the model is regarded as a “reality” in its own right, a possible alternative form and another possible way of realizing the real system. The real system is just one of the possible realities, with behavior “different” from but “equivalent” to the model. Table 1 compares the characteristics of computer simulation, the digital twin, and computational experiments. In short, the computational experiment system is a software definition of the real system; it is not only a digital “simulation” of the real system but also an alternative version of it (or of other possible situations). It can provide efficient, reliable, and applicable scientific decisions and guidance for the design, analysis, management, control, and synthesis of real complex systems.
Table 1 Comparison of features between computational experiments and similar concepts

Research object.
- Computer simulation: physical systems with well-defined and specific structures, emphasizing the modeling and simulation of independent units.
- Digital twin: cyber-physical systems, covering the modeling and simulation of the entire integration process, from design and manufacturing to operation and maintenance.
- Computational experiment: social complex systems, emphasizing the fusion and interaction of social space, information space, and physical space.

Research means.
- Computer simulation: based on the principle of similarity, a mathematical model is established by top-down decomposition, which has homomorphic relations with the actual or envisaged system. The computer can output the same results as the physical system by numerical calculation.
- Digital twin: it can be used to solve non-linear and uncertain problems that cannot be solved by the traditional mechanism model. Furthermore, it can form an evolving system with machine learning. There are two optimization modes: model-driven and data-driven.
- Computational experiment: it can simulate and deduce scenarios that have never occurred in the real world. The computational model is not required to completely reproduce the behavior of the actual system.

Research objectives.
- Computer simulation: it focuses on the accuracy of modeling, that is, whether it can accurately reflect the characteristics and states of physical objects; thus it can guide the design and optimization of actual systems.
- Digital twin: it focuses on how to reflect the data interaction between digital objects and physical objects, as well as the dynamic changes of the system; thus the digital twin has the value of continuous improvement in industrial applications.
- Computational experiment: it emphasizes that the experimental system is not only a digital simulation but also an alternative version of the real system. The experimental results can provide decision support for the research of complex systems.

Application fields.
- Computer simulation: applied in various fields, including transportation, aerospace, industrial manufacturing, meteorological forecasting, the electronic information industry, and so on.
- Digital twin: it can lay a solid foundation for enterprise digitalization and can be applied in innovation validation, virtual debugging, data monitoring, remote diagnosis, remote maintenance, etc.
- Computational experiment: it has led to the emergence of multiple interdisciplinary research fields, such as computational sociology, computational economics, computational finance, computational epidemiology, etc.

Limitations.
- Computer simulation: due to the lack of sufficient theory and prior knowledge, it is difficult to use top-down modeling to accurately describe and analyze complex systems.
- Digital twin: due to the lack of consideration of “human factors”, it is unable to explain and analyze complex social phenomena.
- Computational experiment: no consensus has been reached on how to prove the validity and equivalence of the computational model, so it is easily questioned whether the experiment can reflect reality.
A wide range of applications of computational experiments exists. In order to obtain the expected research goal, it is necessary to balance reality and abstraction when constructing a computational model. For example, highly realistic models may have significant policy value with little or no theoretical value; conversely, highly abstract models may provide profound scientific insights, but their results are rarely directly applicable in policy practice. According to the application characteristics of a computational experiment, related applications can be summarized into the following four categories:

1) Highly abstract
The model has only a few qualitative similarities with the reference system and does not attempt to replicate any quantitative features. These models are mainly used for the theoretical analysis of basic science, not operational strategy analysis. Some early social simulation models belong to this group, such as the heatbug model [59], the Boids model [36], and so on.

2) Moderately abstract
The model can exhibit convincing qualitative characteristics and meet some quantitative criteria. Although these models are largely theoretical, they can provide several applicable insights with valuable impacts on policy development. For example, although the classical Schelling model [60] is quite abstract, it reveals important insights into social isolation phenomena.

3) Moderately realistic
Although the model belongs to the qualitative category, it accords with the quantitative demands in terms of important characteristics. Experiment-based social computing studies are most interested in such models. For example, due to public safety considerations, it is impossible to carry out some destructive social experiments in the real environment, such as some public safety events. Many extreme pressure experiments can be simulated without risk by changing the experimental conditions and setting different variables.

4) Highly realistic
In terms of quantitative and qualitative characteristics, the experimental output of this type of model is the most consistent with empirical data. The highly realistic simulation can be compared to the reference system along multiple dimensions, including spatial features, temporal features, or organizational patterns. Such models are widely used in business and government organizations. Public policy involves highly uncertain areas (such as social networks and human behavior), resulting in complexity in its analysis, formulation, and implementation. Taking the artificial society as an alternative version of the real system, computational experiment methods can be used to analyze the effect of policies (such as economic stimulus, laws, and regulations) to improve the scientific level of public policy-making.

C. Research motivation

As researchers pay growing attention to the computational experiment method, it has spawned many interdisciplinary research fields, such as computational economics [13], computational finance [61], computational histology [62], computational epidemiology [63], etc. In February 2009, 15 top scholars from globally renowned universities (Harvard University, Massachusetts Institute of Technology, etc.) published the paper “The Age of Computational Social Sciences” [64] in Science. In 2012, 14 famous European and American scholars jointly issued the “Declaration on Computational Social Science”. Today, “Computational Social Science” has become the name accepted by mainstream academia [65]. In 2020, these scholars published a joint paper in Science, emphasizing the problems and challenges in the development of computational social science [66].

Note that research on computational experiments requires multi-disciplinary knowledge, spanning computer science, social science, systems science, artificial intelligence, computer simulation, and many other disciplines. Despite the efforts of researchers, a complete and advanced theoretical system has not yet been formed, and a chasm still exists between theoretical development and practical application. In order to promote the development of this field, the state of the art of computational experiments is reviewed in this paper. It is hoped that this will help readers construct a complete knowledge system of computational experiment methods and lay a solid foundation for their subsequent development. We focus on the following three aspects:

1) What is the technical framework for computational experiments?
As a multi-disciplinary field, the computational experiment method involves a growing spectrum of knowledge. When beginners first encounter computational experiments, it is easy to drown in the vast literature and difficult to clarify the relationship between computational experiment methods and various techniques. This dilemma is exacerbated by the emergence of new technologies and applications. The resulting problem is whether a unified and general methodological framework could be developed to guide the application of existing computational models or the development of new ones, thus greatly reducing the difficulty of implementing new applications. Based on this, Section III presents a general methodological framework to illustrate the related key technologies of computational experiments.

2) How to implement a specific application of computational experiments?
Because of the diversity and uncertainty of application fields, as well as the subjectivity in the construction of artificial societies, there are always different opinions in the academic community on whether the experimental system can represent the original system. High confidence is the basis for applying computational experiment methods. In addition to breakthroughs in experimental design and model verification technology, it is necessary to analyze successful application cases and summarize the valuable experience. Only in this way can the computational experiment method learn from other methods and transform into a powerful tool for understanding the operation laws of complex systems. Therefore, Section IV reviews typical computational experiment cases at three levels: thought experiment, mechanism exploration, and parallel optimization.

3) What is the future development trend of computational experiments?
The computational model is the core of the computational experiment method and the repository for applying different areas of knowledge. In recent years, waves of new technologies have emerged, such as digital twins [6], generative adversarial networks (GAN) [67], reinforcement learning [57], etc. These technologies have a considerable impact on the construction of computational models. Based on this, how to improve computational model design using these new technologies has become a key challenge for the sustainable development of computational experiment methods. Accordingly, Section V focuses on three aspects: (1) how to define the artificial society using big data, i.e., describing intelligence; (2) how to predict the future using computational experiments, i.e., predictive intelligence; (3) how to adapt the feedback intervention in the real world, i.e., guiding intelligence.
III. METHODOLOGICAL FRAMEWORK FOR COMPUTATIONAL EXPERIMENTS

As shown in Fig. 3, the methodological framework of computational experiments can be summarized as a five-step feedback loop: modeling of the artificial society, construction of an experimental system, design of experiments, analysis of experimental results, and verification of experimental models. Starting from the identification of research questions about the actual complex system, the loop ends in the optimization of intervention strategies. Based on this framework, this section elaborates on the key techniques used in each step of the computational experiment method.

Fig. 3 Methodological framework for computational experiments (a five-step feedback loop: (1) modeling of the artificial society on programming platforms such as Netlogo, Repast, and Mason; (2) construction of the experimental system with a digital thread; (3) design of computational experiments; (4) analysis of computational experiments, including process analysis, scenario analysis, and what-if analysis; (5) verification of experimental models, including structural verification and result verification)


A. Modeling of artificial society

Artificial society is a method of simulating human society in computers [68]. Compared with the general simulation model, the artificial society model can describe more complex systems, in which there is uncertainty in individual behavior as well as complex interaction between individuals. After defining the structure, elements, and attributes of the artificial society, it is necessary to map the complex system into a multi-agent system in information space, which focuses on the individual model, the environmental model, and the social model. The individual model is an adaptive behavior mechanism that describes the individual agent; the environmental model is a description of the social attributes of the individual agent; the social model is a description of the mechanism of system evolution. The details are as follows:

1) Individual model
An agent in the artificial society is an individual with independent ability, which corresponds to a biological individual or biological group in real society. The individual agent model is a container for applying multi-disciplinary knowledge, which can be customized according to application problems, including agent structure, learning ability, interactive mechanism, etc. Individual agents in artificial societies can adopt homogeneous or heterogeneous structures. As shown in Fig. 4, the typical structure of an individual Agent consists of four parts: perception, decision, reaction, and optimization [69]. The information control flow in the Agent structure connects the parts into a unit. The following is a formal expression of the Agent structure, described by a set of properties related to time t:

Agent = <R, S_t, E_t, Y_t, V_t, N>   (1)

wherein R represents the features of the Agent that do not change with time, such as its identification; S_t represents the features of the Agent that change with time, such as its role; E_t is the collection of external events that the Agent perceives and that stimulate its state and behavior; Y_t is the decision-making mechanism adopted by the Agent in the process of sensing external stimuli and interacting with other agents; V_t is the collection of Agent behaviors, including all behaviors taken spontaneously or stimulated by external events; N is the set of constraints to which the Agent is bound, including the environment, other agents, and the mission objectives.

Fig. 4 Structure model of an individual Agent (perception, decision, behavior, and optimization connected by the information control flow)

In the artificial society, the individual Agent is neither unconscious nor lacking in initiative. The learning process of each agent is an important driving force for the evolution of the system. During operation, the Agent interacts with the environment to experience problem solving, update the rule base, and influence the decision-making mechanism. According to the strength of individual consciousness (or rationality), individual learning models can be classified into three categories: (1) non-conscious learning, including reinforcement learning [70] and parameterized learning automata [71]; (2) learning by imitation [72]; (3) belief-based learning, including fictitious play [73], random dynamic learning [74], Bayesian rational learning [75], etc.
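To make Eq. (1) concrete, the sketch below encodes the six-tuple as a small Python class and wires perception and decision into one update step. It is a minimal illustration, not the paper's implementation; all names (Agent, perceive, step) and the callable decision mechanism are assumptions introduced for the example.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Agent:
    """Minimal encoding of Eq. (1): Agent = <R, S_t, E_t, Y_t, V_t, N>."""
    R: Dict[str, Any]                # time-invariant features (e.g., identification)
    S: Dict[str, Any]                # time-varying features (e.g., current role)
    N: Dict[str, Any]                # constraints: environment, peers, mission objectives
    Y: Callable[..., Any] = None     # decision-making mechanism Y_t
    E: List[Any] = field(default_factory=list)   # perceived external events E_t
    V: List[Any] = field(default_factory=list)   # record of behaviors taken V_t

    def perceive(self, events):
        """Collect external events that stimulate state and behavior."""
        self.E.extend(events)

    def step(self, t):
        """One update: decide on perceived events, record the behavior, clear E_t."""
        action = self.Y(self.S, self.E, self.N) if self.Y else None
        if action is not None:
            self.V.append((t, action))
        self.E.clear()
        return action
```

A concrete Y_t could come from any of the three learning categories above, for example a reinforcement-learning policy or an imitation rule that copies the behavior of a better-performing neighbor.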
2) Environmental model
In the computational experiment, the environment model is the mapping of the actual physical environment in the computer, which is the activity place that the Agent relies on. According to the modeling method, artificial society systems can be divided into physical modeling and grid modeling. The physical artificial society abstracts various environmental elements of real society (such as buildings, road traffic, and climate conditions) into physical models. At present, many typical artificial society systems are implemented in this way, such as EpiSimS [76]. The grid artificial society does not focus on specific environmental elements, but on modeling the environmental space: discrete grids are used to describe the existence of physical space and the properties of the environment, as in the Sugarscape model [45].

Fig. 5 Abstract hierarchy of environmental models (individuals, factories, companies, and universities aggregated into population, organization, and community networks over a grid-based space-time model)

Because of the different sizes of artificial societies, the model granularity of environment elements differs. The granularity of the environmental model is relatively coarse in large-scale artificial society scenes. For example, in the analysis of global or urban-level spread of disease, traffic networks, such as aviation networks, are generally treated as simple abstract networks. In a small-scale artificial society, a fine-grained model may be established for the natural environment, including buildings and transportation. For larger artificial society scenes, GIS is usually used to build geospatial models. For small-scale artificial society scenes, visual scenes can be established with 2D or 3D display technology. At the same time, a coordinate system is established to determine positions in geographical space (Fig. 5).
Because of the limitation of actual conditions, the initial setting of the environmental model can only be statistical characteristic data. Therefore, it is necessary to study generation algorithms for artificial society initialization data, including the statistical characteristics of Agents (total number, sex ratio, age distribution, etc.), the geographical distribution of Agents, demographic and social relation attributes, the statistical characteristics of environmental entities (total number, type, population accommodated, etc.), the geographical distribution of environmental entities, and so on. The basic idea of environmental modeling is to reconstruct the specific characteristics of each individual in the group from the statistical features of population data. In the process, two aspects of consistency need to be met: (1) the number of Agents generated is statistically consistent with the real world; (2) the internal logical structure (i.e., the associated relations) of the generated Agents is consistent with the real world.
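In practice, this amounts to sampling individuals from observed marginal statistics and then checking the generated population against them. A minimal sketch, assuming for illustration that only the sex ratio and a coarse age distribution are matched (the marginals, tolerance, and function names are invented for the example):

```python
import random

def generate_population(n, sex_ratio=0.51, age_weights=None, seed=0):
    """Sample n synthetic agents whose marginals match given statistics."""
    rng = random.Random(seed)
    age_bins = ["0-14", "15-64", "65+"]
    age_weights = age_weights or [0.18, 0.65, 0.17]
    population = []
    for i in range(n):
        population.append({
            "id": i,
            "sex": "F" if rng.random() < sex_ratio else "M",
            "age_group": rng.choices(age_bins, weights=age_weights)[0],
        })
    return population

def check_consistency(population, sex_ratio, tol=0.02):
    """Consistency aspect (1): generated counts match real-world statistics."""
    share_f = sum(p["sex"] == "F" for p in population) / len(population)
    return abs(share_f - sex_ratio) <= tol

pop = generate_population(100_000)
assert check_consistency(pop, 0.51)
```

Checking consistency aspect (2), the relational structure (e.g., household and workplace ties), would require comparing the generated network statistics against survey data in the same fashion.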
3) Social model
The social model describes the cyclical mechanism of the artificial society, including the “interaction” rules between Agents, between environments, and between Agent and environment. These rules can be either a mapping of real social rules or artificial hypothetical rules. As shown in Fig. 6, the evolution order of an artificial society is defined at three levels using a bottom-up framework. The bottom layer is the individual evolution space, which is used to simulate the phenomenon of genetic evolution in social systems; the middle layer is the organization evolution space, in which individuals can enhance their abilities through imitation and observation; the top layer is the social evolution space, which is used to simulate the phenomenon of accelerated evolution of society promoted by social influence. The social influence established by drawing excellent knowledge from the bottom layer can in turn guide the evolution of the bottom-level individuals [58].

Fig. 6 The social modeling (SLE) framework (social level: feedback and balance; organizational level: imitation and diffusion; individual level: variation and selection)

This analytical framework is also characteristic of complex system theory, given the nesting and causality of evolution mechanisms between layers. The three layers and their interactions constitute a complete and abstract circular analysis structure. Each layer focuses on a different dimension of the artificial society evolution process, including individuals, organizations, clusters, and countries. Relevant modeling techniques can be selected according to specific requirements. The implementation logic of SLE is shown in Table 2.

TABLE 2 The implementation logic of SLE

Definition. Agent: used to represent individuals in an artificial society.

Step 1: Individual evolution. Variation is the generating mechanism of diversity and the source of social system evolution; if there is no mutation and innovation, there is no evolution. The Agent's variation rules can be set according to various evolutionary algorithms in the model framework. Some are designed to imitate the evolutionary functions of biological systems, such as the artificial neural network (ANN) [77], the genetic algorithm (GA) [78], evolutionary strategies (ES) [79], etc.

Step 2: Organizational evolution. The selection mechanism is a mechanism of decreasing diversity. It evaluates the adaptability of individuals by some criteria, selects the evolution units with high adaptability, and eliminates those with low adaptability. In the model framework, the evolution rules of the organization can be set according to the evolution characteristics of biological communities, such as the ant colony optimization algorithm (ACO) [37], particle swarm optimization (PSO) [80], the artificial bee colony algorithm (ABC) [81], etc.

Step 3: Social emergence. After fierce competition, some elites will emerge from the group. Other individuals can improve their ability to survive in the ecosystem by imitating and learning their behaviors. This stage is the social evolution stage. There are three typical diffusion and evolution models: the contagion model [82], the social threshold model [83], and the social learning model [84].

Step 4: Second-order emergence. Social space, in turn, acts on individual space, by which macro phenomena can affect micro-individuals. In order to simulate the phenomenon that culture can accelerate the evolution of individuals, feedback rules are designed in the model framework. The selection mechanism of the intervention strategy affects the variation level of an individual and of the whole system.

Step 5: Next cycle. Over time, some elites may fall behind, and new individuals with stronger capabilities will become the new elites. Finally, the evolutionary equilibrium of the whole system breaks and transitions into the next cycle.
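Read procedurally, the five steps in Table 2 form one generation of an evolutionary loop. The sketch below is a minimal illustration under strong simplifications: each agent is reduced to a single scalar “ability”, variation is Gaussian, and the social-level feedback rule is invented for the example.

```python
import random

def sle_generation(abilities, elite_frac=0.1, sigma=0.1, imitation_rate=0.5,
                   rng=random):
    """One SLE cycle: variation -> selection -> imitation -> feedback."""
    # Step 1: individual evolution -- random variation creates diversity
    abilities = [a + rng.gauss(0.0, sigma) for a in abilities]
    # Step 2: organizational evolution -- rank by adaptability, keep the elites
    n_elite = max(1, int(elite_frac * len(abilities)))
    elites = sorted(abilities, reverse=True)[:n_elite]
    # Step 3: social emergence -- the rest imitate the elites' behavior
    mean_elite = sum(elites) / n_elite
    abilities = [a + imitation_rate * (mean_elite - a) for a in abilities]
    # Step 4: second-order emergence -- the social level feeds back on variation
    sigma *= 0.95 if mean_elite > 0 else 1.05   # illustrative feedback rule
    # Step 5: next cycle -- hand the updated population to the next generation
    return abilities, sigma
```

Iterating sle_generation and logging the ability distribution per cycle is enough to observe the emergence-and-turnover pattern that Step 5 describes.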

B. Construction of an experimental system

By using the bottom-up method, the computational experiment system can simulate the whole complex system with a decentralized micro-intelligence model. First of all, it is necessary to simulate the microscopic behavior of intelligent entities, to realize the complex phenomena formed by the interaction of simple elements, and then to explore the macroscopic law of the whole system through experiment design. The computational experiment system focuses on how to integrate various models with a digital thread to support the implementation of the artificial society and the identification of intervention strategies.

1) System module
There are two development methods for the artificial society model: the first is self-programming, which achieves greater freedom of modeling; the second is the adoption of specific platforms, which achieves greater development efficiency. At present, the development of artificial society models is still in its infancy, and development efficiency matters more than modeling freedom. Generally, researchers generate framework code with a development platform and manually prepare the code of specific functions to reduce the programming workload. If the platform is trusted, the code it automatically generates is also trusted. It has become a mainstream trend to develop artificial society models using platforms such as Swarm [39], Repast [40], Mason [17], and Netlogo [42].

As shown in Fig. 7, the development architecture of the artificial society can be divided into three modules: preprocessing, plan execution, and output. In the preprocessing module, the status values of agents, the external status values, and the instructions provided by the virtual environment are collected at the same time. In the plan execution module, the system selects the knowledge rules in the knowledge base and the models in the model base, calculates and executes the behaviors of the various Agents in the simulation environment, and outputs the calculation results to the behavior selector to set the Agent sub-targets for the next moment. Based on the behavior selection results and external environment data, the Agent behavior learning rules are used to configure Agent behavior, and the behavior selection parameters can be modified by feedback. Finally, the internal state values of the Agents are updated, and the behavior state information is passed to the virtual environment in the simulation module. In the output module, the dynamic update of the virtual environment and agents is realized to enter the loop calculation at the next moment.

Fig. 7 System module of the computational experiment system (preprocessing, plan execution, and output)
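Read as pseudocode, the three modules form one iteration of a simulation loop. A minimal sketch, reusing the illustrative Agent class from Section III-A; the dict-based environment and the function name are assumptions introduced for the example.

```python
def run_experiment(agents, environment, steps):
    """Three-module loop: preprocessing -> plan execution -> output."""
    history = []
    for t in range(steps):
        # Preprocessing: collect agent status values and external inputs
        events = environment.get("events", [])
        for agent in agents:
            agent.perceive(events)
        # Plan execution: each agent decides and acts through its rules
        actions = [agent.step(t) for agent in agents]
        # Output: update the virtual environment and enter the next loop
        environment["last_actions"] = actions
        history.append((t, actions))
    return history
```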
2) Digital thread
It is important to realize the uplink/downlink data interaction between the computational experiment system and the real world. Only in this way can the computational experiment have the application value of continuous improvement. The large amount of available data is gradually becoming the original driving force of computational experiments: it plays an important role both in the process of model abstraction and simplification and in the process of model correction and verification. Based on this, we introduce the concept of the “digital thread” [85] from the industrial Internet to describe the whole life-cycle process of the computational experiment method; it provides a virtual representation of the key elements (digital twins) and the correlations between them. This can be used to trace experimental phenomena forward or backward, thus assisting strategy evaluation, impact analysis, defect backtracking, and so on. From the perspective of knowledge engineering, the construction of an artificial society is to organize and encode the knowledge of the real system so that it can accept and process the data input of the real world.

The construction of artificial societies is a complex multi-stage, multi-factor, and multi-product process. From the top-down perspective, artificial society modeling includes a conceptual model, a mathematical logic model, and a simulation model, which have mutually dependent and constraining relationships. In the concept layer, the system is modeled according to physical concepts; system state variables with physical meaning are described by physical quantities, such as force, velocity, and power. The modeling of the logical layer is closely related to the concept layer and mainly builds the relationships between the various state variables. In a continuous system, these relations are usually represented by partial differential equations or ordinary differential equations. Such equations cannot be applied to complex systems, so they need to be converted into numerical models that can be adopted. In the simulation layer, after completing the transformation from the upper model (logical layer) to the lower model (numerical calculation model), the computational experiment system can be considered to have the capability to support the calculation of the conceptual model and the logical model. As shown in Fig. 8, the digital thread can be seen as a bridge connecting the different models, which aims to show the system evolution history and special state transitions during the experimental cycle.
Fig. 8 Digital thread of the computational experiment (realistic data collection, sorting, analysis, and fusion feed physical data, logical data, and operating data through model setting, scene update, and system evolution until run termination)
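Viewed as a data structure, a digital thread is essentially an append-only, queryable log linking model versions, parameters, and run outputs so that experimental phenomena can be traced in both directions. The sketch below is illustrative only; the record schema and field names are assumptions, not the paper's design.

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Optional

@dataclass
class ThreadRecord:
    run_id: str
    model_layer: str                  # "conceptual" | "logical" | "simulation"
    model_version: str
    parameters: Dict[str, Any]
    outputs: Dict[str, Any]
    parent_run: Optional[str] = None  # link enabling forward/backward tracing

class DigitalThread:
    """Append-only log of experiment runs and the links between them."""
    def __init__(self):
        self.records: List[ThreadRecord] = []

    def append(self, record: ThreadRecord):
        self.records.append(record)

    def trace_back(self, run_id: str) -> List[ThreadRecord]:
        """Follow parent links from a run back to its origin (defect backtracking)."""
        by_id = {r.run_id: r for r in self.records}
        chain = []
        while run_id in by_id:
            chain.append(by_id[run_id])
            run_id = by_id[run_id].parent_run
        return chain
```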
3) Interruption mechanism
The running process of the computational experiment is the result of the interaction of internal and external factors, including the game between various individuals and the game between individuals and the environment. External environmental factors are objective and uncontrollable, mainly including the initial conditions and the external environment. Internal factors are controllable and adjustable, mainly including the organization form, cooperation strategy, and coordination mechanism between individuals. Without the intervention of external factors, the experimental system will evolve naturally, which can be used to analyze the role of the initial setting and internal mechanism in the system evolution. If external intervention is applied to the experimental system, it can be used to evaluate and optimize the interventions. In order to make the experimental system evolve in the expected direction, it is necessary to implement reasonable interventions. As shown in Fig. 9, a closed loop is formed by connecting the intervention strategies and the artificial society (including all kinds of agents directly and indirectly affected by the intervention). The effect of an intervention strategy can be tested by adjusting the initial settings of the external input and the artificial society model.

According to the intervention scale, intervention strategies can be divided into three categories: (1) The intervention strategies are loaded into the Agent model of the artificial society (AIL, Agent in the Loop). In the process of simulation, they may affect the characteristics and behavior rules of individual agents. This mode is often applied to driverless strategies by connecting real controllers and virtual controlled objects [86, 87]. (2) The intervention strategies are loaded into the organization model of the artificial society (OIL, Organization in the Loop). In the process of simulation, they may affect the interaction mode and learning rules of the group. (3) The intervention strategies are loaded into the social model of the artificial society (SIL, Society in the Loop). In the process of simulation, they may affect the diffusion mode and equilibrium state of society. The artificial society will respond to the intervention and output the final results. Intervention makers compare their goals and preferences with the experiment results to judge the feasibility of the intervention. After repeated trial and error, iteration, and refinement, a consensus on how to intervene can be reached.

Fig. 9 Interruption mechanism of the computational experiment (external input and intervention strategies act on individual, group, and organizational behavior in the artificial society; social emergence feeds back to influence individual behavior)
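The closed loop in Fig. 9 can be read as: apply an intervention, run the experiment, score the emergent outcome against the goal, and refine. A minimal sketch, reusing the illustrative run_experiment from above; the intervention object with apply and refine methods and the scoring function are assumptions for the example.

```python
def evaluate_intervention(make_society, intervention, score, steps=100, rounds=10):
    """Closed-loop test: intervene -> run -> compare outcome with the goal."""
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        agents, environment = make_society()        # fresh artificial society
        intervention.apply(agents, environment)     # AIL / OIL / SIL loading
        history = run_experiment(agents, environment, steps)
        s = score(history)                          # goal vs. emergent result
        if s > best_score:
            best, best_score = intervention, s
        intervention = intervention.refine(s)       # trial-and-error iteration
    return best, best_score
```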
C. Design of experimental program

With the increasing complexity of research objects, the number of influencing factors and their combinations is huge. Moreover, there may be associations between variables. In a computational experiment with many factors, if every combination of these factors were tested and observed, the number of experiments would grow exponentially, which is not feasible at large scale. Only through reasonable experiment design can ideal experimental results be obtained by the most rapid and economical method. This section describes how to design the computational experiment reasonably, including design principles, design selection, and numerical generation.
1) Design principles
The design of the computational experiment can be represented by the model shown in Fig. 10. Experimental processes can often be visualized as a combination of operations, models, methods, people, and other resources, which transforms multiple inputs (usually a combination) into an output with one or more observable response variables. Wherein, x1, x2, ..., xm are the inputs of the artificial society system; y1, y2, ..., yn are the outputs of the artificial society system; u1, u2, ..., up are controllable factors or decisions; v1, v2, ..., vq are uncontrollable factors or events. The purposes of computational experiments are: (1) to determine, through the computational experiment, the collection of factors that influence the system output; (2) to determine the most effective controllable factors ui so as to bring the output closer to the ideal level; (3) to determine the collection of controllable factors ui so that the uncontrollable factors or events vi affect the system as little as possible.

Fig. 10 Design model of the computational experiment (controllable factors u1, ..., up and uncontrollable factors v1, ..., vq act on the agent-based artificial society, which transforms inputs x1, ..., xm into outputs y1, ..., yn through its Agent, environment, and social models)

Because of the difference between physical experiments and computational experiments, many assumptions and boundary conditions of physical experiments are not satisfied in computational experiments. In physical experiments, it is usually assumed that the errors are independent and identically distributed, and often normally distributed. However, these assumptions are frequently not satisfied in computational experiments. Generally, the independence of errors can be guaranteed by different pseudo-random sequences; however, it is still difficult to guarantee the identical distribution of experiment errors. Therefore, many classical design methods of physical experiments cannot be directly applied to computational experiments. For example, fractional factorial design requires the assumption that no interaction effect, or only a low-order interaction effect, exists between factors; for complex computational models, satisfying this assumption is difficult.

Using scientific methods to design experiments is a prerequisite for effective experiments. The statistical design of experiments makes it convenient to collect data suitable for statistical analysis and to draw effective and objective conclusions. When the problem involves data affected by experimental errors, only statistical methods are objective analysis methods. As far as computational experiments are concerned, the experiment design needs to follow the three basic principles of physical experiments: randomization, replication, and blocking [88].
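The replication and randomization principles translate directly into code: each design point is run several times, each replicate with a distinct pseudo-random seed, so that experiment errors are independent across runs. A minimal sketch (simulate stands in for one run of the experimental system; all names are illustrative):

```python
import random
import statistics

def replicate(simulate, design_point, replications=30, base_seed=1000):
    """Replication with an independent pseudo-random sequence per run."""
    results = []
    for i in range(replications):
        rng = random.Random(base_seed + i)   # distinct seed -> independent errors
        results.append(simulate(design_point, rng))
    return statistics.mean(results), statistics.stdev(results)
```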
Because of the difference between physical experiments and of level and scope: Once the designer factors are determined,
computational experiments, many assumptions and boundary the researcher must choose the range of these factors and their
conditions in physical experiments are not satisfied in specific levels, and must consider how to control them on the
computational experiments. In physical experiments, it is desired values and how to measure them.
usually assumed that the errors are independent and equally If the interaction between different factors exists, the
distributed, and even conform to the normal distribution. appropriate way to deal with multiple factors is a factorial
However, these assumptions are often not satisfied in experiment [88]. This experiment strategy is that all factors
computational experiments. Generally, the independence of change together, not one at a time. Assuming that only two
errors can be guaranteed by different pseudo-random factors are considered and each factor has two levels, 22
sequences. However, it is still difficult to guarantee the rounds of factorial experiments can help the researcher to
identical distribution of experiment errors. Therefore, many study the individual effects of each factor and determine
classical design methods of the physical experiment cannot be whether the factor has interaction. Generally, if there are k
directly applied to computational experiments. For example, it factors and each factor has two levels, 2𝑘 rounds of factorial
is necessary to assume that no interaction effect exists between experiments should be carried out. The important
factors or only a low-order interaction effect in partial factorial characteristic of the factorial design is that experimental data
design; however, for complex computational models, can be used efficiently. In general, if there are 5 or more
satisfying this assumption is difficult. factors, there is usually no need to test all possible
Using scientific methods to design experiments is a combinations of factor levels. The fractional factorial
prerequisite for effective experiments. The statistical design of experiment is the deformation of the basic factorial design,
the experiment is very convenient to collect data suitable for and only one subset of all combinations needs to be tested.
statistical analysis and draw effective and objective 3) Numerical generation
conclusions. When the problem involves data affected by Simulation data will be continuously generated during the
experimental errors, only statistical methods are objective operation of the computational experiment. It becomes a key
analysis methods. As far as computational experiments are problem to be solved to decipher what kind of numerical
concerned, the experiment design needs to follow three basic generation strategy can make the data set as close as possible
principles of physical experiments: Randomization, to the real situation. Synthetic data sets allow researchers to
Replication, and Blocking [88].
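To make the replication and randomization principles concrete, the sketch below repeats each treatment level under an independent pseudo-random sequence and randomizes the run order. It is a minimal Python illustration; run_artificial_society, the treatment levels, and the seeds are hypothetical stand-ins rather than part of any cited system.

```python
import random
import statistics

def run_artificial_society(u1, seed):
    """Stand-in for one run of an agent-based model; returns a response variable."""
    rng = random.Random(seed)               # an independent pseudo-random sequence
    return 2.0 * u1 + rng.gauss(0.0, 1.0)   # hypothetical response plus noise

treatments = [0.0, 0.5, 1.0]                # levels of one controllable factor u1
seeds = [11, 23, 37, 41, 53]                # replication: one fresh stream per run

runs = [(u, s) for u in treatments for s in seeds]
random.Random(0).shuffle(runs)              # randomization of the run order

results = {u: [] for u in treatments}
for u, s in runs:
    results[u].append(run_artificial_society(u, s))

for u in treatments:
    print(f"u1={u:.1f}: mean={statistics.mean(results[u]):.2f} "
          f"sd={statistics.stdev(results[u]):.2f}")
```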
2) Design selection
There are many feasible design schemes for one computational experiment. To select the most appropriate one, a cause-and-effect diagram (also called a fishbone chart) is often used as a tool for organizing information. As shown in Fig.11, the effects or response variables of interest are drawn on the fish spine, and the potential causes or design factors are arranged on a string of ribs. On this basis, the experimental design is selected: the experiment purpose, the number of experiment repetitions, the appropriate order of the experiment, whether to divide the runs into groups, and whether other randomization restrictions are involved are all defined.
Fig. 11 Design selection model of the computational experiment: candidate design factors in categories A to D (a1…a3, b1…b3, c1…c3, d1…d3) point at the response variable on the fish spine, and the resulting candidate solutions (a1b1c1d1, a1b1c1d2, …, a3b3c3d3) are screened to obtain the chosen solution
The whole process consists of the following steps: (1) Identification and presentation of problems: in order to use statistical methods to design and analyze experiments, researchers need a clear understanding of what the problem is, how to collect the data, and how to analyze them. (2) Selection of response variables: when selecting the response variables, researchers should make sure that each variable can provide useful information about the process under study. (3) Selection of factors: the many factors involved in the computational experiment are specified, including design factors, held-constant factors, and nuisance factors. (4) Selection of levels and scope: once the design factors are determined, the researcher must choose their ranges and specific levels, and must consider how to control them at the desired values and how to measure them.
If interactions between factors exist, the appropriate way to deal with multiple factors is a factorial experiment [88]. Under this strategy, all factors are changed together rather than one at a time. Assuming that only two factors are considered and each has two levels, 2^2 runs of the factorial experiment help the researcher study the individual effect of each factor and determine whether the factors interact. Generally, if there are k factors and each has two levels, 2^k runs should be carried out. An important characteristic of the factorial design is that it uses the experimental data efficiently. When there are 5 or more factors, there is usually no need to test all possible combinations of factor levels. The fractional factorial experiment is a variant of the basic factorial design in which only a subset of all combinations is tested.
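As a minimal illustration of the 2^k strategy, the following Python sketch enumerates a full two-level factorial design and a half-fraction of it; the factors u1, u2, u3 are hypothetical, and the XOR generator is just one possible defining relation.

```python
from itertools import product

factors = {"u1": [0, 1], "u2": [0, 1], "u3": [0, 1]}   # k = 3 two-level factors

# Full 2^k factorial design: every combination of the factor levels.
full_design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(full_design))        # 2**3 = 8 runs

# A half-fraction (2^(k-1) runs): keep only the combinations satisfying one
# defining relation, here u3 = u1 XOR u2, so some interactions become aliased.
half_fraction = [r for r in full_design if r["u3"] == (r["u1"] ^ r["u2"])]
print(len(half_fraction))      # 4 runs
```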
3) Numerical generation
Simulation data are continuously generated during the operation of the computational experiment. A key problem is to determine what kind of numerical generation strategy makes the data set as close as possible to the real situation. Synthetic data sets allow researchers to test intervention strategies on series equivalent to thousands of historical years and to prevent overfitting to a particular observed data set. Theoretically, these synthetic data sets can be generated via two approaches: resampling and Monte Carlo. Fig.12 summarizes how these approaches branch out and relate to each other [89].
Resampling consists of generating new (unobserved) data sets by sampling repeatedly from the observed data set. Resampling can be deterministic or random. Instances of deterministic resampling include the jackknife (leave-one-out) and cross-validation (one-fold-out). Instances of random resampling include subsampling (random sampling without replacement) and the bootstrap (random sampling with replacement). Subsampling relies on weaker assumptions; however, it is impractical when the observed data set has a limited size. The bootstrap can generate samples as large as the observed data set by drawing individual observations or blocks of them (hence preserving the serial dependence of the observations). The effectiveness of a bootstrap depends on the independence of the random samples, a requirement inherited from the central limit theorem. To make the random draws as independent as possible, the sequential bootstrap adjusts online the probability of drawing observations similar to those already sampled.
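A minimal NumPy sketch of these resampling variants follows; the "observed" series, its size, and the number of replicas are illustrative assumptions, not values taken from [89].

```python
import numpy as np

rng = np.random.default_rng(7)
observed = rng.normal(1.0, 2.0, size=200)      # stand-in observed data set

# Jackknife (deterministic): the n leave-one-out replicas.
jackknife = [np.delete(observed, i) for i in range(observed.size)]

# Subsampling (random, without replacement): replicas smaller than the data.
subsamples = [rng.choice(observed, size=100, replace=False) for _ in range(500)]

# Bootstrap (random, with replacement): replicas as large as the data set.
bootstraps = [rng.choice(observed, size=observed.size, replace=True)
              for _ in range(500)]

# Any statistic now has a whole distribution instead of a single value.
print(np.std([b.mean() for b in bootstraps]))  # bootstrap standard error of the mean
```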
Fig. 12 Methods of generating data sets for computational experiments: resampling branches into deterministic methods (permutation test over all combinations, jackknife leave-one-out, combinatorial cross-validation leave-k-out, cross-validation one-fold-out) and random methods (subsample without repetition, bootstrap with repetition, sequential bootstrap adjusting probabilities online), while Monte Carlo branches into parametric methods (regime-switching time series) and non-parametric methods (variational autoencoders, generative adversarial networks)
The second approach to generating synthetic data sets is Monte Carlo. A Monte Carlo experiment randomly samples new (unobserved) data sets from an estimated population or data-generating process, rather than from an observed data set (as a bootstrap would do). Monte Carlo experiments can be parametric or nonparametric. An instance of a parametric Monte Carlo is a regime-switching time series model. This parametric approach allows researchers to match the statistical properties of the observed data set, which are then replicated in the unobserved data set. One caveat of parametric Monte Carlo is that the data-generating process may be more complex than a finite set of algebraic functions can replicate. When that is the case, nonparametric Monte Carlo experiments may be of help, such as variational autoencoders, self-organizing maps, or generative adversarial networks [67]. These methods can be understood as non-parametric, non-linear estimators of latent variables (similar to a nonlinear PCA).
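The parametric branch can be illustrated with a toy two-regime process; the means, volatilities, and switching probability below are hand-picked assumptions, whereas in practice they would be estimated from the observed data set.

```python
import numpy as np

def regime_switching_series(n, seed):
    """Two-regime Gaussian series whose regime follows a simple Markov chain."""
    rng = np.random.default_rng(seed)
    params = {0: (0.05, 0.5), 1: (-0.10, 1.5)}   # (mean, volatility) per regime
    regime, out = 0, np.empty(n)
    for t in range(n):
        if rng.random() > 0.95:                  # switch regimes with prob. 5%
            regime = 1 - regime
        out[t] = rng.normal(*params[regime])
    return out

# Unlike a bootstrap, each sample is drawn from the estimated process itself,
# not from the observed data set.
synthetic_panel = [regime_switching_series(1000, seed=s) for s in range(100)]
print(len(synthetic_panel), synthetic_panel[0][:3])
```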
D. Analysis of experimental results
As Judea Pearl has argued, statistical relevance cannot replace causality in research [5]. In contrast with the correlation dilemma of deep learning, a carefully designed computational experiment provides a means to discover the causal laws between factors and events. The main advantage lies in the algorithmization of counterfactuals. In computational experiments, a variety of controlled trials can be set up concurrently to change the parameters of artificial societies purposefully, or to impose specific interventions. By performing a large number of repeated experiments and comparing their results, the deep causes of complex phenomena can be explored. Generally, computational experiments can be analyzed at three levels: process analysis, scenario analysis, and what-if analysis.
1) Process analysis
A main objective of computational experiments is to provide a scientific explanation of the emerging complex phenomena based on the initial or driving conditions. A typical feature of a scientific theory is that it must contain causal narratives that relate antecedents (inducements) and consequences (influences). Computational experiment theory explains the complex phenomena that emerge in the experiment system according to causality, characterized by an empirical model that accounts for the observed facts or available data. Theoretically, the event function can represent the exact causal logic in detail and explain how an integrated event is produced.
However, how do event functions exist? How do different event functions explain the occurrence of integrated events? How is the probability of an integrated event determined according to the event function? To answer these and similar questions, the logic of system complexity must be viewed at the micro-level. The sequential tree in Fig.13 provides a first-order representation of Simon's theory [90]. The main conclusion is that the various results in the sample space Ω can be produced by combination.
Event E occurs in a specific environment at the initial time point r0. At the subsequent time point r1, individual agents can decide on an adaptive change according to bounded rationality to meet the challenges of the current environment, i.e. event D. If they do not make a change (event ~D), the result is E. If they decide to adapt, they can choose whether to actually implement the decisions and adaptation measures (event A). If they fail to execute them (event ~A), the result E* is produced, and E ≈ E* can be demonstrated.
If the individual has taken certain actions, the feedback may be valid or invalid at time r3. If the feedback is valid (event W), the result is C, with higher complexity. If the feedback is invalid (event ~W), the environmental impacts still need to be tolerated when the result fails (result E**). S(E**) > S(E) can be demonstrated, where S(X) represents the influence or unavailability related to event X.
Fig. 13 Causality logic tree upon the occurrence of system complexity: from E, branch ~D keeps E; branch D followed by ~A yields E*; branches D and A followed by ~W yield E**; branches D, A, and W yield the complex result C
Under the sequential logic model [91], the occurrence of the system complexity C is explained as a branching process that experiences multiple games and resolutions in the sample space Ω, i.e. as one of the possible events. As shown in Fig.13, the emergence of system complexity C requires at least four necessary sequential conditions, whose joint probability is significantly low. The occurrence probability of the other results (the failure results E, E*, and E**) is relatively high. In the conditional logic mode, when the system complexity occurs as an integrated event, its background can be studied according to the current situation, and the necessary or sufficient conditions can be explained. Generally, the system complexity C occurs in a dual causation mode: it depends either on the simultaneous occurrence of necessary conditions (events X1, X2, X3, …, XN connected via AND) or on the occurrence of any sufficient condition (events Z1, Z2, Z3, …, ZM connected via OR).
2) Scenario analysis
The basic idea of scenario analysis is to acknowledge that the future outcome of an event and the way to achieve this outcome are both uncertain. It focuses on the prediction of possible scenarios so that researchers can respond to them effectively [92]. In essence, it completes the description of all possible future trends and includes three parts: (1) describe the process of event occurrence and development, and analyze the dynamic behavior of event development; (2) within the complex "event group", sort out the key elements and event chains through induction and organization; (3) obtain the dynamics and relevance of the temporal scenarios embodied in this process, and establish a logical structure for the same events.
The traditional means of scenario analysis depend on human imagination and reasoning. Computational experiments introduce quantitative methods for scenario analysis and construct the "scenarios" of the occurrence, development, transformation, and evolution of events. Under the action of situational factors, the transition probabilities between scenarios are affected, and selective correlated transitions may occur between scenarios. From the perspective of its development and evolution, a scenario can be described as follows:

S = <Tr, Des, Actor, In, Action, Out, ES>    (2)

wherein Tr indicates the trigger condition for scenario initiation; Des is a brief linguistic description of the scenario; Actor denotes the participants involved in the scenario; In is the data or conditions obtained from the external environment, which change with that environment; Action is the behavior or action executed after the scenario starts; Out is the data generated during the execution of the scenario behaviors; and ES is the alternative scenario used when an abnormal situation occurs. The scenario options constructed with formula (2) can cover the various uncertainties of the system, thereby establishing the relationship between the evolution of the artificial society and the actions of multiple agents.
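One possible in-code rendering of formula (2) is sketched below using Python dataclasses; the field names, the run logic, and the flood example are illustrative choices rather than a prescribed interface.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Scenario:
    """One scenario option S = <Tr, Des, Actor, In, Action, Out, ES> of formula (2)."""
    trigger: Callable[[dict], bool]                   # Tr: initiation condition
    description: str                                  # Des: linguistic description
    actors: list[str]                                 # Actor: participants
    inputs: dict = field(default_factory=dict)        # In: environmental data
    action: Optional[Callable[[dict], dict]] = None   # Action: executed behavior
    outputs: dict = field(default_factory=dict)       # Out: generated data
    fallback: Optional["Scenario"] = None             # ES: alternative scenario

    def run(self, env: dict) -> dict:
        """Execute the scenario if triggered; otherwise defer to the ES fallback."""
        if self.trigger(env) and self.action is not None:
            self.outputs = self.action({**env, **self.inputs})
            return self.outputs
        return self.fallback.run(env) if self.fallback else {}

calm = Scenario(lambda e: True, "business as usual", ["residents"],
                action=lambda e: {"evacuated": 0})
flood = Scenario(lambda e: e.get("water_level", 0) > 2.0,
                 "evacuate low-lying districts", ["residents", "government"],
                 action=lambda e: {"evacuated": 1200}, fallback=calm)
print(flood.run({"water_level": 2.5}))   # -> {'evacuated': 1200}
```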
produces these features? In order to find the causality between
variables, the what-if analysis generally adopts the three proof must be provided to ensure that the model hypothesis
conditions proposed by John Stuart Mill (British philosopher): represents the laws of a real system. High confidence is the
(1) Association: Two events must be co-varying or changing basis on which computational experiments work. Only when
together; (2) Direction of Causation: An event must occur breakthroughs are made in the model verification can the
before another event; (3) Confounding variable. If this computational experiment method become a powerful tool to
computational experiment (usually a series of experiments and understand the operation laws of complex systems. At present,
research processes) proves that the three conditions are various methods can be used to verify the validity of the
satisfied at the same time, then it can be said that the causality computational model [95]. As shown in Fig.16, the experiment
is credible and valid under certain conditions (experimental verification can be divided into three parts according to the
conditions). experiment operation process: structural verification, data
In the computational experiment, what-if analysis mainly verification, and result verification.
depends on mechanism-based behavior analysis technology. 1. Identify the 2. Build a 3. Setting 4. Solve the
problem model parameters
“Mechanisms” refer to the generation rules of agent behavior problem

and the interaction rules between agents, as well as the internal


mechanism of specific macro results. As shown in Fig.15, Precondition
+ Model
assumptions + Model
parameters = Model behavior

mechanism-based behavior analysis is the process of alignment


Represent? Consistent?
analyzing the causality chain. The emphasis of the analysis is
the agent, especially the orientation to the action of the agent:
(1) Analyze the influence of hypothesis 1 on the orientation to
Real law
+ Real system
parameters = Real system
behavior

action, including beliefs, desires, and intention (BDI), etc. [93],


i.e. the analysis of the causal chain between hypothesis 1 and Structure verification Data verification Result verification
Credibility Indirect calibration
the orientation to action (arrow 2); (2) Analyze the mechanism Concept model
Amount of data method
verification
between the orientation to the action of agents and other Calculation model
Timeliness Werker-brenner method
Correlation Historical data method
environmental factors, i.e. the generative model of agent verification

behaviors (arrow 3); (3) These rules will be manifested as the Fig. 16 Classification of experiment verification
behavior of the agent in the agent-based model, and then the 1) Structural verification
deduction results of hypothesis phenomenon will emerge The conceptual model is an abstraction of the real system
(arrow 4). and can be implemented through programming to obtain a
Scenario 1 Scenario 2 practically executable computational model. Structural
1 verification is to verify the process of model construction so
Macro that the model can reflect the internal structural characteristics
of the real system, and ensure that the intermediate process of
2 4 model behavior is correct. In other words, the hypothetical
model is a sufficient condition to produce the behavior result.
The modeling process of artificial society includes conceptual
Agent
3 model design and computational model implementation.
the orientation to Action Therefore, the structural verification also includes conceptual
action model verification and computational model verification:
Fig. 15 What-if analysis model based on mechanism 1.1) Conceptual model verification
The mechanism-based behavior analysis is an analysis of In this stage, the consistency of conceptual models with the
the “influence-action-emergence” mechanism from macro research purposes, given assumptions, existing theories, and
phenomena to micro behaviors to macro phenomena, rather evidence is examined to determine that the simplified
than a description of the coverage law between macro processing of the model will not seriously affect the credibility
phenomena based on statistics or intelligent algorithms. As a of the model and the understanding of the important
result, it can penetrate every aspect of the evolution of characteristics of the real system. At the same time, the
artificial society and restore the mechanism of intervention. conceptual model should satisfy inherent integrity, consistency,
Therefore, the what-if analysis is the core of the computational and correctness. Whether the model is reasonable can be
experiment. To further quantify the causal relationship judged three types of methods: expert judgment, theoretical
between the two nodes, the Probabilistic Causal Model (PCM) comparison, and empirical data fitting, which are often
[94] can be used to formalize and define relevant operations. qualitative and not strictly proven. By comparing with other
By adopting a probabilistic causality model and do-calculation existing models, researchers can greatly improve the maturity
axiom system, the distribution of another variable can be of conceptual models in the following aspects: (1) Describe
concluded when one variable is intervened. In this way, the the application scope and defects of the model; (2) The
causal relationship between two variables can be quantified. mainstream theory uses the same assumptions, or other
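A minimal sketch of such an algorithmized counterfactual is given below, assuming a toy society function with a single controllable factor (subsidy); pairing intervened and baseline runs on identical pseudo-random streams is one simple way to isolate the effect of the intervention.

```python
import random
import statistics

def society(seed, subsidy):
    """Stand-in artificial society: returns a macro outcome (mean wealth).
    `subsidy` is the controllable factor we intervene on."""
    rng = random.Random(seed)
    return statistics.mean(rng.lognormvariate(0.0, 1.0) + subsidy
                           for _ in range(1000))

seeds = range(200)  # identical pseudo-random streams in both experimental arms

baseline = [society(s, subsidy=0.0) for s in seeds]     # do(subsidy = 0)
intervened = [society(s, subsidy=0.5) for s in seeds]   # do(subsidy = 0.5)

# Pairing runs on the same streams holds everything else fixed, so the
# paired difference estimates the causal effect of the intervention alone.
effects = [b - a for a, b in zip(baseline, intervened)]
print(f"estimated causal effect: {statistics.mean(effects):.3f} "
      f"+/- {statistics.stdev(effects):.3f}")
```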
E. Verification of experimental results
Experimental verification examines whether the model accurately maps the prototype system in its application domain, to ensure that the model meets the application purpose. To enhance confidence in the model, credible proof must be provided that the model hypothesis represents the laws of the real system. High confidence is the basis on which computational experiments work. Only when breakthroughs are made in model verification can the computational experiment method become a powerful tool for understanding the operating laws of complex systems. At present, various methods can be used to verify the validity of the computational model [95]. As shown in Fig.16, experiment verification can be divided into three parts according to the experiment operation process: structural verification, data verification, and result verification.
Fig. 16 Classification of experiment verification: along the pipeline (1. identify the problem, 2. build a model, 3. set parameters, 4. solve the problem), preconditions plus model assumptions plus model parameters yield the model behavior, while the real laws plus the real system parameters yield the real system behavior; structure verification (conceptual and computational model verification) asks whether the assumptions represent the real laws, data verification (credibility, amount of data, timeliness, correlation) checks the parameters, and result verification (indirect calibration, Werker-Brenner, and historical data methods) asks whether the two behaviors are consistent
1) Structural verification
The conceptual model is an abstraction of the real system and can be implemented through programming to obtain a practically executable computational model. Structural verification verifies the process of model construction, so that the model reflects the internal structural characteristics of the real system and the intermediate steps of model behavior are correct. In other words, the hypothetical model is a sufficient condition to produce the behavioral result. The modeling process of an artificial society includes conceptual model design and computational model implementation. Therefore, structural verification also includes conceptual model verification and computational model verification.
1.1) Conceptual model verification
In this stage, the consistency of the conceptual model with the research purposes, given assumptions, existing theories, and evidence is examined, to determine that the simplifications in the model will not seriously affect its credibility or the understanding of the important characteristics of the real system. At the same time, the conceptual model should satisfy inherent integrity, consistency, and correctness. Whether the model is reasonable can be judged by three types of methods: expert judgment, theoretical comparison, and empirical data fitting, which are often qualitative and not strictly proven. By comparing with other existing models, researchers can greatly improve the maturity of conceptual models in the following ways: (1) describe the application scope and defects of the model; (2) check whether mainstream theory uses the same assumptions, or whether other researchers have made similar studies and assumptions; (3) develop and analyze a set of models with the same core assumptions but different additional assumptions; (4) establish an equation model with characteristics similar to the behavior of the original agent model.
1.2) Computational model verification
In this stage, the main task is to check whether the algorithm, coding, operating conditions, etc., are consistent with the conceptual model. The conceptual model is transformed through programming into a computational model that can run on a specific computer system. This process involves a variety of factors and hard-to-detect errors. Because the agent model is discrete, and significant differences exist between programming languages and computer systems, the programming details have an important impact on the model results. At present, the main methods used to verify the computational model are: (1) check the consistency between the computational model and the conceptual model; (2) run the same code on different computers, different operating systems, and different pseudo-random number generators; (3) implement the conceptual model in different programming languages and compare the results; (4) test the behavior of the computational model in extreme cases. Code checks are involved in any case; in principle, code checking can detect all programming errors.
2) Data verification
In computational experiments, the model behavior obtained from the model assumptions and model parameters is usually regarded as a prior condition for experiment research. Building the model is effectively equivalent to proposing a hypothesis, and the builder believes that this hypothesis represents the law of the real system. By performing computational experiments and observing the resulting model behavior, a causal explanation of the real system is made. The quality of the input data directly affects the credibility of the final experimental results; poor data input can even lead to great deviations from the actual situation. It is necessary to guarantee data availability from the source to build an effective model. Many researchers [96,97] have noted the problem of data quality, including consistency, precision, completeness, timeliness, and entity identity. For computational experiments, the following factors need to be considered:
Credibility: Data collection is an artificial process of collecting data according to propositions. It therefore carries a tendency from the beginning, and the collection process cannot be guaranteed to be completely true. Since there is no absolute "truth", data that strictly follow scientific principles and are less subjective are closer to "true" and more credible. In general, government and authoritative data are more reliable.
Data size: In statistics, the sample size is directly related to the accuracy of the inferred estimates. For a given population, the larger the sample size, the smaller the estimation error; conversely, the smaller the sample size, the greater the estimation error. In estimating the parameters of the computational model for the same population (as in a region), a larger data size can reduce the probability of random errors and the problem of underrepresentation.
Timeliness: The operation of a complex system is a dynamic process. Data trends may fluctuate sharply as scenarios and external interventions change. Therefore, the timeliness of data is an important factor affecting the experimental results, especially for projects with strong timeliness requirements. There is an inevitable problem in that it takes time to collect and acquire data. In particular, in the early stages of some emergencies, the problem of data delay is very prominent. For the computational model, the better the timeliness of the data, the better the experimental effect.
Relevance: In many cases, the available data resources contain no data on what we want to study. In other words, there is a lack of direct data related to the research problems. Hence, researchers have to find alternative variables for research. The correlation between the alternative variables and the direct variables will affect the experimental results.
3) Result verification
Result verification is mainly used to evaluate whether the experimental results are consistent with the real system behavior. Real data can be collected from the real world, whereas the data output by the model are generated by the model's operating mechanism. The data generated by the model are compared with the real data to infer whether the model "appropriately" reflects reality, thereby achieving the purpose of result verification. In recent years, the mainstream result verification methods include the indirect calibration method [98], the Werker-Brenner method [99], and the historical data method [100]. However, the artificial society model is only a finite sample of the population after a finite run. The model behavior is unstable and parameter-sensitive. Therefore, compared with the real system, the result verification of the artificial society model faces the following problems:
Q1: How to deal with parameter combination explosion and sensitivity analysis?
A large number of model parameters leads to combinatorial explosion. Even if the experimental results are consistent with the real data, it is difficult to determine the decisive factors affecting the experimental data. At the same time, the model results are not stable and are largely influenced by the initial conditions. At present, researchers basically adopt two methods to reduce the parameter space. The first method: at the beginning of modeling, the parameter space is reduced by means of existing data, facts, corresponding models and theories, expert experience, and knowledge. This method can make full use of existing knowledge, but it easily introduces unverified prior information. The second method is to perform sensitivity analysis on the formal features after the model is established and run, to find and further analyze the sensitive parameters, thereby reducing the parameter space. This idea is very convincing when combined with experimental design, statistical methods, and the Monte Carlo method, but it is difficult to implement in many cases. The two methods are often combined to reduce the space from both directions.
Q2: How to compare model output and empirical data?
The result of running the model is a probability distribution over multiple samples, while the real-world data constitute only one sample. It is difficult to test a probability distribution against a single sample; the model output can only be considered "consistent" with the real data to some extent. At present, this is mainly confirmed subjectively, and the results are largely restricted by the subjective consciousness of the analysts. This judgment must consider not only the consistency of the model output but also the consistency of input and output. That is, the model needs to be calibrated before the consistency of the results is considered.
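One simple way to operationalize this comparison, sketched below under illustrative assumptions (a stand-in model and an arbitrary 5% tail criterion), is to locate the single empirical sample within the distribution of simulated outputs.

```python
import numpy as np

def model_output(seed):
    """Stand-in for one full model run; returns the macro statistic of interest."""
    return np.random.default_rng(seed).normal(10.0, 2.0)

simulated = np.array([model_output(s) for s in range(2000)])  # a distribution
real_observation = 13.1                                       # a single sample

# Where does the single empirical sample fall within the simulated distribution?
percentile = (simulated < real_observation).mean()
print(f"real data at the {100 * percentile:.1f}th percentile of model runs")

# Two-sided plausibility check: call the model "consistent" with reality only
# if the empirical sample does not land in the extreme tails (5% here).
print("consistent:", 0.025 < percentile < 0.975)
```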
Q3: How to validate results that have not occurred in reality?
Under normal circumstances, the computational model is verified using existing empirical data. Sometimes, however, the results of computational experiments have not yet been produced in the real world. In this case, we need to make full use of domain knowledge to find out what causes the phenomenon and whether it represents a trend in the real world, and then decide whether the model is effective. The conclusions are largely restricted by the range of knowledge and the problem-analysis capabilities of the analysts.

IV. APPLICATION CASES
Computational experiment, as a new scientific research methodology, can be applied in three aspects: thought experiment, mechanism exploration, and parallel optimization. This section gives some classic examples to explain the concept of the computational experiment method.

A. Thought experiment
The thought experiment does not model a specific scenario or a specific real social system; it pursues the abstract logical relationships that describe general social systems. The hope is to explore and quantitatively analyze the unpredictable consequences of certain hypotheses about human society through experiments. This kind of research avoids the mapping problem from real society to artificial society. It can often give metaphors, enlightenment, and qualitative trends, rather than precise answers to complex questions.
1) Sugarscape model
In 1996, Joshua Epstein and Robert Axtell came up with an "artificial society" model, Sugarscape, which can be used to carry out relevant experiments in economics and other social sciences [45]. As shown in Fig.17, the Sugarscape model is a closed world composed of grid cells: the red dots indicate agents that can only walk in this world, while the yellow cells indicate social wealth (sugar), with the depth of the yellow indicating the local concentration of the sugar distribution. Each agent has three properties: a range of vision r, a metabolism of resources v, and a sugar quantity s. Agents walk according to the following rules: (1) observe all cells within the range of vision r and take the cells with the largest sugar content as candidate targets; (2) if there is more than one cell with the maximum sugar content, choose the nearest one; (3) move to that cell; (4) collect the sugar of the cell and update the corresponding variable s.
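A compact sketch of these movement rules follows, assuming a toy sugar landscape with slow regrowth on a wrap-around grid and vision restricted to the four lattice directions; all parameters are illustrative rather than Epstein and Axtell's original settings.

```python
import random

SIZE = 50
random.seed(1)
# Toy sugar landscape: two "hills" of sugar on a grid with wrap-around edges.
capacity = [[max(0, 8 - (abs(x - 12) + abs(y - 12)) // 3,
                 8 - (abs(x - 38) + abs(y - 38)) // 3) for y in range(SIZE)]
            for x in range(SIZE)]
sugar = [row[:] for row in capacity]
agents = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE),
           "vision": random.randint(1, 5), "metabolism": random.randint(1, 3),
           "sugar": 10} for _ in range(100)]

def step(agent):
    """Rules (1)-(4): look along the four lattice directions, pick the richest
    (nearest on ties) cell, move there, collect its sugar, pay metabolism."""
    x, y, v = agent["x"], agent["y"], agent["vision"]
    cells = [((x + d) % SIZE, y) for d in range(-v, v + 1)]
    cells += [(x, (y + d) % SIZE) for d in range(-v, v + 1)]
    best = max(sugar[cx][cy] for cx, cy in cells)
    nearest = min((c for c in cells if sugar[c[0]][c[1]] == best),
                  key=lambda c: abs(c[0] - x) + abs(c[1] - y))
    agent["x"], agent["y"] = nearest
    agent["sugar"] += sugar[nearest[0]][nearest[1]] - agent["metabolism"]
    sugar[nearest[0]][nearest[1]] = 0

for _ in range(100):
    agents = [a for a in agents if a["sugar"] > 0]   # sugar-deficient agents die
    for a in agents:
        step(a)
    sugar = [[min(capacity[x][y], sugar[x][y] + 1)   # sugar slowly grows back
              for y in range(SIZE)] for x in range(SIZE)]

wealth = sorted(a["sugar"] for a in agents)
print(f"survivors: {len(wealth)}, poorest: {wealth[:5]}, richest: {wealth[-5:]}")
```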
As shown in Fig.17, as the experiment progresses, some agents obtain more sugar because of their individual abilities, the resource advantages of their locations, and so on; sugar-deficient individuals die, so most agents end up gathered in the two regions with higher sugar concentration. Ultimately, a few agents hold a large amount of sugar while most have only a little, which verifies the famous Matthew effect in social science. Further, by adding a variety of resources (such as spices) to the Sugarscape artificial society, one can study how individuals form markets through resource exchange in real society. Furthermore, social phenomena such as environmental change, genetic inheritance, trade exchanges, and the market mechanism can be studied by changing the different rules that the agents follow.
Fig. 17 Distribution of sugar and agents in Sugarscape
2) Schelling model
The Schelling model (Schelling segregation model) was proposed by the American economist Thomas Schelling. The model describes the influence of the preference for similar neighbors on spatial segregation and reveals the principles behind racial and income segregation [60]. The model contains three elements: the agents that generate behavior, the behavior rules followed by the agents, and the macro results caused by the agents' behaviors. As shown in Fig.18, the whole city is seen as a giant chessboard in the experiment. Each small cell on the board can house an agent or stay idle. There are two kinds of agents (red and blue) of equal number, and about 10% of the cells are blank (green). Each agent has a lowest tolerance threshold: once the number of neighbors of the same kind falls below the threshold, it migrates to an unoccupied location that meets its residence requirements. The behavior rules of the agents are: (1) count the number of same-kind neighbors; (2) if this number is greater than or equal to the agent's preference value, the agent is satisfied and stops moving; otherwise, it continues moving; (3) the agent finds the closest blank cell that satisfies its preference value and moves there.
As shown in Fig.18, as time passes, the degree of segregation between the different kinds of agents eventually becomes very pronounced. By changing the experimental settings (such as agent tolerance values, the share of blank spaces, etc.), it can be found that raising tolerance alone is not enough to avoid segregation, because almost every agent pursues neighbors of its own kind. This phenomenon triggers reflection on social issues: To what extent will racism transform a whole society into such a segregated pattern? How can segregation be reversed? Researchers have extended the model and applied it to the study of ethnic conflicts in different regions, obtaining useful insights [101,102].
Fig. 18 Segregation experiment based on the Schelling model
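The sketch below implements a simplified version of the Schelling rules above (Moore neighborhood, one common threshold, relocation to a random vacant cell instead of the nearest satisfactory one); it reproduces the qualitative segregation outcome rather than the exact original protocol.

```python
import random

SIZE, THRESHOLD = 30, 0.5
random.seed(2)
cells = ["R", "B"] * 405 + [None] * 90          # two equal groups, ~10% blanks
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def satisfied(x, y):
    """Rules (1)-(2): count same-kind neighbors (Moore neighborhood, wrapped);
    the agent is satisfied if their share meets its preference threshold."""
    kind = grid[x][y]
    neigh = [grid[(x + dx) % SIZE][(y + dy) % SIZE]
             for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    occupied = [n for n in neigh if n is not None]
    return not occupied or sum(n == kind for n in occupied) / len(occupied) >= THRESHOLD

for _ in range(30_000):  # rule (3), simplified: move an unhappy agent to a random blank
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    if grid[x][y] is not None and not satisfied(x, y):
        bx, by = random.choice([(i, j) for i in range(SIZE)
                                for j in range(SIZE) if grid[i][j] is None])
        grid[bx][by], grid[x][y] = grid[x][y], None

share = [satisfied(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y]]
print(f"satisfied agents after relaxation: {sum(share) / len(share):.0%}")
```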


3) Landscape theory
Robert Axelrod (University of Michigan) proposed landscape theory to study alliances in human society and predict their equilibria [103]. The theory treats social units as particles in the physical world: they may be drawn together by some attraction or pushed apart by exclusion. This pull-and-push environment brings the particles together in different combinations. For each combination, the sum of the attraction and exclusion effects over the group members is called the "total energy". The energy landscape is a graph that represents all possible combinations and their corresponding energies; the more stable the combination, the lower its energy. Axelrod and his colleagues simulated the possible coalition camps on the eve of the Second World War using the landscape model. The interaction between two countries is judged by six factors: ethnic situation, religious belief, territorial disputes, ideology, economic situation, and history. A simple weighting factor is set for each pair: +1 when the two countries converge, and -1 when they conflict or antagonize each other.
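A toy rendering of the energy landscape follows, under a simplified energy definition (same-camp pairs contribute the negative of their pairwise propensity, opposite-camp pairs the positive) and with hypothetical propensity values rather than Axelrod's six-factor data.

```python
from itertools import product

countries = ["UK", "France", "USSR", "Germany", "Italy", "Poland"]
# Pairwise propensities: +1 where the factors mostly converge, -1 where they
# conflict (hypothetical values for illustration only).
p = {("UK", "France"): 1, ("UK", "Germany"): -1, ("UK", "USSR"): 1,
     ("UK", "Poland"): 1, ("Italy", "UK"): -1, ("France", "Germany"): -1,
     ("France", "USSR"): 1, ("France", "Poland"): 1, ("Italy", "France"): -1,
     ("Germany", "Italy"): 1, ("USSR", "Germany"): -1, ("Poland", "Germany"): -1,
     ("USSR", "Poland"): -1, ("USSR", "Italy"): -1, ("Poland", "Italy"): -1}

def propensity(a, b):
    return p.get((a, b), p.get((b, a), 0))

def energy(side_of):
    """Total energy: same-camp pairs add -propensity, opposite-camp pairs add
    +propensity; lower energy means a more stable coalition structure."""
    total = 0
    for i, a in enumerate(countries):
        for b in countries[i + 1:]:
            sign = -1 if side_of[a] == side_of[b] else 1
            total += sign * propensity(a, b)
    return total

# Brute-force search of the landscape over all two-camp partitions.
best = min((dict(zip(countries, camps)) for camps in product([0, 1], repeat=6)),
           key=energy)
print("lowest-energy split:", best, "energy:", energy(best))
```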
Fig. 19 Landscape map of the coalition camps on the eve of the Second World War: total energy is plotted against coalition structure, with two basins, one close to the actual Allied-versus-Axis split of the European countries and one pitting the Soviet Union against all the others
Fig.19 is the landscape map of all combinations. There are two basin structures (local minima of the energy) on the map: a deeper basin and a slightly shallower one. The deeper basin is close to the actual alliance of the Allied and Axis powers; only Portugal and Poland are classified into the "wrong camp". The other basin predicts a very different scenario: the Soviet Union confronting all the other European countries. According to the energy landscape, which "valley bottom" Europe would eventually slide into is determined by the location of the starting point. Landscape theory shows that, for the special case of alliances, it is possible to carry out practical calculations and obtain credible values and results similar to reality. The study of history becomes more solid through the computational experiment of the landscape model. In this way, we gain a certain degree of quantitative language with which to discuss the world situation. This model can help us identify the influencing factors in the historical development process, make the scope of research clearer, and achieve the purpose of prediction.

B. Mechanism exploration
Mechanism exploration is the modeling of a real social system, with emphasis on a high match between the artificial society and the real social system. It is expected to solve problems that exist, or may come to exist, in the real society. Such applications often encounter the validity verification problem of mapping from real society to artificial society. At present, researchers hope to solve such problems through big data acquisition and processing technology, so that the experimental results can solve real problems.
1) Artificial stock market
The stock market is clearly a complex system, which shows complexity and unpredictability with its multiple elements and hierarchy. To demonstrate and understand how investors make portfolio choices, the Santa Fe Institute (SFI) proposed the "Artificial Stock Market" (ASM) model in 1987, which understood the complexity of the economic system from a new perspective [46]. In this model, the hypothesis of a completely rational and all-knowing "economic man" is discarded and replaced by a boundedly rational man who can learn, adapt to the environment, and make decisions using inductive methods. The economic system is seen as a complex system of interacting individuals. In this virtual market, several trading agents make predictions by observing the changing stock prices in the digital world and decide whether to buy stocks, and in what quantity, so as to maximize their utility. All agents develop their expectations independently and have the ability to learn; they adjust their decisions according to the success or failure of their predictions. In turn, the decisions of all traders determine the competitive state of demand and supply, and thus the price of the stocks. As shown in Fig.20, the whole stock market constitutes a self-closed computing system. The running time of the model is discrete and the experiment can be carried on indefinitely. ASM provides a good metaphor for the real stock market and can be used to examine the efficient market theory, i.e. whether different expectations eventually evolve into the same rational expectations. The more complex behaviors that occur at the agent and organization levels can support the views of real investors and explain real phenomena of financial markets. Subsequently, several researchers have improved the ASM model to conduct deeper analyses, such as linking micro-level investor behavior with macro-level stock market dynamics or studying the effectiveness of price limits [104,105].
Fig. 20 Agent and stock market interaction structure: each agent combines rules, a genetic algorithm, a parameter set, and a utility function, while the environment provides the stock market status and history, the dividend flow, and the analysts' parameter sets
2) Political mechanisms
Claudio Cioffi-Revilla (George Mason University) proposed the MASON RebeLand model to analyze how the state of a polity or its political stability is affected by internal (endogenous) or environmental (exogenous) processes, such as changing conditions in its economy, demography, culture,
natural environment (ecosystem), climate, or combined socio-natural pressures [17]. Figure 21 shows a "map" view of RebeLand as a polity or country situated in a natural environment. The country itself consists of an island surrounded by water, thereby omitting external interactions with neighboring countries. The RebeLand environment consists of terrain and a simple weather system that simulates climate dynamics (e.g. prolonged droughts, climate variability, and so on). The RebeLand political component comprises a society and a system of government for dealing with public issues through public policies. Initially, the government formulates policies to address issues that affect society. Later in the simulation, under some conditions, the society can also generate insurgents that interact with government forces, as well as other emergent phenomena.
Fig.21. Map of RebeLand Island showing its main natural and social features. Cities are shown in green, natural resources in yellow, and rebel and government forces in red and blue, respectively. Roads and provincial boundaries are in gray and yellow, respectively. Physical topography is shown on a green-tone scale and the island is surrounded by ocean.
This study demonstrates three political states: stability, instability, and failure. An important result pertains to the general overall resiliency of a polity: experiencing polity failure normally requires not just one or a few stressful issues but a large set of them in combination (such as inflation plus insurgency plus environmental stress). Such results may not immediately yield actionable policy recommendations in terms of specific programs, but at the very least they may offer new insights of value to both researchers and policy analysts. Such studies have also been used in electoral forecasting and applied political science [106].
3) Epidemic spread
With the prevalence of sudden infectious diseases (e.g. SARS, H1N1, COVID-19), the transmission modeling of infectious diseases has become a hot research topic. However, the spread of infectious diseases is a dynamic and uncertain process; there are not only multiple influencing factors but also many unknowns and uncertainties, which puts much pressure on predictive research. The artificial society model is used to integrate basic data, model methods, and analysis results in the computational experiment, which involves three aspects: the generation of the epidemic situation (such as the number of initial infections, exposure rate, transmission rate, virus incubation period, mortality rate, recovery probability, etc.), the representation of spatial geographical features (such as urban type, transportation network, population density, temperature, weather, urban infrastructure, etc.), and the modeling of resources and governance capacity (such as medical resources, social organization, prevention and control measures, information transparency, etc.). Table 3 compares the characteristics of several artificial society systems related to infectious diseases. Different artificial society systems have their own characteristics in implementation, representation, and accuracy.
Table 3 Comparison of the characteristics of several artificial society systems related to infectious diseases

| Features | BioWar [15] | EpiSimS [76] | GSAM [107] | CovidSim [108] | ASSOCC [109] | SlsaR [110] |
| --- | --- | --- | --- | --- | --- | --- |
| Disease type | Droplet spread, physical contact spread | Smallpox, influenza | No specific disease (taking H1N1 as an example) | COVID-19, other respiratory viruses | COVID-19 | COVID-19 |
| Main application | Effect evaluation, strategy optimization | Effect evaluation, strategy optimization | Study on the spread and control of infectious diseases | Effect evaluation, strategy optimization | Assessing the costs and benefits of different intervention policies | Effect evaluation, trade-off policy |
| Simulated scale | Medium cities in USA | Medium cities in USA | Globe-scale | Country-scale | Country-scale | Country-scale |
| Simulation method | Multi-Agent | Multi-Agent | Multi-Agent | Geographical spatial unit | Multi-Agent | Multi-Agent |
| Development language | C++ | - | Java | C++ | NetLogo | R language and NetLogo |
| Visualization | No | Yes | Yes | Yes | Yes | Yes |
| Open source | No | No | No | Yes | Yes | Yes (available online) |
At present, computational experiments have become an important means to study the mass spread of infectious diseases, and they are mainly applied in three aspects: (1) Prejudgment of transmission trends. Developed transportation systems make it easier for major infectious diseases to spread on a large scale. When the epidemic has not yet broken out in certain areas, the analysis and judgment of its transmission trend are an important prerequisite for emergency preparedness. (2) Pre-assessment of impact. The outbreak and evolution of an epidemic in one region will directly harm human health, and will also have side effects on the social and economic environment. The quantitative assessment of its impact is an important basis for decisions on emergency reserves and intervention intensity. (3) Intervention strategy optimization. There are a variety of interventions for the emergency prevention and control of major infectious diseases, each with a different target and intensity. In practice, it is an important and difficult problem to make emergency decisions, that is, to choose reasonable intervention measures and form a combined intervention strategy that can control the epidemic while reducing the intervention cost.
C. Parallel optimization
The purpose of scientific research is to provide a causal hypothesis that can link existing facts together. If the artificial society model represents this law, the output behavior of the model can be effective in the real world. Parallel optimization, through the establishment of an artificial society model that has a homomorphic relationship with the real society, realizes the parallel execution of, and cyclic feedback between, the two, thereby supporting the management and control of the real complex system.
1) Island economy
Economic inequality is growing globally and is gaining attention because of its negative impact on economic opportunity, health, and social welfare. Governments can use tax policies to improve social outcomes; however, because of the coupling between tax and labor, taxes may reduce productivity. Therefore, how to reduce economic inequality while ensuring productivity remains an open problem. The lack of appropriate economic data and experiment opportunities makes it difficult to study such economic problems in practice. For this purpose, Salesforce proposed a study named "The AI Economist": economic simulation through AI agents can explore tax strategies that efficiently balance economic equality and productivity [111]. A two-level reinforcement learning framework with learning and adaptation functions is formed between the agents and a social planner. Fig.22 gives the core implementation framework of "The AI Economist", which emphasizes the game-theoretic co-evolution between the agents and the planner, i.e. how to use reinforcement learning to realize the joint optimization of agent behavior and taxation strategy.
Fig. 22 The core implementation framework of "The AI Economist": the planner model and the agent model each combine CNN, MLP, and LSTM modules; the planner observes the public state (tax rates, market behavior) and sets seven tax brackets (7x22 planner actions), while each agent observes public, semi-private, and private state (endowment, labor, skill) and chooses among 50 actions (move, build, trade)
Researchers have compared the operation results of the AI Economist with the free market (no taxation or redistribution), the United States federal tax plan, and tax strategies generated from the Saez framework [112]. The experiments show that the AI Economist can improve the trade-off between economic equality and productivity by 16% compared with the tax framework proposed by Emmanuel Saez. The framework can be optimized directly for any socio-economic goal; it does not use any prior knowledge or modeling assumptions and learns only from observable data. The Salesforce developers hope that the AI Economist can solve complex problems that traditional economic research cannot easily handle, and conduct objective research on the impact of policies on the real economy.
2) Virtual Taobao
Commodity search is the core trade on Taobao, one of the largest retail platforms. The business goal of Taobao is to increase sales by optimizing the strategy of displaying page views (PVs) to the customer. As the feedback signal from a customer depends on a sequence of PVs, it is reasonable to consider this a multi-step decision problem rather than a one-step supervised learning problem. The engine and the customers are each other's environments. Reinforcement learning (RL) solutions are good at learning sequential decisions and maximizing long-term rewards. One major barrier to directly applying RL in these scenarios is that current RL algorithms commonly require a large number of interactions with the environment, which incur high physical costs, such as real money, time from days to months, bad user experience, and even lives in medical tasks.
Fig. 23 Virtual Taobao architecture with reinforcement learning
To avoid these physical costs, the researchers employed a simulator (i.e., Virtual Taobao) for RL training, so that the policy can be trained offline in the simulator [113]. In this work, Virtual Taobao is designed by generating customers and generating interactions. As shown in Fig.23, the GAN-for-Simulating-Distribution (GAN-SD) approach is proposed to simulate diverse customers, including their requests, and the Multi-agent Adversarial Imitation Learning (MAIL) approach is proposed to generate interactions. After generating customers and interactions, Virtual Taobao is built. The experimental results disclose that Virtual Taobao successfully reconstructs properties that closely parallel the real environment. Virtual Taobao can be employed to train platform policies for maximizing revenue. Compared with the traditional supervised learning approach, the strategy trained in Virtual Taobao achieved more than a 2% improvement of revenue in the real environment.
3) Unmanned driving
In unmanned driving, the ability of unmanned vehicles to understand complex traffic scenarios and autonomously make driving decisions needs to be repeatedly tested and verified; this is one of the major challenges in the AI field. However, it is difficult to build real test scenarios (such as blizzards, rainstorms, typhoons, etc.) because of the high costs of creation, duplication, and iteration. Therefore, the training and testing of unmanned driving strategies in the virtual world have become a feasible technical choice: this is not only controllable and repeatable but also safe and effective. In order to achieve an autonomous driving test that is infinitely close to the real world, three levels of reduction are required: (1) geometric reduction: 3D scene simulation and sensor simulation are required to craft environments and test vehicle conditions that mirror the real world; (2) logic reduction: the decision-making and planning process of test vehicles should be simulated in the virtual world; (3) physical reduction: the vehicle control and body dynamics should be simulated. At the same time, the simulation platform should support high concurrency and realize the combination of vehicle responses in all scenarios. At present, parallel learning methods are increasingly used in virtual scene generation and intelligent testing for unmanned driving [10,11,87,88].
As shown in Fig.24, Tencent built a virtual city system parallel to the real physical world, based on a simulation platform, a high-precision map platform, and a data cloud platform. The virtual city contains not only static environmental information but also dynamic information (such as traffic and passenger flow). It can not only support the development and safety verification of self-driving, but also contribute to the construction of intelligent transportation and smart cities. In order to improve the utilization of road data and further enrich the test scenarios, a large amount of road data can be used to train the traffic-flow agents. As a result, traffic scenes with high realism and strong interaction can be generated for closed-loop simulation, higher testing efficiency, and lower collection cost. For example, if the tested self-driving car wants to overtake, agent AI may be used to control NPC (non-player character) vehicles to make avoidance or other game actions consistent with the real world.
Fig. 24 Scene demonstration of the Tencent unmanned driving testing system
V. PROBLEMS & CHALLENGES
Computational experiments can cultivate the human ability to observe and understand the world and to learn deep thinking. However, compared with their increasingly widespread application, computational experiments have not advanced correspondingly as a methodology and technology. The computational experiment method should therefore solve the following three questions to transition into the toolbox of mainstream researchers: 1) how to define the artificial society using big data, i.e. describing intelligence; 2) how to predict the future using computational experiments, i.e. predictive intelligence; 3) how to realize feedback intervention in the real world, i.e. guiding intelligence.

A. Q1: How to implement describing intelligence?
By abstracting and simplifying the complex behaviors and phenomena in real social systems, we can obtain an artificial society model which can run on a computer platform and satisfies logical rationality and correctness. This is the basis of computational experiments. Currently, there are two main methods of artificial society modeling. 1) Data-based modeling: the complex system is regarded as a black box, the focus is on the relationship between input and output, and the complex processes within the system are not modeled or simulated. In practice, complex systems are often replaced by statistical models based on data and intelligent algorithms, including graph or recurrent neural networks, support vector machines, fuzzy methods, etc. 2) Mechanism-based modeling: the complex system is understood and analyzed from an overall perspective. Following the principle of "simple consistency", the structure and function of each part of the system are designed and restored. When building a computational model according to the bottom-up principle, the accuracy and complexity of the underlying microscopic model play an important role in the evolution of the entire complex system.
However, complex systems are characterized by diverse elements, frequent changes, coupled relationships, and multi-level, multi-stage operations. Restricted by cognitive ability, traditional mechanism modeling struggles to describe the operation and evolution mechanisms of complex systems. Lacking internal structure and mechanism information for the process units, data-driven modeling relies heavily on the quantity and quality of the data samples, making it difficult to analyze and explain the system mechanism in depth. Therefore, how to construct an artificial society model for CPSS has become one of the promising research problems in this field. Mechanism analysis helps to identify the essential characteristics of the system and set up an effective model structure, while data-driven methods can automatically acquire the information and knowledge hidden in data. By coupling the rule model describing individual behavior with the statistical model, a hybrid model combining system mechanism analysis and big data analysis is established. It not only reduces the dependence on data but also makes up for the inability of traditional models to simulate individual behavior rules. In the future, there will be ample space to build virtual artificial society models with more realistic behavior.
It should be noted that, due to the large freedom of the agent model and the lack of a standardized core model, it is easy to over-promote and exaggerate experiment results, which has led to a lot of criticism. Therefore, in the process of progressive modeling of complex systems, we should note: 1) it is necessary to identify a simple initial model whose details can represent the core elements of the reference system; 2) the models should not be designed in an arbitrary sequence; each must provide incremental help toward the final model; 3) throughout the development process, model validation is necessary but needs to be moderate, because an intermediate-stage model is not yet mature and may be rejected for lack of sufficient empirical support; 4) it is necessary to determine a final model of the reference system, otherwise the improvement of the model will continue indefinitely.
strategies are accompanied by the complexities of uncertainty. Big data technology is a summary of facts based on the analysis of historical data; it is helpless for uncertain events that have never happened. Therefore, how to develop scientific and universally accepted intervention strategies for complex systems is a basic problem to be addressed by computational experiments.

With the help of artificial society, real-world issues can be accurately described, analyzed, and controlled in the virtual world, resulting in massive system simulation operating data. Based on this, we can use computational experiments to deduce various scenarios that have never occurred in reality, discover problems and defects in system operation, find out the reasons, and put forward practical solutions or suggestions. Various forms of analysis can be adopted for the computational experiment, including process analysis, scenario analysis, and what-if analysis, which correspond to the three levels of Pearl's theory of causation: association, intervention, and counterfactual reasoning. Process analysis focuses on individual interactions in time and space, especially key thresholds and key attributes. Scenario analysis provides comprehensive methods for experimental simulation analysis. What-if analysis can compare models using different rule sets, such as the difference between a Moore neighborhood and a standard von Neumann neighborhood.
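As a minimal what-if sketch (Python; the contagion rule, threshold, and grid size are illustrative assumptions, not taken from the text), the same model can be run under both neighborhood definitions and the outcomes compared:

```python
import numpy as np

# Two rule sets for the same cellular-automaton model (toroidal grid).
MOORE = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
VON_NEUMANN = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(grid, neighborhood, threshold=2):
    """Synchronous update: a cell activates if >= threshold neighbors are active."""
    n = grid.shape[0]
    new = grid.copy()
    for i in range(n):
        for j in range(n):
            active = sum(grid[(i + di) % n, (j + dj) % n] for di, dj in neighborhood)
            if active >= threshold:
                new[i, j] = 1
    return new

def run(neighborhood, steps=20, n=50, seed=0):
    rng = np.random.default_rng(seed)            # identical initial state for both runs
    grid = (rng.random((n, n)) < 0.05).astype(int)
    for _ in range(steps):
        grid = step(grid, neighborhood)
    return int(grid.sum())

# What-if comparison: only the neighborhood definition differs.
print("Moore:", run(MOORE), "von Neumann:", run(VON_NEUMANN))
```

Because everything except the neighborhood rule is held fixed (same seed, same threshold), any difference in the final activation count is attributable to the rule set alone, which is exactly the counterfactual contrast this form of analysis is after.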
The public policy simulation represented by the “Decision Theater” is a good example of this perspective [114]. By presenting “if..., then...” scenarios, counterfactual reasoning can be carried out on the consequences of different policies. Its specific workflow is as follows: 1) First, collect a large amount of historical data on the problems to be solved and organize it along different dimensions; 2) Second, the problem model is established by government decision-makers and experts, and the abstract expert knowledge is transformed into an acceptable scene language using computational experiments; 3) Third, real decision-making scenarios are simulated and visualized to provide multiple policy scenarios and result previews for policy makers; 4) Finally, in an agreed decision environment, the various interest groups choose a satisfactory solution to the policy issue. The utilization of water resources in the city of Phoenix is a typical application case [115].
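The scenario-preview step (step 3) amounts to running one model under several candidate policies and tabulating the outcomes. Below is a toy sketch in Python; the water-balance model, its parameters, and the scenario names are invented for illustration and are not taken from the Phoenix case [115]:

```python
def water_model(demand_growth, conservation_rate, years=30, annual_supply=100.0):
    """Toy water balance: returns the first year reserves run out, or None."""
    demand, reserve = 80.0, 500.0
    for year in range(1, years + 1):
        demand *= (1 + demand_growth) * (1 - conservation_rate)
        reserve += annual_supply - demand
        if reserve <= 0:
            return year
    return None

scenarios = {
    "status quo":          dict(demand_growth=0.03, conservation_rate=0.00),
    "mild conservation":   dict(demand_growth=0.03, conservation_rate=0.01),
    "strict conservation": dict(demand_growth=0.03, conservation_rate=0.03),
}

# "If this policy, then that outcome": one preview line per scenario.
for name, policy in scenarios.items():
    year = water_model(**policy)
    preview = f"shortfall in year {year}" if year else "no shortfall within 30 years"
    print(f"{name:20s} -> {preview}")
```

Each printed line is an "if..., then..." preview that the interest groups can weigh against one another in step 4.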
C. Q3: How to implement guiding intelligence?

In complex systems, individual activities are increasingly complex, diverse, and differentiated, whereas intervention tends to be simple, uniform, centralized control. Given the dynamics and uncertainties of the external environment, an intervention will trigger adaptive changes in some individuals according to their surrounding environment, which in turn causes the initial optimization method to fail. Therefore, the design of an intervention strategy is not a task accomplished in one stroke; instead, it needs to be constantly adjusted and optimized according to changes in the external environment. Computational experiments can build a bridge between the virtual world and the real world, eliminate infeasible strategies by counterfactual reasoning, and transfer the optimized strategies to the real space to guide the operation of the real system.

In the virtual world, extreme scenes that are rare in the real world can exist and occur. Based on the optimization theory of operations research under uncertain and multi-objective conditions, the intervention strategies are constantly trained and revised to obtain the expected optimization results. The aim is to intervene in the current development trajectory and running state from a future perspective, to find possible adverse effects, conflicts, and potential dangers, and to provide “reference”, “prediction”, and “guidance” for possible scenarios. Reinforcement learning is suitable for solving these problems: its core task is to learn a strategy function that maximizes a long-term benefit by establishing a mapping between observed values and output behaviors. The intervention strategy continues to evolve through machine learning methods in a feedback loop with the test scenario.
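A minimal tabular Q-learning sketch of this loop follows (Python); the `ArtificialSociety` environment, its states, and its reward are illustrative stand-ins, not a specific platform's API:

```python
import random
from collections import defaultdict

class ArtificialSociety:
    """Toy test scenario: state is a congestion level 0..4; action 1 intervenes."""
    def reset(self):
        self.state = 2
        return self.state

    def step(self, action):
        drift = random.choice([-1, 0, 1])                 # individual-level uncertainty
        push = -1 if action == 1 else 1                   # intervening damps congestion
        self.state = min(4, max(0, self.state + drift + push))
        reward = -self.state - (0.5 if action == 1 else 0.0)  # congestion + intervention cost
        return self.state, reward

env = ArtificialSociety()
Q = defaultdict(float)                                    # the strategy function being learned
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    s = env.reset()
    for _ in range(50):
        # Epsilon-greedy: mostly exploit the current strategy, sometimes explore.
        a = random.randint(0, 1) if random.random() < epsilon \
            else max((0, 1), key=lambda act: Q[(s, act)])
        s2, r = env.step(a)
        # Q-learning update: move toward reward plus discounted best next value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2

policy = {s: max((0, 1), key=lambda act: Q[(s, act)]) for s in range(5)}
print("learned mapping state -> intervene?:", policy)
```

The learned table is precisely the mapping from observed values to output behaviors described above, and the feedback loop with ever harder test scenarios is what keeps refining it.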
Further, a collaborative dynamic feedback control system may be constituted between computational experiments and the real system. 1) First, by analyzing the dependence between data and behavior, real-world problems are abstracted into the cognitive space. 2) Second, a model is established in the cognitive space for computational experiments, and the results obtained can dynamically guide or control strategy execution in the real world. 3) Third, through the game process between the intervention strategy and the real world, the action space with the greatest difference is explored to reduce the modeling error of the artificial system and the execution error of the intervention strategy. 4) Finally, the real-world execution results, in turn, continuously update the cognitive-space model in the form of dynamic data input, forming a cyclic feedback learning process. The system iterates in this manner of parallel execution until convergence. With the continuous integration of growing data, the training and simulation of the artificial society become more realistic and accurate; through interactive feedback with the real world, the system acquires self-growth and self-optimization abilities.
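A runnable toy of this virtual-real loop is sketched below (Python); the hidden linear "real system", the one-parameter cognitive model, and the convergence test are all assumptions made for illustration, not the paper's implementation:

```python
import random

class RealSystem:
    """Hidden ground truth: the real response to a strategy, observed with noise."""
    TRUE_EFFECT = 3.0
    def execute(self, strategy):
        return self.TRUE_EFFECT * strategy + random.gauss(0, 0.05)

class CognitiveModel:
    """Steps 1-2: the abstraction used to run computational experiments."""
    def __init__(self, effect_guess=1.0):
        self.effect = effect_guess
    def simulate(self, strategy):
        return self.effect * strategy
    def update(self, strategy, real_outcome, lr=0.5):
        # Step 4: shrink the modeling error using real execution results.
        self.effect += lr * (real_outcome - self.simulate(strategy)) / max(strategy, 1e-6)

real, model, target = RealSystem(), CognitiveModel(), 6.0
for round_ in range(20):
    strategy = target / model.effect                     # step 2: optimize in cognitive space
    real_outcome = real.execute(strategy)                # execute in the real world
    gap = abs(real_outcome - model.simulate(strategy))   # step 3 (simplified): virtual-real gap
    model.update(strategy, real_outcome)
    if gap < 0.1:                                        # parallel execution until convergence
        break

print(f"rounds={round_ + 1}, estimated effect={model.effect:.2f}, final strategy={strategy:.2f}")
```

Each round plays the virtual model against the real system, and the residual gap between the two is the signal that drives both the model update and the stopping rule.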
VI. CONCLUSION

In scientific research, induction is the discovery of patterns in empirical data and is widely used in opinion surveys and macro-law analysis; deduction defines a set of axioms and proves the conclusions that can be derived from them. Compared with deduction and induction, the computational experiment method can be regarded as a third methodology of scientific research. Like deduction, computational experiments start by defining a clear set of assumptions; but unlike deduction, computational experiments do not prove theorems. Computational experiments produce simulation data that can be analyzed by inductive methods; but unlike typical induction, the simulation data come from the emergence of rule operation rather than from direct measurements of the real world. The purpose of induction is to discover patterns from data, and the purpose of deduction is to discover the consequences of hypotheses. Compared with both, computational experiments can verify our intuition through systematic deduction, trial and error, and estimation in the virtual world.

This paper summarizes the present situation and challenges of computational experiment methods, including the conceptual origin, research framework, application cases, and future challenges. The framework of computational experiments mainly consists of the modeling of artificial society, the construction of the experiment system, experimental design, experimental analysis, and experimental verification. The application cases of computational experiments can be summarized into three categories: thought experiment, mechanism exploration, and parallel optimization, involving political, economic, commercial, and epidemiological dimensions, among others. The technical challenges of computational experiments center on how to combine them with emerging smart technologies, including describing intelligence, predictive intelligence, and guiding intelligence.

Computational experiments provide new perspectives for the analysis and study of complex systems. Although relevant research has made great progress, there are still many problems and challenges in experimental design, experimental analysis, and experimental verification: for example, whether the computational experiment results match the known system laws (correctness); whether the representation of the computational model is elegant, including rules, syntactic structure, and similar features (conciseness); and whether computational experiments can help transform the real world (effectiveness).

In essence, computational experiments are designed to solve one or more research problems defined by a realistic reference system. Theoretically, the representation form of computational experiments is relatively simple, while the real system is far more complex. If the degree of abstraction of the computational system is too high, it will be difficult to reflect the operating laws of the real world; if it is too low, the model will be too complex to build, and a series of problems (lack of data, insufficient resources and time, and lack of knowledge) will be encountered. Therefore, the development of computational experiments needs to seek a balance between the flexibility of modeling and the credibility of conclusions. In addition, computational experiments need to introduce more general and effective experimental mechanisms and analysis models to solve the problems of analysis, design, management, control, and synthesis of complex systems.
REFERENCES

[1] Zhou Y, Yu F R, Chen J, Kuo Y. Cyber-physical-social systems: a state-of-the-art survey, challenges, and opportunities. IEEE Communications Surveys & Tutorials, 2020, 22(1): 389-425.
[2] Wang F Y. The emergence of intelligent enterprises: from CPS to CPSS. IEEE Intelligent Systems, 2010, 25(4): 85-88.
[3] Prigogine I, Stengers I. Order Out of Chaos. New York, USA: Bantam Books, 1984.
[4] Xue X, Wang S F, Gui B, Hou Z W. A computational experiment-based evaluation method for context-aware services in a complicated environment. Information Sciences, 2016, 373: 269-286.
[5] Pearl J, Mackenzie D. The Book of Why: The New Science of Cause and Effect. New York: Basic Books, 2018.
[6] Boschert S, Rosen R. Digital twin - the simulation aspect. In: Mechatronic Futures. Cham: Springer, 2016: 59-74.
[7] Wang F Y. Intelligence 5.0: parallel intelligence in parallel age. Journal of Information, 2015, 34(6): 563-574.
[8] Xue X. Computational Experimental Methods of Complex Systems: Principles, Models, and Cases. Beijing: Science Press, 2020.
[9] Wang F Y. Computational experimental methods and complex system behavior analysis and decision evaluation. Journal of System Simulation, 2004, 16(5): 893-897.
[10] Li L, Wang X, Wang K. Parallel testing of vehicle intelligence via virtual-real interaction. Science Robotics, 2019, 4(28): eaaw4106.
[11] Li L, Huang W L, Liu Y. Intelligence testing for autonomous vehicles: a new approach. IEEE Transactions on Intelligent Vehicles, 2017, 1(2): 158-166.
[12] Hu X F, Li Z Q, Yang J Y, Si G Y, Pi L. Some key issues of war gaming & simulation. Journal of System Simulation, 2010, 22(3): 549-553.
[13] Tesfatsion L. Agent-based computational economics. ISU Economics Working Paper, 2003, 249(1): 2002-2016.
[14] Acevedo M F, Callicott J B, Monticino M. Models of natural and human dynamics in forest landscapes: cross-site and cross-cultural synthesis. Geoforum, 2008, 39(2): 846-866.
[15] Carley K M, Fridsma D B, Casman E. BioWar: scalable agent-based model of bioattacks. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 2006, 36(2): 252-265.
[16] Huang C Y, Sun C T, Hsieh J L, Lin H. Simulating SARS: small-world epidemiological modeling and public health policy assessments. Journal of Artificial Societies and Social Simulation, 2004, 7(4): 2.
[17] Cioffi-Revilla C, Rouleau M. MASON RebeLand: an agent-based model of politics, environment, and insurgency. International Studies Review, 2010, 12(1): 31-52.
[18] Waldrop M M. Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Touchstone, 1992.
[19] Bertalanffy L V. General System Theory. New York, USA: Braziller, 1968.
[20] Bertalanffy L V. Problems of Life: An Evaluation of Modern Biological and Scientific Thought. New York: Harper, 1952.
[21] Wiener N. Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press, 1961.
[22] Shannon C E. A mathematical theory of communication. Bell System Technical Journal, 1948, 27(3): 379-423.
[23] Prigogine I. Structure, dissipation and life. In: Theoretical Physics and Biology, 1969: 23-52.
[24] Thom R. Structural Stability and Morphogenesis. Boca Raton: CRC Press, 2018.
[25] Haken H. Synergetics. IEEE Circuits & Devices Magazine, 1977, 28(9): 412-414.
[26] Li T Y, Yorke J A. Period three implies chaos. The American Mathematical Monthly, 1975, 82(10): 985-992.
[27] Mandelbrot B B. Fractals: form, chance and dimension. The Mathematical Gazette, 1978, 62(420): 130-132.
[28] Bak P, Tang C, Wiesenfeld K. Self-organized criticality: an explanation of the 1/f noise. Physical Review Letters, 1987, 59(4): 381-384.
[29] Holland J H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge, MA: MIT Press, 1992.
[30] Watts D J, Strogatz S H. Collective dynamics of 'small-world' networks. Nature, 1998, 393(6684): 440-442.
[31] Barabási A L, Albert R. Emergence of scaling in random networks. Science, 1999, 286(5439): 509-512.
[32] Wolfram S. Cellular automata as models of complexity. Nature, 1984, 311(5985): 419-424.
[33] Aguilera V, Galán G, Egea G. A probabilistic extension to Conway's Game of Life. Advances in Computational Mathematics, 2019, 45(4): 2111-2121.
[34] Langton C G. Artificial life. In: 1991 Lectures in Complex Systems. Reading, MA: Addison-Wesley, 1992: 189-241.
[35] Wolfram S. Random sequence generation by cellular automata. Advances in Applied Mathematics, 1986, 7(2): 123-169.
[36] Teruya M, Nakao Z, Chen Y W. A boid-like example of adaptive complex systems. In: Proceedings of the IEEE International Conference on Systems. Hawai'i: IEEE, 1999: 1-5.
[37] Dorigo M, Birattari M, Stutzle T. Ant colony optimization. IEEE Computational Intelligence Magazine, 2006, 1(4): 28-39.
[38] Gilbert N, Troitzsch K G. Simulation for the Social Scientist, 2nd ed. Berkshire: Open University Press, 2005.
[39] Swarm main page [Online], available: http://www.swarm.org, April 3, 2021.
[40] Repast source [Online], available: https://repast.github.io/index.html, April 1, 2021.
[41] Parker M T. What is Ascape and why should you care. Journal of Artificial Societies and Social Simulation, 2001, 4(1): 5.
[42] Tisue S, Wilensky U. NetLogo: a simple environment for modeling complexity. In: Proceedings of the International Conference on Complex Systems. Shanghai, China, 2004, 21: 16-21.
[43] Builder C H, Bankes S C. Artificial Societies: A Concept for Basic Research on the Societal Impacts of Information Technology. Santa Monica, CA: RAND, 1991.
[44] Gilbert N, Conte R. Artificial Societies: The Computer Simulation of Social Life. London, England: University College London Press, 1995.
[45] Epstein J M, Axtell R. Growing Artificial Societies: Social Science from the Bottom Up. Washington, DC, USA: Brookings Institution Press, 1996.
[46] Arthur W B, Holland J, LeBaron B, Palmer R, Tayler P. Asset pricing under endogenous expectations in an artificial stock market. In: The Economy as an Evolving Complex System II, 1997: 15-44.
[47] Basu N, Pryor R, Quint T. ASPEN: a microsimulation model of the economy. Computational Economics, 1998, 12: 223-241.
[48] Grieves M W. Virtually intelligent product systems: digital and physical twins. In: Complex Systems Engineering: Theory and Practice, 2019.
[49] Tao F, Zhang H, Liu A, Nee A Y. Digital twin in industry: state-of-the-art. IEEE Transactions on Industrial Informatics, 2018, 15(4): 2405-2415.
[50] Farsi M, Daneshkhah A, Hosseinian F A, Jahankhani H. Digital Twin Technologies and Smart Cities. Springer, 2020.
[51] Wang F Y. Parallel system approach and management and control of complex systems. Control and Decision, 2004, 19(5): 485-489.
[52] Wang F Y. An investigation on social physics: methods and significance. Complex Systems and Complexity Science, 2005, 2(3): 13-22.
[53] Wang F Y. Toward a paradigm shift in social computing: the ACP approach. IEEE Intelligent Systems, 2007, 22(5): 65-67.
[54] Wang F Y. A framework for social signal processing and analysis: from social sensing networks to computational dialectical analytics. Scientia Sinica Informationis, 2013, 43(12): 1598-1611.
[55] Wang F Y, Li X, Mao W. An ACP-based approach to intelligence and security informatics. Studies in Computational Intelligence, 2015, 563(1): 69-86.
[56] Mei H. Understanding "software-defined" from an OS perspective: technical challenges and research issues. Science China Information Sciences, 2017, 60(12): 126101.
[57] Busoniu L, Babuska R, Schutter B. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, 2008, 38(2): 156-172.
[58] Xue X, Wang S F, Zhang L J, Feng Z Y, Guo Y D. Social learning evolution (SLE): computational experiment-based modeling framework of social manufacturing. IEEE Transactions on Industrial Informatics, 2019, 15(6): 3343-3355.
[59] Wilensky U. NetLogo Heatbugs model [Online], available: http://ccl.northwestern.edu/netlogo/models/Heatbugs, November 16, 2020.
[60] Schelling T C. Dynamic models of segregation. Journal of Mathematical Sociology, 1971, 1(2): 143-186.
[61] LeBaron B. Agent-based computational finance. Handbook of Computational Economics, 2006, 2: 1187-1233.
[62] Carley K M, Gasser L. Computational organization theory. In: Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, 1999: 299-330.
[63] Marathe M, Vullikanti A K S. Computational epidemiology. Communications of the ACM, 2013, 56(7): 88-96.
[64] Lazer D, Pentland A, Adamic L, et al. Computational social science. Science, 2009, 323(5915): 721-723.
[65] Conte R, Gilbert N, Bonelli G, Cioffi-Revilla C, Deffuant G, Kertesz J, Loreto V, Moat S, Nadal J P, Sanchez A, Nowak A. Manifesto of computational social science. The European Physical Journal Special Topics, 2012, 214(1): 325-346.
[66] Lazer D M J, Pentland A, Watts D J. Computational social science: obstacles and opportunities. Science, 2020, 369(6507): 1060-1062.
[67] Goodfellow I J, Pouget-Abadie J, Mirza M. Generative adversarial networks. arXiv preprint arXiv:1406.2661, 2014.
[68] Gilbert N, Conte R. Artificial Societies: The Computer Simulation of Social Life. London, England: University College London Press, 1995.
[69] Tu X, Terzopoulos D. Artificial fishes: physics, locomotion, perception, behavior. In: Proceedings of ACM SIGGRAPH, Annual Conference Series, 1994: 43-50.
[70] Börgers T, Sarin R. Learning through reinforcement and replicator dynamics. Journal of Economic Theory, 1997, 77(1): 1-14.
[71] Arthur W B. Designing economic agents that act like human agents: a behavioral approach to bounded rationality. The American Economic Review, 1991, 81(2): 353-359.
[72] Byrne R W, Russon A E. Learning by imitation: a hierarchical approach. Behavioral and Brain Sciences, 1998, 21(5): 667-684.
[73] Fudenberg D, Levine D. Learning in games. European Economic Review, 1998, 42(3-5): 631-639.
[74] Kaniovski Y M, Young H P. Learning dynamics in games with stochastic perturbations. Games and Economic Behavior, 1995, 11(2): 330-363.
[75] Jordan J S. Bayesian learning in repeated games. Games and Economic Behavior, 1995, 9(1): 8-20.
[76] Mniszewski S M, Del Valle S Y. EpiSimS: large-scale agent-based modeling of the spread of disease. Los Alamos National Laboratory (LANL), USA, 2013.
[77] Jain A K, Mao J, Mohiuddin K M. Artificial neural networks: a tutorial. Computer, 1996, 29(3): 31-44.
[78] Whitley D. A genetic algorithm tutorial. Statistics and Computing, 1994, 4(2): 65-85.
[79] Alavi M, Henderson J C. An evolutionary strategy for implementing a decision support system. Management Science, 1981, 27(11): 1309-1323.
[80] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of ICNN'95 - International Conference on Neural Networks. Perth, Australia, 1995, 4: 1942-1948.
[81] Karaboga D, Akay B. A comparative study of artificial bee colony algorithm. Applied Mathematics and Computation, 2009, 214(1): 108-132.
[82] Dodds P S, Watts D J. A generalized model of social and biological contagion. Journal of Theoretical Biology, 2005, 232(4): 587-604.
[83] Chen W, Yuan Y, Zhang L. Scalable influence maximization in social networks under the linear threshold model. In: Proceedings of the 2010 IEEE International Conference on Data Mining. IEEE, 2010: 88-97.
[84] Hoppitt W, Laland K N. Social Learning: An Introduction to Mechanisms, Methods, and Models. Princeton, NJ, USA: Princeton University Press, 2013.
[85] Bachelor G, Brusa E, Ferretto D, Mitschke A. Model-based design of complex aeronautical systems through digital twin and thread concepts. IEEE Systems Journal, 2020, 14(2): 1568-1579.
[86] Wang F Y, Zheng N N, Cao D. Parallel driving in CPSS: a unified approach for transport automation and vehicle intelligence. IEEE/CAA Journal of Automatica Sinica, 2017, 4(4): 577-587.
[87] Li L, Lin Y L, Zheng N N. Artificial intelligence test: a case study of intelligent vehicles. Artificial Intelligence Review, 2018, 50(3): 441-465.
[88] Montgomery D C. Design and Analysis of Experiments. Hoboken, USA: John Wiley & Sons, 2017.
[89] Lopez de Prado M. Machine Learning for Asset Managers. Cambridge, UK: Cambridge University Press, 2020.
[90] Newell A, Simon H A. Human Problem Solving. New Jersey: Prentice-Hall, 1972.
[91] Hamilton J. Time Series Analysis, 1st ed. New Jersey, USA: Princeton University Press, 1994.
[92] Kahn H, Wiener A J. The next thirty-three years: a framework for speculation. Daedalus, 1967: 705-732.
[93] Rao A S, Georgeff M P. BDI agents: from theory to practice. In: Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95), 1995: 312-319.
[94] Pearl J. Causal inference in statistics: an overview. Statistics Surveys, 2009, 3: 96-146.
[95] Yang M, Xiong Z. Model validation: methodological problems in agent-based modeling. Systems Engineering - Theory & Practice, 2013, 33(6): 1458-1470.
[96] Gao J, Zhang Y C, Zhou T. Computational socioeconomics. Physics Reports, 2019, 817: 1-104.
[97] Salganik M J. Bit by Bit: Social Research in the Digital Age. New Jersey, USA: Princeton University Press, 2017.
[98] Sundberg R. Multivariate calibration: direct and indirect regression methodology. Scandinavian Journal of Statistics, 1999, 26(2): 161-207.
[99] Werker C, Brenner T. Empirical calibration of simulation models. Papers on Economics and Evolution, 2004.
[100] Malerba F, Nelson R, Orsenigo L. 'History-friendly' models of industry evolution: the computer industry. Industrial and Corporate Change, 1999, 8(1): 3-40.
[101] Markisic S, Neumann M, Lotzmann U. Simulation of ethnic conflicts in former Yugoslavia. In: Proceedings of the International Conference on Energy, Chemical, Materials Science, 2012: 37-43.
[102] Bhavnani R, Miodownik D, Nart J. REsCape: an agent-based framework for modeling resources, ethnicity, and conflict. The Journal of Artificial Societies and Social Simulation, 2008, 11(2): 7.
[103] Axelrod R, Bennett D S. A landscape theory of aggregation. British Journal of Political Science, 1993, 23(2): 211-233.
[104] Johnson P E. Agent-based modeling: what I learned from the artificial stock market. Social Science Computer Review, 2002, 20(2): 174-186.
[105] Yeh C H, Yang C Y. Examining the effectiveness of price limits in an artificial stock market. Journal of Economic Dynamics and Control, 2010, 34(10): 2089-2108.
[106] Laver M, Sergenti E. Party Competition: An Agent-Based Model. Princeton, USA: Princeton University Press, 2012.
[107] Epstein J M. Modelling to contain pandemics. Nature, 2009, 460(7256): 687.
[108] Schneider K A, Ngwa G A, Schwehm M. The COVID-19 pandemic preparedness simulation tool: CovidSIM. BMC Infectious Diseases, 2020, 20(1): 1-11.
[109] Ghorbani A, Lorig F, Bruin B. The ASSOCC simulation model: a response to the community call for the COVID-19 pandemic. [Online], available: https://rofasss.org/2020/04/25/the-assocc-simulation-model, April 1, 2021.
[110] Pescarmona G, Terna P, Acquadro A, Pescarmona P, Russo G, Terna S. How can ABM models become part of the policy-making process in times of emergencies? The S.I.S.A.R. epidemic model. Review of Artificial Societies and Social Simulation, 20 Oct 2020. [Online], available: https://rofasss.org/2020/10/20/sisar/.
[111] Zheng S, Trott A, Srinivasa S. The AI economist: improving equality and productivity with AI-driven tax policies. arXiv preprint arXiv:2004.13332, 2020.
[112] Diamond P, Saez E. The case for a progressive tax: from basic research to policy recommendations. Journal of Economic Perspectives, 2011, 25(4): 165-190.
[113] Shi J C, Yu Y, Da Q. Virtual-Taobao: virtualizing real-world online retail environment for reinforcement learning. In: Proceedings of the AAAI Conference on Artificial Intelligence. Hawaii, USA, 2019, 33(01): 4902-4909.
[114] Edsall R, Larson K L. Decision making in a virtual environment: effectiveness of a semi-immersive "Decision Theater" in understanding and assessing human-environment interactions. In: Proceedings of AutoCarto 2006: 19-22.
[115] Public Sector Digest Research Staff. More than slightly ahead of the curve. Public Sector Digest, 2006, 10: 36-41.

Xiao Xue, born in 1979. Professor at the School of Computer Software, College of Intelligence and Computing, Tianjin University, and adjunct professor at the School of Computer Science and Technology, Henan Polytechnic University. His main research interests include service computing, computational experiments, and the Internet of Things.

Xiangning Yu, born in 2000. Undergraduate student at the College of Intelligence and Computing, Tianjin University. Her current research interests include service computing and computational experiments.

Deyu Zhou, born in 1995. PhD student in the School of Software, Shandong University, Jinan, China. Her current research interests include service computing and computational experiments.

Xiao Wang, Associate Professor in the State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, and executive president of the Qingdao Academy of Intelligent Industries. Her research interests include social network analysis, social transportation, cybermovement organizations, and multi-agent modeling.

Zhangbing Zhou, born in 1975. Professor at China University of Geosciences, Beijing, China, and adjunct professor at TELECOM SudParis, Evry, France. He has authored over 100 refereed papers. His research interests include process-aware information systems and sensor network middleware.

Fei-Yue Wang, State Specially Appointed Expert and Director of the State Key Laboratory for Management and Control of Complex Systems. His current research focuses on methods and applications for parallel systems, social computing, parallel intelligence, and knowledge automation.