DESS-Jerry Banks
System Simulation, Third Edition
Ch.1 Introduction to Simulation
Ch.2 Simulation Examples
Ch.3 General Principles
Ch.4 Simulation Software
Ch. 1 Introduction to Simulation
• Simulation enables the study of, and experimentation with, the internal
interactions of a complex system, or of a subsystem within a complex system.
• Informational, organizational, and environmental changes can be simulated, and
the effect of these alterations on the model’s behavior can be observed.
• The knowledge gained in designing a simulation model may be of great value
toward suggesting improvement in the system under investigation.
• By changing simulation inputs and observing the resulting outputs, valuable
insight may be obtained into which variables are most important and how
variables interact.
• Simulation can be used as a pedagogical device to reinforce analytic solution
methodologies.
1.1 When Simulation is the Appropriate Tool (2)
• Advantages
• New policies, operating procedures, decision rules, information flows, organizational
procedures, and so on can be explored without disrupting ongoing operations of the
real system.
• New hardware designs, physical layouts, transportation systems, and so on, can be
tested without committing resources for their acquisition.
• Hypotheses about how or why certain phenomena occur can be tested for feasibility.
• Insight can be obtained about the interaction of variables.
• Insight can be obtained about the importance of variables to the performance of the
system.
• Bottleneck analysis can be performed indicating where work-in-process, information,
materials, and so on are being excessively delayed.
• A simulation study can help in understanding how the system operates rather than
how individuals think the system operates.
• “What-if” questions can be answered. This is particularly useful in the design of new
systems.
1.3 Advantages and Disadvantages of Simulation (2)
• Disadvantages
• Model building requires special training. It is an art that is learned over time and
through experience. Furthermore, if two models are constructed by two competent
individuals, they may have similarities, but it is highly unlikely that they will be the
same.
• Simulation results may be difficult to interpret. Since most simulation outputs are
essentially random variables (they are usually based on random inputs), it may be hard
to determine whether an observation is a result of system interrelationships or
randomness.
• Simulation modeling and analysis can be time consuming and expensive. Skimping on
resources for modeling and analysis may result in a simulation model or analysis that is
not sufficient for the task.
• Simulation is used in some cases when an analytical solution is possible, or even
preferable, as discussed in Section 1.2. This might be particularly true in the simulation
of some waiting lines where closed-form queueing models are available.
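The last point can be made concrete: where a closed-form model exists, a simulation can (and should) be cross-checked against it. Below is a minimal sketch, not from the text and with illustrative parameter values, comparing the M/M/1 closed-form mean time in system, 1/(mu - lam), with a brute-force estimate via Lindley's recurrence W(k+1) = max(0, W(k) + S(k) - A(k+1)).

```python
# Sketch: closed-form M/M/1 result vs. a simulation estimate.
# Assumes Poisson arrivals (rate lam) and exponential service (rate mu);
# the parameter values are illustrative.
import random

lam, mu = 0.8, 1.0                      # arrival rate < service rate
W_analytic = 1.0 / (mu - lam)           # closed-form mean time in system

random.seed(1)
n = 200_000
wait = 0.0                              # waiting time in queue (Lindley)
total = 0.0
prev_service = random.expovariate(mu)
for _ in range(n):
    a = random.expovariate(lam)         # interarrival time to next customer
    wait = max(0.0, wait + prev_service - a)
    s = random.expovariate(mu)
    total += wait + s                   # time in system = wait + service
    prev_service = s

W_sim = total / n                       # should be close to W_analytic
```

With these rates the closed form gives 1/(1.0 - 0.8) = 5.0 minutes of mean time in system; a run of the sketch lands near that value, which is exactly the kind of check the bullet above recommends before reaching for simulation.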
1.4 Areas of Application (1)
• System
• defined as a group of objects that are joined together in some regular
interaction or interdependence toward the accomplishment of some
purpose.
• System Environment
• changes occurring outside the system.
• The decision on the boundary between the system and its environment
may depend on the purpose of the study.
1.6 Components of a System (1)
• Model
• a representation of a system for the purpose of studying the system
• a simplification of the system
• sufficiently detailed to permit valid conclusions to be drawn about the real
system
1.9 Types of Models
• Problem formulation
• The policy maker and analyst must understand and agree on the formulation.
• Setting of objectives and overall project plan
• Model conceptualization
• The art of modeling is enhanced by an ability to abstract the essential
features of a problem, to select and modify basic assumptions that
characterize the system, and then to enrich and elaborate the model until a
useful approximation results.
• Data collection
• As the complexity of the model changes, the required data elements may
also change.
• Model translation
• GPSS/H or special-purpose simulation software
1.11 Steps in a Simulation Study (2)
• Verified?
• Is the computer program performing properly?
• Debugging for correct input parameters and logical structure
• Validated?
• The determination that a model is an accurate representation of the real
system.
• Validation is achieved through the calibration of the model
• Experimental design
• The decision on the length of the initialization period, the length of
simulation runs, and the number of replications to be made of each run.
• Production runs and analysis
• To estimate measures of performance
1.11 Steps in a Simulation Study (3)
• More runs?
• Documentation and reporting
• Program documentation : for the relationships between input parameters
and output measures of performance, and for a modification
• Progress documentation : the history of a simulation, a chronology of work
done and decisions made.
• Implementation
1.11 Steps in a Simulation Study (4)
Repetitions    Inputs                            Response
               Xi1   Xi2   …   Xij   …   Xip     yi
1
·
·
n
2.1 Simulation of Queueing Systems (1)
Fig. 2.1 Queueing System (calling population → waiting line → server)
• A queueing system is described by its calling population, the nature of
the arrivals, the service mechanism, the system capacity, and the
queueing discipline.
2.1 Simulation of Queueing Systems (2)
• Arrivals and services are defined by the distribution of the time between
arrivals and the distribution of service times, respectively.
• In some systems, the condition that the arrival rate be less than the service rate
may not guarantee stability.
2.1 Simulation of Queueing Systems (4)
• System state : the number of units in the system and the status of the
server(busy or idle).
• In a single-channel queueing system there are only two possible events that
can affect the state of the system.
• the arrival event : the entry of a unit into the system
• the departure event : the completion of service on a unit.
• The interarrival times and service times must be meshed to simulate the single-
channel queueing system.
• Table 2.4 was designed specifically for a single-channel queue which serves
customers on a first-in, first-out (FIFO) basis.
2.1 Simulation of Queueing Systems (12)
• Figure 2.6 depicts the number of customers in the system at the various
clock times.
2.1 Simulation of Queueing Systems (14)
(Figure: a single checkout counter with customer arrivals and departures.)
Assumptions
• Only one checkout counter.
• Customers arrive at this checkout counter at random from 1
to 8 minutes apart. Each possible value of interarrival time
has the same probability of occurrence, as shown in Table 2.6.
• The service times vary from 1 to 6 minutes with the
probabilities shown in Table 2.7.
• The problem is to analyze the system by simulating the arrival
and service of 20 customers.
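The assumptions above can be sketched as a short simulation. Interarrival times are uniform on 1 to 8 minutes as stated; the service-time probabilities used below are illustrative stand-ins in the spirit of Table 2.7, which is not reproduced here, so check them against the table before relying on the numbers.

```python
# Sketch of the 20-customer checkout-counter simulation.
# The service-time distribution below is an assumed example, not Table 2.7.
import random

random.seed(42)
service_vals = [1, 2, 3, 4, 5, 6]
service_probs = [0.10, 0.20, 0.30, 0.25, 0.10, 0.05]   # illustrative

clock_arrival = 0
time_service_ends = 0
total_wait = 0
busy_time = 0
for i in range(20):
    inter = 0 if i == 0 else random.randint(1, 8)   # first arrival at time 0
    clock_arrival += inter
    service = random.choices(service_vals, weights=service_probs)[0]
    start = max(clock_arrival, time_service_ends)   # wait if server is busy
    total_wait += start - clock_arrival
    busy_time += service
    time_service_ends = start + service

avg_wait = total_wait / 20
utilization = busy_time / time_service_ends
```

The two summary statistics mirror the columns of the simulation table: average waiting time per customer and the fraction of elapsed time the server was busy.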
2.1 Simulation of Queueing Systems (15)
2.1 Simulation of Queueing Systems (16)
The expected service time E(S) is slightly lower than the average service
time in the simulation. The longer the simulation, the closer the
average service time will be to E(S).
2.1 Simulation of Queueing Systems (22)
A drive-in restaurant where carhops take orders and bring food to the
car.
Assumptions
• Cars arrive in the manner shown in Table 2.11.
• Two carhops Able and Baker - Able is better able to do the job and
works a bit faster than Baker.
• The distribution of their service times is shown in Tables 2.12 and
2.13.
2.1 Simulation of Queueing Systems (25)
• After the first customer, the cells for the other customers must be based on logic and
formulas. For example, the “Clock Time of Arrival” (column D) in the row for the
second customer is computed as follows:
D2 = D1 + C2
• The logic to compute who gets a given customer can use the Excel macro function IF(),
which returns one of two values depending on whether a condition is true or false.
IF( condition, value if true, value if false)
(Figure: flowchart of the simulation logic, summarized below.)
clock = 0; repeat:
• If a service is completed, store the clock time (column H or K).
• If it is the time of an arrival:
  - If Able is idle, generate a random digit for service (column E), convert it to a
    service time (column G), and begin Able's service (column F).
  - Otherwise, if Baker is idle, generate a random digit for service (column E),
    convert it to a service time (column J), and begin Baker's service (column I).
  - Otherwise (both busy), do nothing; the customer waits.
• Increment the clock.
2.1 Simulation of Queueing Systems (27)
F10 = IF(D10 >= MAX(H$1:H9), D10,
         IF(D10 >= MAX(K$1:K9), "",
            MIN(MAX(H$1:H9), MAX(K$1:K9))))
A similar formula applies to cell I10 for “Time Service Begins” for
Baker.
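The spreadsheet rule can be rendered in ordinary code. This sketch assumes the comparisons are >= (the operators were lost in extraction) and follows the column roles in the surrounding text: D is the arrival time, H holds Able's completion times, K holds Baker's.

```python
# Python rendering of the "Time Service Begins" rule for Able.
# Assumption: the spreadsheet comparisons are ">=".
def able_service_begins(arrival, able_free, baker_free):
    """Return Able's start time, or None if Baker takes the customer.

    arrival    -- clock time of the customer's arrival (column D)
    able_free  -- max of Able's prior completion times (MAX(H$1:H9))
    baker_free -- max of Baker's prior completion times (MAX(K$1:K9))
    """
    if arrival >= able_free:       # Able idle: service starts on arrival
        return arrival
    if arrival >= baker_free:      # Able busy, Baker idle: Baker serves
        return None
    # Both busy: service starts when the first server frees up.
    return min(able_free, baker_free)
```

For example, able_service_begins(10, 8, 12) returns 10 (Able is already free), while able_service_begins(10, 12, 8) returns None because the idle Baker takes the customer.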
2.1 Simulation of Queueing Systems (28)
• Notice that in the second cycle, the amount in inventory drops below zero,
indicating a shortage.
• Two ways to avoid shortages
• Carrying stock in inventory
: cost - the interest paid on the funds borrowed to buy the items, renting of storage
space, hiring guards, and so on.
• Making more frequent reviews, and consequently, more frequent purchases or
replenishments
: the ordering cost
• The total cost of an inventory system is the measure of performance.
• The decision maker can control the maximum inventory level, M, and the length of the
cycle, N.
• In an (M,N) inventory system, the events that may occur are: the demand for items in
the inventory, the review of the inventory position, and the receipt of an order at the
end of each review period.
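The (M, N) event logic described above can be sketched as a small simulation. The order-up-to level M = 11, review period N = 5 days, one-day lead time, and the demand distribution below are illustrative assumptions for the sketch, not values copied from the tables.

```python
# Sketch of an (M, N) periodic-review inventory simulation:
# every N days the inventory position is reviewed and an order is placed
# to bring it back up to M. Demand distribution is an assumed example.
import random

random.seed(7)
M, N = 11, 5            # order-up-to level and review period (days)
inventory = M
pending_order = 0       # order placed at review, arrives after 1-day lead time
ending_levels = []

for day in range(25):   # five cycles of N days
    if pending_order:
        inventory += pending_order     # order receipt (lead time = 1 day)
        pending_order = 0
    demand = random.randint(0, 4)      # illustrative daily demand
    inventory -= demand                # may go negative: shortage/backorder
    ending_levels.append(inventory)
    if (day + 1) % N == 0:             # review point: order up to M
        pending_order = M - inventory

avg_ending = sum(ending_levels) / len(ending_levels)
```

The three event types named in the bullet all appear: the daily demand draw, the review at the end of each cycle, and the receipt of the order at the start of the following day.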
2.2 Simulation of Inventory Systems (3)
Beginning inventory of the third day = ending inventory of the second
day + the new order (in the first cycle).
The lead time for this order was 1 day.
Notice that the beginning inventory on the second day of the third
cycle was zero. An order for 2 units on that day led to a shortage
condition. The units were backordered on that day and the next
day also. On the morning of day 4 of cycle 3 there was a beginning
inventory of 9 units. The 4 units that were backordered and the 1
unit demanded that day reduced the ending inventory to 4 units.
Based on five cycles of simulation, the average ending inventory is
approximately 3.5 (88/25) units. On 2 of 25 days a shortage
condition existed.
2.3 Other Examples of Simulation (1)
• Example 2.5 A Reliability Problem
(Figure: a milling machine and its repairperson.)
• A classic simulation problem is that of a squadron of bombers
attempting to destroy an ammunition depot shaped as shown in
Figure 2.8.
2.3 Other Examples of Simulation (7)
X = μ + σZ

where X is a normal random variable, μ is the true mean of the
distribution of X, and σ is the standard deviation of X.
In this example the aiming point can be considered as (0, 0); that
is, the μ value in the horizontal direction is 0, and similarly for the μ
value in the vertical direction. Then

X = μ_X + σ_X·Z_i and Y = μ_Y + σ_Y·Z_j

where (X, Y) are the simulated coordinates of the bomb after it has
fallen. With σ_X = 600 and σ_Y = 300,

X = 600·Z_i and Y = 300·Z_j
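The sampling step can be sketched directly; random.gauss supplies the standard normal deviates Z, and the aiming point and standard deviations follow the equations above.

```python
# Sketch: generating a bomb's impact point from standard normal deviates,
# with aiming point (0, 0), sigma_X = 600, sigma_Y = 300.
import random

random.seed(3)

def impact_point():
    z1 = random.gauss(0, 1)   # random normal number for X
    z2 = random.gauss(0, 1)   # random normal number for Y
    return 600 * z1, 300 * z2

points = [impact_point() for _ in range(10_000)]
mean_x = sum(x for x, _ in points) / len(points)
mean_y = sum(y for _, y in points) / len(points)
```

Over many replications the sample means of X and Y drift toward the aiming point (0, 0), as the model requires.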
2.3 Other Examples of Simulation (9)
• The next chapter gives a more systematic presentation of the basic concepts. A
more systematic methodology, such as the event-scheduling approach described
in Chapter 3, is needed.
• Ad hoc simulation tables were used in completing each example. Events in the
tables were generated using uniformly distributed random numbers and, in one
case, random normal numbers.
• The examples illustrate the need for determining the characteristics of the input
data, generating random variables from the input models, and analyzing the
resulting response.
Ch. 3 General Principles
• Discrete-event simulation
• Discrete-event models are appropriate for those systems for which changes in
system state occur only at discrete points in time.
• This chapter deals exclusively with dynamic, stochastic systems (i.e., involving
time and containing random elements) which change in a discrete manner.
3.1 Concepts in Discrete-Event Simulation (1)
(Figure: at CLOCK time 100, an event notice for an end-of-inspection
event is created with event time = 105, the simulated instant at which
the inspection activity will be complete.)
• To keep track of activities and their expected completion time, at the simulated
instant that an activity duration begins, an event notice is created having an
event time equal to the activity's completion time.
3.1 Concepts in Discrete-Event Simulation (5)
• A delay's duration
• Not specified by the modeler ahead of time, but rather determined by system
conditions.
• Quite often, a delay's duration is measured and is one of the desired outputs of a
model run.
Delay vs. Activity:
• What it is called : a delay is a conditional wait; an activity is an unconditional wait.
• Its completion is : for a delay, a secondary event; for an activity, a primary event.
• How it is managed : a delay, by placing the associated entity on another list (not
  the FEL), perhaps representing a waiting line; an activity, by placing an event
  notice on the FEL.
System state, entity attributes and the number of active entities, the
contents of sets, and the activities and delays currently in progress
are all functions of time and are constantly changing over time.
Time itself is represented by a variable called CLOCK.
3.1 Concepts in Discrete-Event Simulation (7)
• A discrete-event simulation
: the modeling over time of a system all of whose state changes occur
at discrete points in time, those points when an event occurs.
• The mechanism for advancing simulation time and guaranteeing that all events
occur in correct chronological order is based on the future event list (FEL).
• Future Event List (FEL)
• to contain all event notices for events that have been scheduled to occur at a future
time.
• to be ordered by event time, meaning that the events are arranged chronologically;
that is, the event times satisfy
t ≤ t1 ≤ t2 ≤ … ≤ tn
where t is the current value of simulated time (CLOCK) and t1 is the
event time of the imminent event.
• Scheduling a future event means that at the instant an activity begins, its
duration is computed or drawn as a sample from a statistical distribution and the
end-activity event, together with its event time, is placed on the future event
list.
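The FEL mechanism can be sketched with a binary heap, which keeps the imminent event (smallest event time) at the front so removal always advances the clock in chronological order. The event types and times below are illustrative.

```python
# Minimal future event list (FEL) backed by a binary heap.
import heapq
import itertools

fel = []
seq = itertools.count()   # tie-breaker so equal times pop in insertion order

def schedule(event_time, event_type):
    """Place an event notice (time, type) on the FEL."""
    heapq.heappush(fel, (event_time, next(seq), event_type))

clock = 0
schedule(4, "arrival")
schedule(6, "departure")
schedule(5, "arrival")

order = []
while fel:
    clock, _, etype = heapq.heappop(fel)   # remove imminent event, advance clock
    order.append((clock, etype))

print(order)   # events come off in chronological order
```

The heap keeps both FEL operations cheap: scheduling a new notice and removing the imminent event each cost O(log n), compared with the linear search a flat list would need.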
3.1.1. The Event-Scheduling/Time-Advanced Algorithm (2)
• the addition of a new event to the list, and occasionally removal of some event (called
cancellation of an event)
: Addition of a new event (and cancellation of an old event) requires a
search of the list.
• The efficiency of this search depends on the logical organization of the list and
on how the search is conducted.
• The removal and addition of events from the FEL is illustrated in Figure 3.2.
3.1.1. The Event-Scheduling/Time-Advanced Algorithm (3)
• The system snapshot at time 0 is defined by the initial conditions and the
generation of the so-called exogenous events.
• Every simulation must have a stopping event, here called E, which defines
how long the simulation will run.
• World views
: the event-scheduling world view, the process-interaction world view, and the
activity-scanning world view.
• Phase A : Remove the imminent event from the FEL and advance the clock
to its event time. Remove any other events from the FEL that
have the same event time.
• Phase B : Execute all B-type events that were removed from the FEL.
• Phase C : Scan the conditions that trigger each C-type activity and
activate any whose conditions are met. Rescan until no
additional C-type activities can begin or events occur.
3.1.2. World Views (5)
Activity                 Condition
Service time by Able     A customer is in queue and Able is idle
Service time by Baker    A customer is in queue, Baker is idle, and Able is busy
• Using the process-interaction approach, we view the model from the viewpoint of a
customer and its “life cycle.” Considering a life cycle beginning upon arrival, a
customer process is pictured in Figure 3.4
3.1.3. Manual Simulation Using Event Scheduling (1)
• Initial conditions
• the system snapshot at time zero (CLOCK = 0)
• LQ(0) = 0, LS(0) = 1
• both a departure event and arrival event on the FEL.
• The simulation is scheduled to stop at time 60.
• Server utilization : total server busy time (B) / total time (T_E).
• a* : the generated interarrival time
• s* : the generated service times
• The simulation in Table 3.1 covers the time interval [0, 21].
3.1.3. Manual Simulation Using Event Scheduling (4)
3.1.3. Manual Simulation Using Event Scheduling (5)
(Figure: dump trucks pass through a loader queue (first-come, first-served)
to the loaders, then through a weighing queue (first-come, first-served)
to the scale.)
The distributions of loading time, weighing time, and travel time
are given in Tables 3.3, 3.4, and 3.5, respectively, from Table A.1.
The purpose of the simulation is to estimate the loader and scale
utilizations (percentage of time busy).
3.1.3. Manual Simulation Using Event Scheduling (9)
(Figure: records stored in an array at memory addresses 100 through 110,
shown before and after adding record 6; a head pointer marks the record
at the top of the list.)
• a variable called a head pointer, with name headptr, points to the record at the top
of the list.
3.2.2 Using Arrays for List Processing (3)
• Example 3.7 (A List for the Dump Trucks at the Weigh Queue)
• In Example 3.5, suppose that a waiting line of three dump trucks occurred at
the weigh queue, at CLOCK time 10 in Table 3.6.
• In our example, we will use a notation for records identical to that in the
previous section (3.2.2):
Entities: [ ID, attributes, next pointer ]
Event notices: [ event type, event time, other data, next pointer ]
• If for some reason we wanted the third item on the list, we would have
to traverse the list, counting items until we reached the third record.
• Unlike arrays, there is no way to retrieve directly the ith record in a linked
list, as the actual records may be stored at any arbitrary location in
computer memory and are not stored contiguously as are arrays.
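A minimal sketch of such a linked list and the O(i) traversal it forces; the record layout follows the [ID, attributes, next pointer] notation above, while the truck IDs and attribute values are illustrative.

```python
# Singly linked list of records [ID, attributes, next pointer], and the
# walk needed to reach the i-th record (no direct indexing, unlike arrays).
class Record:
    def __init__(self, rid, attrs):
        self.rid = rid
        self.attrs = attrs
        self.next = None          # next pointer (None = end of list)

def nth(headptr, i):
    """Return the i-th record (1-based) by following next pointers."""
    node = headptr
    for _ in range(i - 1):
        node = node.next
    return node

# Three dump trucks waiting in a queue (IDs and attributes illustrative).
head = Record("DT3", {"arrived": 10})
head.next = Record("DT2", {"arrived": 12})
head.next.next = Record("DT4", {"arrived": 14})

print(nth(head, 3).rid)   # → DT4, after traversing the whole list
```

Each record could live anywhere in memory; only the next pointers tie the list together, which is exactly why the i-th record cannot be fetched without counting from the head.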
3.2.3 Using Dynamic Allocation and Linked Lists (3)
• Example 3.8 (The Future Event List and the Dump Truck Problem)
• Based on Table 3.6, event notices in the dump truck problem of Example 3.5
are expanded to include a pointer to the next event notice on the future
event list and can be represented by:
[ event type, event time, DT i , nextptr ]
• as, for example,
[ EL, 10, DT 3, nextptr ]
• where EL is the end loading event to occur at future time 10 for dump truck
DT 3, and the field nextptr points to the next record on the FEL.
• Figure 3.9 represents the future event list at CLOCK time 10 taken from Table
3.6.
3.2.3 Using Dynamic Allocation and Linked Lists (4)
• Historical period
• 1955 – 60 The Period of Search
• 1961 – 65 The Advent
• 1966 – 70 The Formative Period
• 1971 – 78 The Expansion Period
• 1979 – 86 The Period of Consolidation and Regeneration
• 1987 –      The Period of Integrated Environments
4.1 History of Simulation Software
• Workload Characterization
Ex. (1.3) The number of packets lost on two links was measured for four
file sizes as shown in Table 1.1. Which link is better?
Ex. (1.4) The performance of a system depends on the following three factors
(a) Garbage collection technique used: G1, G2, or none.
(b) Type of workload: editing, computing, or artificial intelligence (AI).
(c) Type of CPU: C1, C2, or C3
How many experiments are needed? How does one estimate the
performance impact of each factor?
1.1 Outline of Topics (6)
• Example 1.7
• The throughputs of two systems A and B were measured in
transactions per second.
• The results are shown in Table 1.2
System    Workload 1    Workload 2
A         20            10
B         10            20
• ACM SIGMETRICS
: for researchers engaged in developing methodologies and users
seeking new or improved techniques for the analysis of computer
systems
• ACM SIGSIM
: Special Interest Group on SIMulation – Simulation Digest
• CMG
: Computer Measurement Group, Inc. – CMG Transactions
1.3 Professional Organizations,
Journals, and Conferences (2)
• SIAM
: SIAM Review, SIAM Journal on Control &Optimization, SIAM Journal
on Numerical Analysis, SIAM Journal on Computing, SIAM Journal
on Scientific and Statistical Computing, and Theory of Probability &
Its Applications
1.3 Professional Organizations,
Journals, and Conferences (3)
• ORSA
: Operations Research, ORSA Journal on Computing, Mathematics
of Operations Research, Operations Research Letters, and
Stochastic Models
• No goals
• Any endeavor without goals is bound to fail.
• Each model must be developed with a particular goal in mind.
• The metrics, workloads, and methodology all depend upon the goal.
2.1 Common Mistakes in
Performance Evaluation (2)
• Biased Goals
• The problem of stating the goals becomes that of finding the right
metrics and workloads for comparing the systems, not that of
finding the metrics and workloads such that our system turns
out better.
2.1 Common Mistakes in
Performance Evaluation (4)
2.1 Common Mistakes in
Performance Evaluation (5)
• Unrepresentative Workload
• The workload used to compare two systems should be
representative of the actual usage of the systems in the field.
• The choice of the workload has a significant impact on the
results of a performance study.
2.1 Common Mistakes in
Performance Evaluation (8)
• No Analysis
• One of the common problems with measurement projects is
that they are often run by performance analysts who are good
in measurement techniques but lack data analysis expertise.
• They collect enormous amounts of data but do not know how to
analyze or interpret it.
2.1 Common Mistakes in
Performance Evaluation (13)
• Erroneous Analysis
• There are a number of mistakes analysts commonly make in
measurement, simulation, and analytical modeling, for example,
taking the average of ratios and too short simulations.
2.1 Common Mistakes in
Performance Evaluation (14)
• No Sensitivity Analysis
• Often analysts put too much emphasis on the results of their
analysis, presenting it as fact rather than evidence.
• Without a sensitivity analysis, one cannot be sure if the
conclusions would change if the analysis was done in a slightly
different setting.
• Without a sensitivity analysis, it is difficult to assess the relative
importance of various parameters.
2.1 Common Mistakes in
Performance Evaluation (15)
(Figure: a 512-octet packet moving from a transmit buffer to a receive buffer.)
2.1 Common Mistakes in
Performance Evaluation (16)
• Ignoring Variability
• It is common to analyze only the mean performance since
determining variability is often difficult, if not impossible.
• If the variability is high, the mean alone may be misleading to
the decision makers.
(Figure: a weekly load-demand curve with mean = 80; the mean alone is
not useful to the decision maker.)
2.1 Common Mistakes in
Performance Evaluation (20)
2.1 Common Mistakes in
Performance Evaluation (21)
(Figure: example systems and their services: a timesharing system that
receives service requests and sends packets in response, a dual-CPU
system that answers queries, and a system with a different ALU that
performs a number of different instructions.)
2.2 A Systematic Approach to
Performance Evaluation (3)
• Select Metrics
• Select criteria to compare the performance.
• Choose the metrics (criteria).
• In general, the metrics are related to the speed, accuracy, and
availability of services.
• The performance of a network
: the speed (throughput, delay), accuracy (error rate), and
availability of the packets sent.
• The performance of a processor
: the speed of (time taken to execute) various instructions
2.2 A Systematic Approach to
Performance Evaluation (4)
• List Parameters
• Make a list of all the parameters that affect performance.
• The list can be divided into system parameters and workload
parameters.
• System parameters
: Hardware/Software parameters
: These generally do not vary among various installations of the
system.
• Workload parameters
: Characteristics of user’s requests
: These vary from one installation to the next.
2.2 A Systematic Approach to
Performance Evaluation (5)
• Select Workload
• The workload consists of a list of service requests to the system.
• For analytical modeling, the workload is usually expressed as a
probability of various requests.
• For simulation, one could use a trace of requests measured on
a real system.
• For measurement, the workload may consist of user scripts to
be executed on the systems.
• To produce representative workloads, one needs to measure
and characterize the workload on existing systems.
2.2 A Systematic Approach to
Performance Evaluation (8)
• Design Experiments
• Once you have a list of factors and their levels, you need to
decide on a sequence of experiments that offer maximum
information with minimal effort.
• In the first phase, the number of factors may be large but the
number of levels is small. The goal is to determine the relative
effect of various factors.
• In the second phase, the number of factors is reduced and the
number of levels of those factors that have significant impact
is increased.
2.2 A Systematic Approach to
Performance Evaluation (9)
• Present Results
• It is important that the results be presented in a manner that is
easily understood.
• This usually requires presenting the results in graphic form and
without statistical jargon.
• The knowledge gained by the study may require the analysts to
go back and reconsider some of the decisions made in the
previous steps.
• The complete project consists of several cycles through the
steps rather than a single sequential pass.
Case Study 2.1 (1)
• System Definition
• Goal : to compare the performance of applications using
remote pipes to those of similar applications using
remote procedure calls.
• Key component : Channel (either a procedure or a pipe)
• System
(Figure: the system under study: a client and a server connected by a network.)
Case Study 2.1 (3)
• Services
• Two types of channel calls
: remote procedure call and remote pipe
• The resources used by the channel calls depend upon the
number of parameters passed and the action required on those
parameters.
• Data transfer is chosen as the application and the calls will be
classified simply as small or large depending upon the amount
of data to be transferred to the remote machine.
• The system offers only two services
: small data transfer or large data transfer
Case Study 2.1 (4)
• Metrics
• Due to resource limitations, the errors and failures will not be
studied. Thus, the study will be limited to correct operation only.
• Resources : local computer(client), the remote computer(server),
and the network link
• Performance Metrics
- Elapsed time per call
- Maximum call rate per unit of time or equivalently, the time
required to complete a block of n successive calls
- Local CPU time per call
- Remote CPU time per call
- Number of bytes sent on the link per call
Case Study 2.1 (5)
• Parameters
• System Parameter
• Speed of the local CPU, the remote CPU, and the network
• Operating system overhead for interfacing with the channels
• Operating system overhead for interfacing with the networks
• Reliability of the network affecting the number of retransmissions required
• Workload Parameters
• Time between successive calls
• Number and sizes of the call parameters
• Number and sizes of the results
• Type of channel
• Other loads on the local and remote CPUs
• Other loads on the network
Case Study 2.1 (6)
• Factors
• Type of channel
: Two type – remote pipes and remote procedure calls
• Speed of the network
: Two locations of the remote hosts will be used – short distance (on the
campus) and long distance (across the country)
• Sizes of the call parameters to be transferred
: Two levels will be used – small and large
• Number n of consecutive calls
: Eleven different values of n – 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024
• All other parameters will be fixed.
• The retransmissions due to network errors will be ignored.
• Experiments will be conducted when there is very little other load on
the hosts and the network.
Case Study 2.1 (7)
• Evaluation Technique
• Since prototypes of both types of channels have already been
implemented, measurements will be used for evaluation.
• Analytical modeling will be used to justify the consistency of
measured values for different parameters.
• Workload
• A synthetic program generating the specified types of channel
requests
• This program will also monitor the resources consumed and log
the measured results (using Null channel requests).
Case Study 2.1 (8)
• Experimental Design
• A full factorial experimental design with 2×2×2×11 = 88 experiments will be used
for the initial study.
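The full factorial design can be enumerated mechanically; itertools.product generates every combination of the four factors (the level labels below are paraphrased from the factor list, not exact names from the study).

```python
# Enumerating the full factorial design: 2 channel types x 2 network
# distances x 2 parameter sizes x 11 values of n = 88 experiments.
import itertools

channels = ["pipe", "rpc"]             # type of channel
distances = ["short", "long"]          # network distance of remote host
sizes = ["small", "large"]             # size of the call parameters
n_values = [2**k for k in range(11)]   # n = 1, 2, 4, ..., 1024

experiments = list(itertools.product(channels, distances, sizes, n_values))
print(len(experiments))   # → 88
```

Each tuple in the list is one experimental run, which makes it easy to script the measurements and to feed the results into the analysis-of-variance step that follows.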
• Data Analysis
• Analysis of variance will be used to quantify the effects of the first three
factors and regression will be used to quantify the effects of the number n of
successive calls.
• Data Presentation
• The final results will be plotted as a function of the block size n.
Chapter. 3 Selection of Techniques and Metrics
3.1 Selecting an Evaluation Technique (1)
(Table: criteria for comparing analytical modeling, simulation, and measurement.)
• Life-cycle stage
• Measurement : only if something similar to the proposed
system already exists
• Analytical modeling and Simulation : if it is a new concept
• Level of accuracy
• Analytical modeling requires so many simplifications and
assumptions that if the results turn out to be accurate, even the
analysts are surprised.
• Simulations can incorporate more details and require fewer
assumptions than analytical modeling, and thus are more often
closer to reality.
• Measurements may not give accurate results simply because
many of the environmental parameters, such as system
configuration, type of workload, and time of the measurement,
may be unique to the experiment. Thus, the accuracy of results
can vary from very high to none.
3.1 Selecting an Evaluation Technique (4)
• Trade-off evaluation
• The goal of every performance study is either to compare
different alternatives or to find the optimal parameter value.
• Analytical models provide the best insight into the effects of
various parameters and their interactions.
• With simulations, it may be possible to search the space of
parameter values for the optimal combination, but often it is
not clear what the trade-off is among different parameters.
• Measurement is the least desirable technique in this respect. It
is not easy to tell if the improved performance is a result of
some random change in environment or due to the particular
parameter setting.
3.1 Selecting an Evaluation Technique (5)
• Cost
• Measurement requires real equipment, instruments, and time. It
is the most costly of the three techniques.
• Cost, along with the ease of being able to change configurations, is often the
reason for developing simulations for expensive systems.
• Analytical modeling requires only paper and pencils. Thus, it is the cheapest
alternative.
• Saleability of results
• The key justification when considering the expense and the
labor of measurements
• Most people are skeptical of analytical results simply because
they do not understand the technique or the final result.
3.1 Selecting an Evaluation Technique (6)
(Figure: possible outcomes of a service request and the associated metrics.
• Done correctly : time (response time), rate (throughput), resource (utilization)
• Done incorrectly : error j (probability of errors, time between errors)
• Cannot do : event k (duration of the event, time between events))
3.2 Selecting Performance Metrics (2)
(Figure: end systems connected through intermediate systems.)
• The problem of congestion occurs when the number of packets
waiting at an intermediate system exceeds the system’s buffering
capacity and some of the packets have to be dropped.
Case Study 3.1 (2)
• Time-rate-resource metrics
• Response time: the delay inside the network for individual packets.
• Throughput: the number of packets per unit of time.
• Processor time per packet on the source end system.
• Processor time per packet on the destination end systems.
• Processor time per packet on the intermediate systems.
Case Study 3.1 (3)
f(x1, x2, …, xn) = (Σᵢ₌₁ⁿ xᵢ)² / (n · Σᵢ₌₁ⁿ xᵢ²)
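The formula above is the fairness index f = (Σxᵢ)² / (n·Σxᵢ²); as a function it is a direct transcription, shown here with two worked values.

```python
# Fairness index: 1 when all allocations are equal, 1/n when one user
# gets everything.
def fairness(xs):
    n = len(xs)
    s = sum(xs)
    return s * s / (n * sum(x * x for x in xs))

print(fairness([10, 10, 10, 10]))   # → 1.0  (perfectly fair)
print(fairness([40, 0, 0, 0]))      # → 0.25 (= 1/n, maximally unfair)
```

Because the index is bounded between 1/n and 1 regardless of scale, it lets allocations of very different magnitudes be compared on one axis.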
(Figure: two definitions of response time on a timeline that also shows
reaction time and think time. Definition 1 measures from the user’s
request to the beginning of the system’s response; Definition 2
measures to the end of the response.)
(Figure: throughput and response time plotted against load, showing the
knee capacity, usable capacity, and nominal capacity.)
3.3 Commonly Used Performance Metrics (4)
• Idle time : the period during which a resource is not being used.
• Reliability : measured by the probability of errors or by the mean time
between errors.
• Availability : the fraction of the time the system is available to
service users' requests.
• Downtime : the time during which the system is not available.
• Uptime : the time during which the system is available; the mean uptime
is the MTTF (Mean Time To Failure).
• Cost/performance ratio : a metric for comparing two or more
systems.
3.4 Utility Classification of Performance Metrics
[Figure: utility as a function of the metric for the three metric classes — "lower is better" (utility falls as the metric increases), "higher is better" (utility rises with the metric), and "nominal is best" (utility peaks at an intermediate best value)]
3.5 Setting Performance Requirements (1)
(a) The probability of any bit being in error must be less than 10^-7.
(b) The probability of any frame being in error (with error indication
set) must be less than 1%.
(c) The probability of a frame in error being delivered without error
indication must be less than 10^-15.
(d) The probability of a frame being misdelivered due to an
undetected error in the destination address must be less than
10^-18.
(e) The probability of a frame being delivered more than once
(duplicate) must be less than 10^-5.
(f) The probability of losing a frame on the LAN (due to all sorts of
errors) must be less than 1%.
Case Study 3.2 (3)
• Availability : Two fault modes were considered significant. The first was
the time lost due to network reinitializations, and the
second was the time lost due to permanent failures requiring
field service calls. The requirements for the frequency and
duration of these fault modes were specified as follows:
(a) The mean time to initialize the LAN must be less than 15
milliseconds.
(b) The mean time between LAN initializations must be at least 1
minute.
(c) The mean time to repair a LAN must be less than 1 hour. (LAN
partitions may be operational during this period.)
(d) The mean time between LAN partitioning must be at least half a
week.
정지영
Contents
1. Overview
2. Multiprocessor Systems
3. Approximate Analytical Model
4. Simulation Models
5. Analytical and Simulation Model Results
6. Extending the Multiprocessor System Model
1. Overview
• Developing an analytical model of the system
[Figure: elements of a multiprocessor system — memory modules 1..M connected through buses 1..B to processors 1..N]
2. Multiprocessor Systems
• Handling a memory request
3. For a successful request, if the request is a store, the address and data are
transferred over the bus to the memory; if the request is a fetch, the data are
returned over the bus at the end of that cycle.
• Objective : to determine how much processor performance is affected by
contention for the memory modules and buses.
3. Approximate Analytical Model
[Figure: the activity cycle of one processor — an execution period x, a blocked-request period b, and one cycle for the granted request]
• Single-processor request rate
• r = (b+1)/T = (b+1)/(x+b+1)
• Dividing numerator and denominator by b+1:
• r = 1/[1+x/(b+1)]
• b+1 = rT : the total number of requests issued during T
• Request completion rate per processor : BW/N
• T = N/BW
• b+1 = Nr/BW
• r = 1/[1+xBW/(Nr)]
3. Approximate Analytical Model
• C algorithm for e = 0.005

real BW(p, B, M, N)
real p; int B, M, N;
{
    real bw0, bw1=p*N, r=p, x=1.0/p-1.0, BWi();
    do
    {
        bw0 = bw1; r = 1.0/(1.0+x*bw0/(N*r));
        bw1 = BWi(r, B, M, N);
    }
    while (fabs(bw1-bw0) > 0.005);
    return(bw1);
}
3. 근사 분석 모델
real BWi(r, B, M, N)
real r; int B, M, N;
{ /* compute bandwidth for request rate r */
    int i; real q, bw=0.0, f();
q=1.0-pow(1.0-r/M, (real)N);
for(i=1; i<B; i++) bw += i*f(i,M,q);
for(i=B; i<=M; i++) bw += B*f(i,M,q);
return (bw);
}
real Fact(n)
int n;
{ /* compute n factorial */
real z=1.0;
while (n) {z*=n; n--;}
return (z);
}
3. 근사 분석 모델
real C(n, k)
int n, k;
{ /* compute binomial coefficient */
    return (Fact(n)/(Fact(k)*Fact(n-k)));
}
real f(i,M,q)
int i, M; real q;
{ /* compute binomial probability */
real z;
z=C(M,i)*pow(q,(real)i)*pow(1.0-q,(real)(M-i));
return(z);
}
3.1 Utilization and Waiting Time
• b = (N/BW) - (1/p)
3.2 Mean Queue Length
• Lb = b·BW/N
• Lb = 1 - BW/(Np)
3.3 System Throughput

#include <smpl.h>
#define busy 1
real
    p=0.250,         /* local memory miss rate             */
    treq[17],        /* next request time for processor    */
    tn=1.0E6;        /* earliest-occurring request time    */
int
    N=8, M=4, nB=2,  /* no. processors, memories, & buses  */
    module[17], bus, /* memory & bus facility descriptors  */
    nbs=0,           /* no. busy buses current cycle       */
    req[17],         /* currently-requested memory module  */
    next=1;          /* arbitration scan starting point    */
4.1 Simulation Model 1
/*----------- MEMORY-BUS BANDWIDTH MODEL -----------*/
main()
{
    int event, i, n;
    smpl(0, "bandwidth Model");
    for (i=1; i<=M; i++) module[i]=facility("module",1);
    for (n=1; n<=N; n++) {req[n]=0; next_access(n);}
    schedule(1, tn, 0);
    while (time() < 10000.0)
    {
        cause(&event, &n);
        switch (event) {
            case 1: begin_cycle(); break;
            case 2: req_module(n); break;
            case 3: end_cycle(n); break;
        }
    }
    printf("BW=%.3f\n", U(bus));
}
4.1 Simulation Model 1
• To see how this affects performance, we examine a model of a system in which
module and bus requests are queued on a first-in, first-out basis.
4.2 Simulation Model 2
• When a processor issues a memory request, the request is queued for the selected
memory module, seizes it, is then queued for a bus, and seizes the bus.
[Figure: queueing-model diagram of the multiprocessor system — each of the N processors queues a request at one of the M memory queues; a request seizes ("reserve") the memory, then a bus, and frees both ("release") when the transfer completes]
4.2 Simulation Model 2
• The advantage of this model is that the mean delay per request can be separated into memory delay and bus delay.
#include <smpl.h>
#define queued 1
real p=0.250;       /* local memory miss rate              */
int N=8, M=4, nB=2, /* no. processors, memories, & buses   */
    module[17],     /* facility descriptors for modules    */
    bus,            /* facility descriptor for the buses   */
    req[17];        /* currently-requested memory module   */
4.3 Simulation Model 3
main()
{
    int event, i, n; real x=1.0/p-1.0;
    smpl(0, "Bandwidth Model");
    bus=facility("bus", nB);
    for (i=1; i<=M; i++) module[i]=facility("module",1);
    for (n=1; n<=N; n++) {
        req[n]=random(1,M); schedule(1, expntl(x), n);
    }
4.3 Simulation Model 3
    while (time()<10000.0) {
        cause(&event, &n);
        switch (event) {
            case 1: /* reserve memory module */
                if (request(module[req[n]], n, 0)!=queued)
                    then schedule(2, 0.0, n); break;
            case 2: /* reserve bus & initiate transfer */
                if (request(bus, n, 0)!=queued)
                    then schedule(3, 1.0, n); break;
            case 3: /* complete: schedule next request */
                release(bus, n);
                release(module[req[n]], n);
                req[n]=random(1, M);
                schedule(1, expntl(x), n); break;
        }
    } /* end-while */
    report();
} /* end-main */
5. Analytical and Simulation Model Results
• Nonuniform request rates
• One may want to model systems made up of different processor types with
different request rates, such as a mix of central and I/O processors.
• Variable transfer lengths
• We assumed that one unit is transferred per request; this can be extended
so that several units may be transferred per request.
1.1 Variables
• Formed by combining letters, digits, and periods. There is no distinction between upper- and lowercase.
example :
read x and y
add x to y
print 1 line with y thus
The sum is : ***
1. Introduction
1.8 Program Termination
• The stop statement ends a program logically;
• the end statement ends it physically.
1.10 Routines
• CALL routine name : invokes a routine.
• RETURN : terminates a called routine.
• argument passing
- routine <name> given <argument> yielding <argument>
• A routine can also be expressed as a function:
define it in the preamble with "DEFINE name AS mode function", and
return its value inside the function with "RETURN WITH arithmetic
expression".
example : function Absolute(Number)
...
return with Number
end
1. Introduction
Model Structure
A simulation model must have the following components:
1) a mechanism for representing the arrival of new objects
2) a representation of what happens to those objects within the modeled system
3) a mechanism for terminating the simulation
Process Concept
A process is an entity that behaves actively within the model while the simulation runs.
2. Elementary modeling concepts
Resource Concept
• A resource is a passive entity that performs the work requested by processes in the model.
Program Structure
1) Preamble : analogous to a header file in C.
2) Main program : the steps that set the simulation in motion; execution
proceeds by handing control of the system to the Timing Routine.
3) Process routine : a routine describing the behavior of a process declared in the preamble.
Timing routine
• The heart of discrete-event simulation; transparent to the model developer.
Example: A Simple Gas Station Model
[Model overview]
• Assumptions used in the simulation
PREAMBLE
PROCESSES INCLUDE GENERATOR AND CUSTOMER
RESOURCES INCLUDE ATTENDANT
ACCUMULATE AVG.QUEUE.LENGTH AS THE AVERAGE
MAIN
CREATE EVERY ATTENDANT(1)
LET U.ATTENDANT(1) = 2
ACTIVATE A GENERATOR NOW
START SIMULATION
PRINT 4 LINES WITH AVG.QUEUE.LENGTH(1),
MAX.QUEUE.LENGTH(1),
AND UTILIZATION(1) * 100. / 2 THUS
PROCESS GENERATOR
FOR I = 1 TO 1000,
DO
ACTIVATE A CUSTOMER NOW
WAIT UNIFORM.F(2.0,8.0,1) MINUTES
LOOP
END
PROCESS CUSTOMER
REQUEST 1 ATTENDANT(1)
WORK UNIFORM.F(5.0,15.0,2) MINUTES
RELINQUISH 1 ATTENDANT(1)
END
3. Modeling Individual Objects
3.1 Attribute Concept
• Processes and resources can be given attributes.
• Resources
• Every Pump has a Grade
• Create Every Pump(3)
[Table: each of the three Pump instances (1, 2, 3) carries the system attributes U.Pump, N.X.Pump, and N.Q.Pump plus the user-defined attribute Grade]
3. Modeling Individual Objects
3.2 Variables
• Variables can be global or local (the default). Global variables are defined in the Preamble.
• Every variable has a mode (integer, real, alpha, text).
• The background mode is real; it is changed with the statement:
• NORMALLY, MODE IS mode
IF Statement
IF STATUS = BUSY
ADD 1 TO BACK.LOG
ALWAYS
LOOPING
FOR EACH resource
is equivalent to
FOR resource = 1 TO N.resource
FOR EACH resource CALLED name
is equivalent to
FOR name = 1 TO N.resource
PREAMBLE
DEFINE .seconds TO MEAN days
DEFINE .milliseconds TO MEAN hours
DEFINE .microseconds TO MEAN minutes
END
MAIN
LET HOURS.V = 1000
LET MINUTES.V = 1000
END
Example: A Bank with a Separate Queue for Each Teller
• In a typical bank, a customer arrives, is served right away by an available teller,
and leaves. If no teller is available, however, the customer joins the shortest line.
• Let us simulate such a bank. The performance measures are the average and maximum
queue lengths, the utilization of each teller, and the average waiting time over all
customers.
• The parameters used in this simulation are entered directly by the model designer:
• the number of tellers (Teller), the customer interarrival time (λ : arrivals follow
an exponential distribution), and the bank's hours of operation
Example: A Bank with a Separate Queue for Each Teller
PREAMBLE
PROCESSES INCLUDE GENERATOR AND CUSTOMER
RESOURCES INCLUDE TELLER
MAIN
READ N.TELLER, MEAN.INTERARRIVAL.TIME, MEAN.SERVICE.TIME,
AND DAY.LENGTH
CREATE EVERY TELLER
FOR EACH TELLER,
LET U.TELLER(TELLER) = 1
PROCESS GENERATOR
DEFINE ARRIVAL.TIME AS A REAL VARIABLE
LET TIME.TO.CLOSE = DAY.LENGTH / HOURS.V
PROCESS CUSTOMER
DEFINE ARRIVAL.TIME AS A REAL VARIABLE
DEFINE MY.CHOICE AS AN INTEGER VARIABLE
LET ARRIVAL.TIME = TIME.V
FOR EACH TELLER, WITH N.X.TELLER(TELLER) = 0,
FIND THE FIRST CASE
IF FOUND,
LET MY.CHOICE = TELLER
ELSE
FOR EACH TELLER,
COMPUTE MY.CHOICE AS THE MINIMUM(TELLER)
OF N.Q.TELLER(TELLER)
ALWAYS
REQUEST 1 TELLER(MY.CHOICE)
LET WAITING.TIME = TIME.V - ARRIVAL.TIME
WORK EXPONENTIAL.F(MEAN.SERVICE.TIME,2) MINUTES
RELINQUISH 1 TELLER(MY.CHOICE)
END
Example: A Bank with a Separate Queue for Each Teller
[Example OUTPUT]
SIMULATION OF A BANK WITH 2 TELLERS
(EACH WITH A SEPARATE QUEUE)
CUSTOMERS ARRIVE ACCORDING TO AN EXPONENTIAL DISTRIBUTION
OF INTER ARRIVAL TIMES WITH A MEAN OF 5.00 MINUTES.
SERVICE TIME IS ALSO EXPONENTIALLY DISTRIBUTED
WITH A MEAN OF 10.00 MINUTES.
THE BANK DOORS ARE CLOSED AFTER 8.00 HOURS.
(BUT ALL CUSTOMERS INSIDE ARE SERVED.)
THE LAST CUSTOMER LEFT THE BANK AT *.** HOURS.
THE AVERAGE CUSTOMER DELAY WAS *.** MINUTES.