Decision Making Techniques PPT at MBA Operations MGMT
1. Decision Analysis
There are two common elements to every decision problem:
A choice has to be made. There must be a criterion by which to evaluate the outcome of the choice.
A pattern of decision/ chance event/ decision/ chance event is the single most important characteristic of problems that are potentially solvable by the technique of decision analysis.
Problem Characteristics
The decision maker seeks a technique that matches the problem's special characteristics. The two principal characteristics of a problem for which decision analysis will be an effective technique are:
The problem comprises a series of sequential decisions spread over time. These decisions are interwoven with chance elements, with decisions taken in stages as new information emerges. The decisions within the sequence are not quantitatively complex.
Decision Tree
The logical sequence of decisions and chance events are represented diagrammatically in a decision tree, with decision nodes represented by a square ( ) and chance nodes by a circle (O), connected by branches representing the logical sequence between nodes. At the end of each branch is a payoff. Although the payoff can be non-monetary, Expected Monetary Value (EMV) is the usual criterion for decision making in decision analysis.
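The rollback logic behind EMV can be sketched in a few lines of Python. The tree, payoffs and probabilities below are hypothetical; the point is only that chance nodes take probability-weighted averages while decision nodes take the best branch:

```python
# Minimal EMV rollback on a hypothetical two-option decision tree.
# Chance nodes average their branches by probability; decision nodes
# take the branch with the highest EMV.

def emv(node):
    """Recursively roll back a tree of nested dicts to an EMV."""
    if isinstance(node, (int, float)):          # payoff at a leaf
        return node
    if node["type"] == "chance":                # probability-weighted average
        return sum(p * emv(child) for p, child in node["branches"])
    if node["type"] == "decision":              # best available choice
        return max(emv(child) for _, child in node["branches"])

tree = {
    "type": "decision",
    "branches": [
        ("launch", {"type": "chance",
                    "branches": [(0.6, 100_000), (0.4, -30_000)]}),
        ("abandon", 0),
    ],
}

print(emv(tree))  # 0.6*100000 + 0.4*(-30000) = 48000.0
```

Here the "launch" branch rolls back to an EMV of 48,000, which beats "abandon" at 0, so the rollback recommends launching.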
Stage 5: Summarize the Optimal Path and Draw up the Risk Profile.
Risk Profile
A risk profile is a table summarizing the possible outcomes on the optimal path, along with their payoffs and probabilities.
This is not the same thing as listing all outcomes, since the act of choosing the optimal path will have eliminated some possibilities. A decision maker may favor a course with a lower EMV but a more attractive risk profile. The profile will reveal whether the EMV incorporates a high-impact, low-probability (HILP) event. Where differences in EMV are small, a report might contain profiles for several alternatives.
Proceeding to Decision
Recommendations should be tested against other criteria:
Is the decision tree an accurate model of the decision faced? How much faith can be put in the probability estimates? Is EMV an appropriate criterion for the situation? How robust (sensitive to small variations in assumptions) is the optimal decision?
Bracket Medians
1. Decide how many representative values are required.
Rule of thumb: at least five but not more than ten. If calculations are computerized, a large number of values can be used for intricate or sensitive distributions.
2. Divide the probability distribution into that number of ranges of equal probability.
Bracket Medians, 2
3. Select one representative value from each of the ranges by taking the median of the range. This value is the bracket median.
4. Let the bracket median stand for all of the values in the range and assign to it the probability of all the values in the range.
In a normal distribution, the mean is the bracket median for the entire range.
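A minimal sketch of the procedure, using the standard library's `NormalDist` and a normal distribution split into five equal-probability ranges (five follows the rule of thumb above). Note the middle value comes out as the mean, matching the remark above:

```python
from statistics import NormalDist

def bracket_medians(dist, n):
    """Split a distribution into n equal-probability ranges and return
    the median of each range (the bracket medians)."""
    # The i-th range covers cumulative probability [i/n, (i+1)/n];
    # its median sits at cumulative probability (i + 0.5) / n.
    return [dist.inv_cdf((i + 0.5) / n) for i in range(n)]

medians = bracket_medians(NormalDist(mu=0, sigma=1), 5)
# Each bracket median stands for its whole range with probability 1/5 = 0.2.
print([round(m, 3) for m in medians])  # symmetric about the mean, 0
```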
Revising Probability
Bayes' theorem is used whenever probabilities are revised in light of some new information (e.g., a sample).
It combines experience (prior probabilities) with sample information (numbers) to derive posterior probabilities.
P(A and B) = P(A) x P(B|A)
P(A and B) = P(B) x P(A|B)
A diagrammatic approach
A Venn diagram (in this case, a square) can be used to represent prior and posterior probabilities:
Divide the square vertically to represent prior probabilities. Thus, rectangles are created whose areas are proportional to their prior probabilities. Divide the rectangles horizontally to represent the sample accuracy probabilities. These smaller areas are proportional to the conditional probabilities. Areas inside the square can be combined or divided to represent the posterior probabilities. A conditional probability is given not by the area representing the event, but by this area as a proportion of the area that represents the condition.
Bayesian Revision
Results are conventionally summarized in a table, showing how the calculations are built up. The posterior probability is calculated by dividing the term P(V) x P(T|V) in the fifth column by P(T). The column headings are:

Test outcome (T) | Variable (V) | Prior probability of variable P(V) | Test accuracy P(T|V) | P(V) x P(T|V) | Posterior probability P(V|T)
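The same build-up can be sketched in Python. The prior and test-accuracy figures are hypothetical; the structure mirrors the tabular summary: prior times likelihood, summed to give P(T), then normalized to the posterior:

```python
# Bayesian revision: prior x likelihood, summed to P(T), then normalized.
# All probability figures here are hypothetical.

priors      = {"V": 0.01, "not V": 0.99}          # P(V)
likelihoods = {"V": 0.95, "not V": 0.10}          # P(T|V), test accuracy

joint = {v: priors[v] * likelihoods[v] for v in priors}   # P(V) x P(T|V)
p_t = sum(joint.values())                                 # P(T)
posterior = {v: joint[v] / p_t for v in priors}           # P(V|T)

print(round(p_t, 4), round(posterior["V"], 4))
```

Even with a 95%-accurate test, the low prior keeps the posterior probability of V under 10% here, which is exactly the kind of result the tabular layout makes easy to audit.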
Utility
The EMV criterion can mask large payoffs by averaging them out. A utility function replaces monetary payoffs with utilities, so that the same expected-value rollback can still be applied.
A utility is a measurement that reflects the (subjective?) value of the monetary outcome to the decision maker. These are not necessarily scaled proportionately. If a $2 million loss meant bankruptcy where a $1 million loss meant difficulty, the disutility of the former may be more than twice the latter. The rollback then deals with utility rather than EMV.
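A sketch of the idea, assuming an exponential utility function (a common but here arbitrary choice) with risk tolerance R. Two gambles with identical EMVs are ranked differently once utilities replace money:

```python
import math

# Assumed utility shape: exponential, with risk tolerance R. Larger losses
# get disproportionately large disutility, as in the bankruptcy example above.
R = 1.0  # risk tolerance, in $ millions (hypothetical)

def utility(x):
    return 1 - math.exp(-x / R)

def expected(f, gamble):
    """Probability-weighted average of f over (probability, payoff) pairs."""
    return sum(p * f(x) for p, x in gamble)

safe  = [(1.0, 0.5)]                 # certain $0.5m
risky = [(0.5, 2.0), (0.5, -1.0)]    # EMV also $0.5m

print(expected(lambda x: x, safe), expected(lambda x: x, risky))  # 0.5 0.5
print(expected(utility, safe) > expected(utility, risky))         # True
```

Under EMV the two gambles tie; under the utility rollback the certain outcome wins, because the possible $1m loss carries more disutility than the $2m gain carries utility.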
3. Linear Programming
Linear programming (LP) is a technique for solving certain types of decision problems. It is applied to those that require the optimization of some criterion (e.g., maximizing profit or minimizing cost), but where the actions that can be taken are restricted (i.e., constrained). Its formulation requires three factors: decision variables, a linear objective function and linear constraints.
Linear means that the decision variables appear only to the first power: no products of variables, and no logarithmic or exponential functions.
Application
The Simplex method (developed by Dantzig), or variations of it, is used to solve even problems with thousands of decision variables and thousands of constraints. Such problems are typically solved by computer applications, although simple problems might be resolved algebraically (simultaneous equations) or graphically.
The Solution
The optimal point (if there is one) will be one of the corners of the feasible region: the northeast (maximization) or southwest (minimization), unless:
1. The objective line is parallel to a constraint, in which case the two corner points and all of the points on the line between them are optimal.
2. The problem is infeasible (there are no solutions that satisfy all constraints).
3. The solution is unbounded (usually an error?) and there is an infinite number of solutions.
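For a small two-variable problem, the corner-point property can be checked directly by enumerating every intersection of constraint boundaries and keeping the feasible ones. The problem data below is hypothetical: maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0.

```python
from itertools import combinations

# Constraints as (a, b, c) meaning a*x + b*y <= c; the last two encode
# the non-negativity constraints x >= 0 and y >= 0.
cons = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection point of two constraint boundary lines, or None."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:                      # parallel lines
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in cons)

corners = [p for c1, c2 in combinations(cons, 2)
           if (p := intersect(c1, c2)) and feasible(p)]
best = max(corners, key=lambda p: 3 * p[0] + 5 * p[1])
print(best, 3 * best[0] + 5 * best[1])   # (2.0, 6.0) 36.0
```

This is the graphical method in code form: the optimum lands on the corner where 2y <= 12 and 3x + 2y <= 18 intersect.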
Dual Values
Each constraint has a dual value that is a measure of the increase in the objective function if one extra unit of the constrained resource were available, everything else being unchanged.
Dual values are sometimes called shadow prices. They refer only to constraints, not to variables.
These measure the true worth of the resources to the decision maker and there is no reason why they should be the same as the price paid for the resource or its market value.
Dual Values, 2
When the slack is non-zero, then the dual value is zero.
If this were not so, then incremental use of the resource could have been made, with an increment in the objective function.
The dual value also works in the other direction: indicating a reduction in the objective function for a one unit reduction in the constrained resource.
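The definition can be illustrated by direct perturbation on a hypothetical two-variable LP (maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0): relax one right-hand side by a unit, re-solve, and compare objectives. Note that the constraint with slack at the optimum gets a dual value of zero, as stated above.

```python
from itertools import combinations

def solve(rhs):
    """Maximize 3x + 5y over the corner points of the feasible region
    defined by x <= rhs[0], 2y <= rhs[1], 3x + 2y <= rhs[2], x, y >= 0."""
    cons = [(1, 0, rhs[0]), (0, 2, rhs[1]), (3, 2, rhs[2]),
            (-1, 0, 0), (0, -1, 0)]
    best = 0.0                                 # the origin is always feasible
    for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:                   # parallel boundary lines
            continue
        x = (r1 * b2 - r2 * b1) / det
        y = (a1 * r2 - a2 * r1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            best = max(best, 3 * x + 5 * y)
    return best

base = solve([4, 12, 18])
for i in range(3):
    rhs = [4, 12, 18]
    rhs[i] += 1                                # one extra unit of resource i
    print(f"dual value of constraint {i + 1}: {solve(rhs) - base:g}")
```

At the optimum x = 2, so the constraint x <= 4 has slack and its dual value is zero; the two binding constraints have positive dual values (1.5 and 1 here).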
Reduced Costs
Dual values can also be found for the non-negativity constraints, but they are then called reduced costs.
If the optimal value of a variable is not zero, then its non-negativity constraint is slack, and the dual value (reduced cost) is zero. If the optimal value is zero, then the constraint is tight and will have a non-zero reduced cost (a loss).
Dual values are marginal values and may not hold over large ranges. Most computer packages give the range for which the dual value holds. This is known as the right-hand side range.
Coefficient Ranges
A coefficient range shows by how much an objective function coefficient can change before the optimal values of the decision variables change.
Changing a coefficient value is the same as varying the slope of the objective line. If it changes by a small amount, the slope will not be sufficiently different to move the optimum away from the current corner point. The same small changes will, however, change the value of the optimal objective function. These ranges apply to the variation of one coefficient at a time, the others being kept at their original values.
Transportation Problem
Some LP problems have special characteristics that allow them to be solved with a simpler algorithm. The classic transportation problem is an allocation of production to destinations with an objective of minimizing costs.
The special structure is that all of the variables in all of the constraints have the coefficient 0 or 1. (This is what the book says, but I don't understand what it means.)
An assignment problem is like the transportation problem, with the additional feature that the coefficients are all 1. (Likewise.)
Other Extensions
LP assumes that the decision variables are continuous, that is, they can take on any values subject to the constraints, including decimals and fractions. When the solution requires integer values (or blocks of integer values), a more complex algorithm is required. Quadratic programming can contend with objective functions that have squared terms, as may arise with variable costs, economies of scale or quantity discounts. Goal programming, an example of a method with multiple objectives, will minimize the distance between feasibility and an objective.
Handling Uncertainty
The coefficients and constants used in problem formulation are fixed, even if they have been derived from estimates with various degrees of certainty (i.e., the model is deterministic).
Parametric programming is a systematic form of sensitivity analysis. A coefficient is varied continuously over a range and the effect on the optimal solution displayed. Stochastic programming deals specifically with probability information. Chance-constrained programming deals with constraints that do not have to be met all of the time, but only a certain percentage of the time.
5. Simulation
Simulation means imitating the operation of a system. Although a model may take (scaled-down) physical form, it more often consists of a set of mathematical equations. The purpose of a simulation is to test the effect of different decisions and different assumptions.
These can be tested quickly and without the expense or danger of carrying out the decision in reality.
Types of Simulations
Deterministic means that the inputs and assumptions are fixed and known: none is subject to probability distributions. It is used because of the number and complexity of the relationships involved. Corporate planning models are an example of this type. Stochastic means that some of the inputs, assumptions and variables are subject to probability distributions. The Monte Carlo technique describes a simulation run many times, with the input values of particular variables drawn according to their probabilities in a distribution. The output will often form a distribution, thus defining a range and probability of outcomes.
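A minimal Monte Carlo sketch, with assumed, illustrative figures: a single stochastic input (demand) is sampled repeatedly, and the resulting profits form a distribution rather than a single number:

```python
import random
import statistics

# Hypothetical model: profit = (price - unit cost) * demand - fixed cost,
# where demand is the stochastic input.
random.seed(42)   # reproducible run

def one_trial():
    demand = random.gauss(mu=1000, sigma=200)        # stochastic input
    return (12.0 - 7.0) * demand - 3000.0            # profit for this trial

results = [one_trial() for _ in range(10_000)]
print(f"mean profit {statistics.mean(results):.0f}, "
      f"P(loss) {sum(r < 0 for r in results) / len(results):.3f}")
```

The mean lands near the deterministic answer (5 x 1000 - 3000 = 2000), but the run also estimates the probability of a loss, which a single deterministic calculation cannot give.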
Flowchart
A flowchart is an intermediate stage that helps transform a verbal description of a problem into a simulation.
Risk Analysis
When a stochastic simulation uses the Monte Carlo technique to model cash flows, it is called a risk analysis. This differs from other stochastic simulations only in terms of its application to financial problems. A best practice is to plan a series of trial policies structured so that the range of options can be narrowed down until a nearly optimal one can be obtained.
Technique Basics
1. A project is broken down into all of its individual activities.
Some must be completed before others can start; some can take place at the same time as others. A common mistake is to confuse the customary sequence of activities with their logical sequence.
2. The logical sequence of activities is represented graphically, where the lines are activities and circles (called nodes) mark the start and completion of the activities.
Technique Basics, 2
3. (cont.) The node at the beginning of a task should be the end node of all activities that are immediate predecessors. The node at the end should be the beginning node of all activities that are immediate successors. A dummy activity, represented by a dotted line, overcomes the requirement that no two activities can share the same beginning and end nodes, and avoids depicting a false dependency of one task upon another.
4. The critical tasks are studied with a view to deploying resources to speed completion or minimize the impacts of over-runs.
Activity Duration
The expected duration of an activity may be based on experience or on subjective estimation. Where there is some uncertainty, a sensitivity analysis can be performed. Alternatively, one can obtain optimistic, most likely and pessimistic durations, then
Expected duration = (Optimistic + (4 x Most Likely) + Pessimistic) / 6
Variance (for activity) = ((Pessimistic - Optimistic) / 6)^2
Variance (for project) = sum of the variances of activities on the critical path
Standard deviation = square root of the project variance
(With 95% confidence, expected duration = mean ± 2 std dev)
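Applying these formulas to a hypothetical critical path of three activities, each given as (optimistic, most likely, pessimistic) durations in weeks:

```python
import math

# Hypothetical critical path, each activity as (optimistic, most likely,
# pessimistic) durations in weeks.
path = [(2, 4, 9), (3, 5, 7), (4, 6, 14)]

expected = [(o + 4 * m + p) / 6 for o, m, p in path]
variances = [((p - o) / 6) ** 2 for o, m, p in path]

project_mean = sum(expected)
project_sd = math.sqrt(sum(variances))      # variances add; std devs do not

lo, hi = project_mean - 2 * project_sd, project_mean + 2 * project_sd
print(f"expected {project_mean:.2f} weeks, "
      f"95% interval ({lo:.2f}, {hi:.2f})")
```

Note that the activity variances are summed before taking the square root: standard deviations themselves are not additive.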
Methodological Assumptions
1. Activity times follow a beta distribution: a unimodal, non-symmetrical distribution with a shape commonly found in the activities of large projects.
2. Activities are independent. This allows the application of the central limit theorem, which predicts that, as the number of variables in a sum increases, the distribution of the sum becomes increasingly normal. In fact, the activities are probably not independent:
The cause of delay in one activity may be the same cause of delay in other activities. The critical path can change. Reductions in durations may make other activities critical.
Forward Pass
A forward pass means going through the network activity by activity to calculate for each an earliest start and earliest finish, using the following formulae:
Earliest finish = Earliest start + Activity duration
Earliest start = Earliest finish (of previous activity)
If there are several previous activities, we use the latest of their earliest finish times.
The notation appears, for example, as (6, 10, 16), indicating that the activity's earliest start is at week 6, its duration is 10 weeks, and its earliest finish is at week 16.
Backward Pass
A backward pass means going through the network activity by activity, starting with the last and finishing with the first, to calculate for each a latest finish and latest start.
Latest finish = Latest start (of following activity)
Latest start = Latest finish - Activity duration
If there are several following activities, we use the earliest of their latest start times.
The notation for backward pass is similar to that for forward pass, indicating Latest start, Float (discussed next) and Latest finish.
Calculating Float
An activitys float is the amount of time that it can be delayed without delaying completion of the entire project.
It can be calculated as the difference between its earliest and latest start times, or as the difference between its earliest and latest finish times (the same thing).
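The forward pass, backward pass and float calculation can be sketched together on a small hypothetical network (activity-on-node form for simplicity, rather than the activity-on-line convention above):

```python
# Forward pass, backward pass and float on a small hypothetical network.
# Activities: name -> (duration in weeks, list of immediate predecessors).
acts = {"A": (3, []), "B": (5, []), "C": (4, ["A"]),
        "D": (2, ["B", "C"]), "E": (6, ["D"])}

es, ef = {}, {}
for name in acts:                                  # forward pass (insertion
    dur, preds = acts[name]                        # order is topological here)
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur

finish = max(ef.values())                          # project duration

lf, ls = {}, {}
for name in reversed(list(acts)):                  # backward pass
    dur, _ = acts[name]
    succs = [s for s in acts if name in acts[s][1]]
    lf[name] = min((ls[s] for s in succs), default=finish)
    ls[name] = lf[name] - dur

floats = {n: ls[n] - es[n] for n in acts}          # = lf[n] - ef[n]
critical = [n for n in acts if floats[n] == 0]
print(finish, floats, critical)
```

Here only B has float (2 weeks), so the critical path is A-C-D-E: delaying any of those activities delays the whole 15-week project.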
1. List all activities in the project.
2. Identify the predecessors of each activity.
3. Draw the network diagram.
4. Estimate activity durations as average times or, if uncertainty is to be dealt with explicitly, as optimistic/most likely/pessimistic times.
5. Produce an activity schedule by making a forward and then a backward pass through the network.
6. Determine the critical path by calculating activity floats.
7. Calculate activity variances, the variance of the whole project and finally confidence limits for the duration of the project.
8. Draw cost/time curves for each activity.
9. Crash the network and construct a cost/time graph for the project.
Activities-Predecessors-Draw-Duration-Passes-Path-Variance-Costs-Crashes = APDDPPVCC
History of IT in Business
1960s: successful mainframe applications for accounting, operations (inventory, customer accounts) and technical operations. MIS was little more than a data dump. 1970s: recognition of the MIS failure, with various attempts made to convert data to information that managers at different levels could use. 1980s: introduction of the micro-computer puts computing power in the hands of the end user. 1990s and beyond: downloading for local analysis, internal linkages (internal resource and information sharing) and external linkages (internet).
Benefits
Communications: speed and penetration of internal and external communications; interfaces for data transfer Efficiency: direct time and cost savings, turnaround Functionality: systems capabilities, usability and quality Management: reporting provides a form of supervision; gives personnel greater control of their resources Strategy: greater market contact; support for company growth; competitive advantage
The Future
Technical progress
Expert systems, e.g., a medical diagnostic system Artificial Intelligence (AI): can deal with less structured issues Data mining: looking for patterns Decision mapping: discovering the structure of a problem Knowledge management, especially as a competitive advantage
Greater internal and external integration (especially within the value chain) IT will be a greater part of corporate strategy