Decision Analysis and Tradeoff Studies
Terry Bahill
Systems and Industrial Engineering
University of Arizona
terry@sie.arizona.edu
© 2000-10, Bahill
This file is located in
http://www.sie.arizona.edu/sysengr/slides/
03/24/14 © 2009 Bahill2
Acknowledgement
This research was supported by
AFOSR/MURI F49620-03-1-0377.
03/24/14 © 2009 Bahill3
Timing estimate for this course*
• Introduction (10 minutes)
• Decision analysis and resolution (49 slides, 40 minutes)
• San Diego Airport example (7 slides, 5 minutes)
• The tradeoff study process and potential problems (238 slides, 145
minutes)
• Summary (6 slides, 10 minutes)
• Dog system exercise (140 minutes)
• Mathematical summary of tradeoff methods (38 slides, 70 minutes)
• Course summary (10 minutes)
• Breaks (50 minutes)
• Total (480 minutes)
03/24/14 © 2009 Bahill4
Outline**
• This course starts with a brief model of human decision making (slides 14-27).
Then it presents a crisp description of the tradeoff study process (Slides 14-67),
which includes a simple example of choosing between two combining methods.
• Then it shows a complex, but well-known tradeoff study example that most
people will be familiar with: the San Diego airport site selection (Slides 68-75).
• Then we go back and examine many difficulties that could arise when designing
a tradeoff study; we show many methods that have been used to overcome
these potential problems (Slides 76-338).
• The course is summarized with slides 339-346.
• In the Dog System Exercise, students create their own solutions for a tradeoff
study. These exercises will be computer based. The students complete one of
the exercise’s eight parts. Then we give them our solutions. They complete
another portion and we give them another solution. The computers will be
preloaded with all of the problems and solutions. The students will use Excel
spreadsheets and a simple program for graphing scoring (utility) functions.
• After the exercise there will be a mathematical summary of tradeoff methods.
Students who are algebraically challenged may excuse themselves.
03/24/14 © 2009 Bahill5
Course administration
• AWO:
• Course Name: Decision Making
and Tradeoff Studies
• Course Number:
• Facilities
Telephones*
Bathrooms
Vending Machines
Exits
03/24/14 © 2009 Bahill6
Course objectives**
• The students should be able to
 Understand human decision making
 Use many techniques, including tradeoff studies, to help
select among alternatives
 Decide whether a problem is a good candidate for a
tradeoff study
 Establish evaluation criteria with weights of importance
 Understand scoring (utility) functions
 Perform a valid tradeoff study
 Fix the do nothing problem
 Use several different combining functions
 Perform a sensitivity analysis
 Be aware of many tradeoff methods
 Develop a decision tree
03/24/14 © 2009 Bahill7
Student introductions
•Name
•Current program assignment
•Related experience
Decision Analysis and Resolution
03/24/14 © 2009 Bahill9
CMMI
• The Capability Maturity Model Integration (CMMI)
is a collection of best practices from diverse
engineering companies
• Improvements to our organization will come
from process improvements, not from people
improvements or technology improvements
• CMMI provides guidance for improving an
organization’s processes
• One of the CMMI process areas is Decision
Analysis and Resolution (DAR)
03/24/14 © 2009 Bahill10
DAR
• Programs and Departments select the decision
problems that require DAR and incorporate them in
their plans (e.g. SEMPs)
• DAR is a common process
• Common processes are tools that the user gets,
tailors and uses
• DAR is invoked throughout the whole program
lifecycle whenever a critical decision is to be made
• DAR is invoked by IPT leads on programs, financial
analysts, program core teams, etc.
• Invoke the DAR Process in work instructions, in gate
reviews, in phase reviews or with other triggers,
which can be used anytime in the system life cycle
03/24/14 © 2009 Bahill11
Typical decisions
• Decision problems that may require a formal
decision process
 Tradeoff studies
 Bid/no-bid
 Make-reuse-buy
 Formal inspection versus checklist
inspection
 Tool and vendor selection
 Cost estimating
 Incipient architectural design
 Hiring and promotions
 Helping your customer to choose a solution
03/24/14 © 2009 Bahill12
It’s not done just once
• A tradeoff study is not something that you do once at the
beginning of a project.
• Throughout a project you are continually making tradeoffs
 creating team communication methods
 selecting components
 choosing implementation techniques
 designing test programs
 maintaining schedule
• Many of these tradeoffs should be formally documented.
03/24/14 © 2009 Bahill13
Purpose**
“In all decisions you gain
something and lose
something. Know what they
are and do it deliberately.”
03/24/14 © 2009 Bahill14
Tradeoff Studies
03/24/14 © 2009 Bahill15
A simple tradeoff study
03/24/14 © 2009 Bahill16
CMMI's DAR process
DAR Specific Practices:
• Decide if formal evaluation is needed (when to do a tradeoff study)
• Establish Evaluation Criteria (what is in a tradeoff study)
• Identify Alternative Solutions
• Select Evaluation Methods
• Evaluate Alternatives
• Select Preferred Solutions
03/24/14 © 2009 Bahill17
Tradeoff Study Process**
These tasks are
drawn serially,
but they are not
performed in a serial
manner. Rather, it is
an iterative process
with many feedback
loops, which are not shown.
[Process flow diagram: the Problem Statement feeds Decide if Formal Evaluation is Needed; Establish Evaluation Criteria, Identify Alternative Solutions and Select Evaluation Methods produce the Evaluation Criteria and Proposed Alternatives; Evaluate Alternatives produces the Formal Evaluations; Select Preferred Solutions produces the Preferred Solutions, which then flow through Perform Expert Review, Present Results and Put In PPAL.]
03/24/14 © 2009 Bahill18
When creating a process
the most important facets are
• illustrating tasks that can be done in parallel
• suggesting feedback loops
• configuration management
• including a process to improve the process
03/24/14 © 2009 Bahill19
Humans make four types of decisions:
• Allocating resources among competing projects*
• Generating plans, schedules and novel ideas
• Negotiating agreements
• Choosing amongst alternatives
 Alternatives can be examined in series or parallel.
 When examined in series it is called sequential
search
 When examined in parallel it is called a tradeoff or a
trade study
 “Tradeoff studies address a range of problems
from selecting high-level system architecture to
selecting a specific piece of commercial off the
shelf hardware or software. Tradeoff studies are
typical outputs of formal evaluation processes.”*
03/24/14 © 2009 Bahill20
History
Ben Franklin’s letter* to
Joseph Priestley outlined one
of the first descriptions of a
tradeoff study.
03/24/14 © 2009 Bahill21
Decide if Formal Evaluation is Needed
[The tradeoff study process flow diagram is repeated here with the Decide if Formal Evaluation is Needed task highlighted.]
03/24/14 © 2009 Bahill22
Is formal evaluation needed?
Companies should have policies for when to do formal
decision analysis. Criteria include
• When the decision is related to a moderate or high-risk issue
• When the decision affects work products under configuration
management
• When the result of the decision could cause significant
schedule delays
• When the result of the decision could cause significant cost
overruns
• On material procurement of the 20 percent of the parts that
constitute 80 percent of the total material costs
03/24/14 © 2009 Bahill23
Guidelines for formal evaluation
• When the decision is selecting one or a few alternatives
from a list
• When a decision is related to major changes in work
products that have been baselined
• When a decision affects the ability to achieve project
objectives
• When the cost of the formal evaluation is reasonable when
compared to the decision’s impact
• On design-implementation decisions when technical
performance failure may cause a catastrophic failure
• On decisions with the potential to significantly reduce design
risk, engineering changes, cycle time or production costs
03/24/14 © 2009 Bahill24
Establish Evaluation Criteria
[The process flow diagram is repeated here with the Establish Evaluation Criteria task highlighted.]
03/24/14 © 2009 Bahill25
Establish evaluation criteria**
• Establish and maintain criteria for evaluating alternatives
• Each criterion must have a weight of importance
• Each criterion should link to a tradeoff requirement, i.e. a
requirement whose acceptable value can be more or less
depending on quantitative values of other requirements.
• Criteria must be arranged hierarchically. The top-level
may be performance, cost, schedule and risk.
 Program Management should prioritize these four criteria
at the beginning of the project and make sure everyone
knows the priorities.
• All companies should have a repository of generic
evaluation criteria.
03/24/14 © 2009 Bahill26
What will you eat for lunch today?
•In class exercise.
•Write some evaluation criteria that
will help you decide.*
03/24/14 © 2009 Bahill27
Killer trades
•Evaluating alternatives is expensive.
•Therefore, early in the tradeoff study, identify very
important requirements* that can eliminate many
alternatives.
•These requirements produce killer criteria.**
•Subsequent killer trades can often eliminate 90%
of the possible alternatives.
03/24/14 © 2009 Bahill28
Identify Alternative Solutions
[The process flow diagram is repeated here with the Identify Alternative Solutions task highlighted.]
03/24/14 © 2009 Bahill29
Identify alternative solutions
• Identify alternative solutions for the problem
statement
• Consider unusual alternatives in order to test the
system requirements*
• Do not list alternatives that do not satisfy all
mandatory requirements**
• Consider use of commercial off the shelf and in-
house entities***
• Use killer trades to eliminate thousands of
infeasible alternatives
03/24/14 © 2009 Bahill30
What will you eat for lunch today?
•In class exercise.
•List some alternatives for today’s lunch.*
03/24/14 © 2009 Bahill31
Select Evaluation Methods
[The process flow diagram is repeated here with the Select Evaluation Methods task highlighted.]
03/24/14 © 2009 Bahill32
Select evaluation methods
• Select the source of the evaluation data and the method for
evaluating the data
• Typical sources for evaluation data include approximations,
product literature, analysis, models, simulations, experiments
and prototypes*
• Methods for combining data and evaluating alternatives include
Multi-Attribute Utility Technique (MAUT), Ideal Point, Search
Beam, Fuzzy Databases, Decision Trees, Expected Utility, Pair-
wise Comparisons, Analytic Hierarchy Process (AHP), Financial
Analysis, Simulation, Monte Carlo, Linear Programming, Design
of Experiments, Group Techniques, Quality Function
Deployment (QFD), radar charts, forming a consensus and
Tradeoff Studies
03/24/14 © 2009 Bahill33
Collect evaluation data
•Using the appropriate source (approximations,
product literature, analysis, models, simulations,
experiments or prototypes) collect data for evaluating
each alternative.
03/24/14 © 2009 Bahill34
Evaluate Alternatives
[The process flow diagram is repeated here with the Evaluate Alternatives task highlighted.]
03/24/14 © 2009 Bahill35
Evaluate alternatives
• Evaluate alternative solutions using the evaluation criteria,
weights of importance, evaluation data, scoring functions
and combining functions.
• Evaluating alternative solutions involves analysis, discussion
and review. Iterative cycles of analysis are sometimes
necessary. Supporting analyses, experimentation,
prototyping, or simulations may be needed to substantiate
scoring and conclusions.
03/24/14 © 2009 Bahill36
Select Preferred Solutions
[The process flow diagram is repeated here with the Select Preferred Solutions task and the Preferred Solutions data store highlighted.]
03/24/14 © 2009 Bahill37
Select preferred solutions
• Select preferred solutions from the alternatives based on
evaluation criteria.
• Selecting preferred alternatives involves weighing and
combining the results from the evaluation of alternatives.
Many combining methods are available.
• The true value of a formal decision process might not be
listing the preferred alternatives. More important outputs
are stimulating thought processes and documenting their
outcomes.
• A sensitivity analysis will help validate your
recommendations.
• The least sensitive criteria should be given weights of 0.
03/24/14 © 2009 Bahill38
Perform Expert Review
[The process flow diagram is repeated here with the Perform Expert Review task highlighted.]
03/24/14 © 2009 Bahill39
Perform expert review 1
• Formal evaluations should be reviewed* at regular
gate reviews such as SRR, PDR and CDR or by
special expert reviews
• Technical reviews started about the same time as
Systems Engineering, in 1960. The concept was
formalized with MIL-STD-1521 in 1972.
• Technical reviews are still around, because there is
evidence that they help produce better systems at
less cost.
03/24/14 © 2009 Bahill40
Perform expert review 2
• Technical reviews evaluate the product of an IPT*
• They are conducted by a knowledgeable board of
specialists including supplier and customer representatives
• The number of board members should be less than the
number of IPT members
• But board expertise should be greater than the IPT’s
experience base
03/24/14 © 2009 Bahill41
Who should come to the review?
• Program Manager
• Chief Systems Engineer
• Review Inspector
• Lead Systems Engineer
• Domain Experts
• IPT Lead
• Facilitator
• Stakeholders for this decision
 Builder
 Customer
 Designer
 Tester
 PC Server
• Depending on the decision, the Lead Hardware Engineer
and the Lead Software Engineer
03/24/14 © 2009 Bahill42
Present results
Present the results* of the formal
evaluation to the original decision maker
and other relevant stakeholders.
03/24/14 © 2009 Bahill43
Put in the PAL
• Formal evaluations reviewed by experts should be put in
the organizational Process Asset Library (PAL) or the
Project Process Asset Library (PPAL)
• Evaluation data for tradeoff studies come from
approximations, analysis, models, simulations,
experiments and prototypes. Each time better data is
obtained the PAL should be updated.
• Formal evaluations should be designed with reuse in mind.
03/24/14 © 2009 Bahill44
Closed Book Quiz, 5 minutes
Fill in the empty boxes
[A copy of the process flow diagram in which only the data stores are labeled (Problem Statement, Proposed Alternatives, Evaluation Criteria, Formal Evaluations, Preferred Solutions); the task boxes are left blank for the quiz.]
03/24/14 © 2009 Bahill45
Tradeoff Study Example
03/24/14 © 2009 Bahill46
Example: What method should we use for evaluating alternatives?**
• Is formal evaluation needed?
• Check the Guidance for Formal Evaluations
• We find that many of its criteria are satisfied including “On
decisions with the potential to significantly reduce design
risk … cycle time ...”
• Establish evaluation criteria
• Ease of Use
• Familiarity
• Killer criterion
• Engineers must think that use of the technique is intuitive.
03/24/14 © 2009 Bahill47
Example (continued) 1
• Identify alternative solutions
 Linear addition of weight times scores, Multiattribute
Utility Theory (MAUT).* This method is often called a
“trade study.” It is often implemented with an Excel
spreadsheet.
 Analytic Hierarchy Process (AHP)**
03/24/14 © 2009 Bahill48
Example (continued) 2
• Select evaluation methods
 The evaluation data will come from expert opinion
 Common methods for combining data and evaluating
alternatives include:
Multi-Attribute Utility Technique (MAUT),
Decision Trees, Analytic Hierarchy Process
(AHP), Pair-wise Comparisons, Ideal Point, Search
Beam, etc.
 In the following slides we will use two methods: linear
addition of weight times scores (MAUT) and the Analytic
Hierarchy Process (AHP)*
03/24/14 © 2009 Bahill49
Example (continued) 3
• Evaluate alternatives
 Let the weights and evaluation data be integers
between 1 and 10, with 10 being the best. The
computer can normalize the weights if necessary.
03/24/14 © 2009 Bahill50
Multi-Attribute Utility Technique (MAUT) 1

Criteria                  | Weight of Importance | MAUT | AHP
Ease of Use               |                      | 8    | 4
Familiarity               |                      |      |
Sum of weight times score |                      |      |

Assess the evaluation data* row by row
03/24/14 © 2009 Bahill51
Multi-Attribute Utility Technique (MAUT) 2

Criteria                  | Weight* of Importance | MAUT | AHP
Ease of Use               | 9                     | 8    | 4
Familiarity               | 3                     | 9    | 2
Sum of weight times score |                       | 99   | 42

MAUT is the winner.
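The arithmetic behind this table is just a weighted sum. Below is a minimal sketch in Python; the dictionary names are illustrative, not part of the course materials.

```python
# Weighted-sum (MAUT-style) combination of the weights and scores above.
weights = {"Ease of Use": 9, "Familiarity": 3}
scores = {
    "MAUT": {"Ease of Use": 8, "Familiarity": 9},
    "AHP":  {"Ease of Use": 4, "Familiarity": 2},
}

totals = {alt: sum(weights[c] * s[c] for c in weights) for alt, s in scores.items()}
print(totals)                       # {'MAUT': 99, 'AHP': 42}
print(max(totals, key=totals.get))  # MAUT, the winner
```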
03/24/14 © 2009 Bahill52
Analytic Hierarchy Process (AHP)

Verbal scale                                       | Numerical value
Equally important, likely or preferred             | 1
Moderately more important, likely or preferred     | 3
Strongly more important, likely or preferred       | 5
Very strongly more important, likely or preferred  | 7
Extremely more important, likely or preferred      | 9
03/24/14 © 2009 Bahill53
AHP, make comparisons
Create a matrix with the criteria on the diagonal and make pair-wise comparisons.*
Ease of Use is moderately more important than Familiarity (3); the reciprocal entry is 1/3.

            | Ease of Use | Familiarity
Ease of Use | 1           | 3
Familiarity | 1/3         | 1
03/24/14 © 2009 Bahill54
AHP, compute weights
• Create a matrix
• Square the matrix
• Add the rows
• Normalize*
$$\begin{bmatrix} 1 & 3 \\ \tfrac{1}{3} & 1 \end{bmatrix} \times \begin{bmatrix} 1 & 3 \\ \tfrac{1}{3} & 1 \end{bmatrix} = \begin{bmatrix} 2 & 6 \\ \tfrac{2}{3} & 2 \end{bmatrix} \Rightarrow \begin{bmatrix} 8 \\ 2.67 \end{bmatrix} \Rightarrow \begin{bmatrix} 0.75 \\ 0.25 \end{bmatrix}$$
03/24/14 © 2009 Bahill55
In-class exercise
• Use these criteria to help select your lunch today.
Closeness, distance to the venue. Is it in the same
building, the next building or do you have to get in a
car and drive?
Tastiness, including gustatory delightfulness,
healthiness, novelty and savoriness.
Price,* total purchase price including tax and tip.
03/24/14 © 2009 Bahill56
To help select lunch today 1
• closeness is ??? more important than tastiness,
• closeness is ??? more important than price,
• tastiness is ??? more important than price.
Closeness Tastiness Price
Closeness
Tastiness
Price
03/24/14 © 2009 Bahill57
To help select lunch today 2
• closeness is strongly more important (5) than tastiness,
• closeness is very strongly more important (7) than price,
• tastiness is moderately more important (3) than price.
          | Closeness | Tastiness | Price
Closeness | 1         | 5         | 7
Tastiness |           | 1         | 3
Price     |           |           | 1
03/24/14 © 2009 Bahill58
To help select lunch today 3
$$\begin{bmatrix} 1 & 5 & 7 \\ \tfrac{1}{5} & 1 & 3 \\ \tfrac{1}{7} & \tfrac{1}{3} & 1 \end{bmatrix} \times \begin{bmatrix} 1 & 5 & 7 \\ \tfrac{1}{5} & 1 & 3 \\ \tfrac{1}{7} & \tfrac{1}{3} & 1 \end{bmatrix} = \begin{bmatrix} 3 & 12.3 & 29 \\ 0.8 & 3 & 7.4 \\ 0.4 & 1.4 & 3 \end{bmatrix} \Rightarrow \begin{bmatrix} 44.3 \\ 11.2 \\ 4.8 \end{bmatrix} \Rightarrow \begin{bmatrix} 0.73 \\ 0.19 \\ 0.08 \end{bmatrix}$$

          | Closeness | Tastiness | Price | Weight of Importance
Closeness | 1         | 5         | 7     | 0.73
Tastiness | 1/5       | 1         | 3     | 0.19
Price     | 1/7       | 1/3       | 1     | 0.08
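The squaring-and-normalizing procedure above is easy to automate. Here is a minimal Python sketch (one squaring pass, an approximation to the principal eigenvector); the function and variable names are ours, not from the course materials.

```python
# Approximate AHP priority weights: square the pairwise-comparison matrix,
# sum the rows, and normalize (the procedure illustrated above).
def ahp_weights(matrix):
    n = len(matrix)
    squared = [[sum(matrix[i][k] * matrix[k][j] for k in range(n))
                for j in range(n)] for i in range(n)]
    row_sums = [sum(row) for row in squared]
    total = sum(row_sums)
    return [r / total for r in row_sums]

lunch = [[1,   5,   7],
         [1/5, 1,   3],
         [1/7, 1/3, 1]]
print(ahp_weights(lunch))   # approximately [0.73, 0.19, 0.08]
```

Repeating the squaring step and re-normalizing refines the estimate; for matrices this small one pass is already close to the weights shown on the slide.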
03/24/14 © 2009 Bahill59
AHP, get scores
Compare each alternative
on the first criterion
In terms of Ease of Use, MAUT is slightly preferred (2) over AHP:

Ease of Use | MAUT | AHP
MAUT        | 1    | 2
AHP         | 1/2  | 1

$$\begin{bmatrix} 1 & 2 \\ \tfrac{1}{2} & 1 \end{bmatrix} \times \begin{bmatrix} 1 & 2 \\ \tfrac{1}{2} & 1 \end{bmatrix} = \begin{bmatrix} 2 & 4 \\ 1 & 2 \end{bmatrix} \Rightarrow \begin{bmatrix} 6 \\ 3 \end{bmatrix} \Rightarrow \begin{bmatrix} 0.67 \\ 0.33 \end{bmatrix}$$
03/24/14 © 2009 Bahill60
AHP, get scores 2
Compare each alternative
on the second criterion
In terms of Familiarity, MAUT is strongly preferred (5) over AHP:

Familiarity | MAUT | AHP
MAUT        | 1    | 5
AHP         | 1/5  | 1

$$\begin{bmatrix} 1 & 5 \\ \tfrac{1}{5} & 1 \end{bmatrix} \times \begin{bmatrix} 1 & 5 \\ \tfrac{1}{5} & 1 \end{bmatrix} = \begin{bmatrix} 2 & 10 \\ 0.4 & 2 \end{bmatrix} \Rightarrow \begin{bmatrix} 12 \\ 2.4 \end{bmatrix} \Rightarrow \begin{bmatrix} 0.83 \\ 0.17 \end{bmatrix}$$
03/24/14 © 2009 Bahill61
AHP, form comparison matrix**
Combine with linear addition*

Criteria                  | Weight of Importance | MAUT | AHP
Ease of Use               | 0.75                 | 0.67 | 0.33
Familiarity               | 0.25                 | 0.83 | 0.17
Sum of weight times score |                      | 0.71 | 0.29

MAUT is the winner.
03/24/14 © 2009 Bahill62
Example (continued) 4
• Select Preferred Solutions
 Linear addition of weight times scores (MAUT)
was the preferred alternative
 Now consider new criteria, such as Repeatability
of Result, Consistency*, Time to Compute
 Do a sensitivity analysis
03/24/14 © 2009 Bahill63
Sensitivity analysis, simple
In terms of Familiarity, MAUT was strongly preferred (5)
over the AHP. Now change this 5 to a 3 and to a 7.
• Changing the scores for Familiarity does not
change the recommended alternative.
• This is good.
• It means the Tradeoff study is robust with
respect to these scores.
Final Score
Familiarity | MAUT | AHP
3           | 0.69 | 0.31
5           | 0.71 | 0.29
7           | 0.72 | 0.28
03/24/14 © 2009 Bahill64
Sensitivity analysis, analytic
Compute the six semirelative-sensitivity
functions, which are defined as
$$\tilde{S}^{F}_{\beta} = \left. \frac{\partial F}{\partial \beta}\, \beta \right|_{NOP}$$
which reads: the semirelative-sensitivity
function of the performance index F with
respect to the parameter β is the partial
derivative of F with respect to β, times β,
with everything evaluated at the normal
operating point (NOP).
03/24/14 © 2009 Bahill65
Sensitivity analysis 2
For the performance index use the alternative rating for
MAUT minus the alternative rating for AHP*
F = F1 - F2 = Wt1×S11 + Wt2×S21 - Wt1×S12 - Wt2×S22

Criteria                  | Weight of Importance | MAUT | AHP
Ease of Use               | Wt1                  | S11  | S12
Familiarity               | Wt2                  | S21  | S22
Sum of weight times score |                      | F1   | F2
03/24/14 © 2009 Bahill66
Sensitivity analysis 3
The semirelative-sensitivity functions*
$$\begin{aligned}
\tilde{S}^{F}_{Wt_1} &= (S_{11} - S_{12})\,Wt_1 = 0.26 \\
\tilde{S}^{F}_{Wt_2} &= (S_{21} - S_{22})\,Wt_2 = 0.16 \\
\tilde{S}^{F}_{S_{11}} &= Wt_1 S_{11} = 0.50 \\
\tilde{S}^{F}_{S_{21}} &= Wt_2 S_{21} = 0.21 \\
\tilde{S}^{F}_{S_{12}} &= -Wt_1 S_{12} = -0.25 \\
\tilde{S}^{F}_{S_{22}} &= -Wt_2 S_{22} = -0.04
\end{aligned}$$
S11 is the most
important
parameter. So go
back and
reevaluate it.
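A numerical check of these six values, written as a short Python sketch; it applies the definition from the earlier slide (partial derivative times the parameter value, here estimated with a central difference) to the performance index F. The parameter names follow the Wt/S table two slides back; everything else is ours.

```python
# Semirelative-sensitivity functions: S~ = (dF/d beta) * beta, at the NOP.
nop = {"Wt1": 0.75, "Wt2": 0.25, "S11": 0.67, "S21": 0.83, "S12": 0.33, "S22": 0.17}

def F(p):   # performance index: MAUT rating minus AHP rating
    return p["Wt1"]*p["S11"] + p["Wt2"]*p["S21"] - p["Wt1"]*p["S12"] - p["Wt2"]*p["S22"]

def semirelative_sensitivity(name, h=1e-6):
    hi, lo = dict(nop), dict(nop)
    hi[name] += h
    lo[name] -= h
    dF_dbeta = (F(hi) - F(lo)) / (2 * h)   # central-difference derivative
    return dF_dbeta * nop[name]

for name in nop:
    print(name, semirelative_sensitivity(name))
# Compare with the slide: 0.26, 0.16, 0.50, 0.21, -0.25, -0.04
```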
03/24/14 © 2009 Bahill67
Sensitivity analysis 4
• The most important parameter is the score
for MAUT on the criterion Ease of Use
• We should go back and re-evaluate the
derivation of that score
In terms of Ease of Use, MAUT is slightly preferred (2) over AHP:

Ease of Use | MAUT | AHP
MAUT        | 1    | 2
AHP         | 1/2  | 1
03/24/14 © 2009 Bahill68
03/24/14 © 2009 Bahill69
Example (continued) 5
• Perform expert review of the tradeoff study.
• Present results to original decision maker.
• Put tradeoff study in PAL.
• Improve the DAR process.
 Add some other techniques, such as AHP, to the DAR
web course
 Fix the utility curves document
 Add image theory to the DAR process
 Change linkages in the documentation system
 Create a course, Decision Making and Tradeoff Studies
03/24/14 © 2009 Bahill70
Quintessential example
A Tradeoff Study of Tradeoff Study Tools
is available at
http://www.sie.arizona.edu/sysengr/sie554/tradeoffStudyOfTradeoffStudyTools.doc
San Diego County Regional Airport Tradeoff Study
This tradeoff study has cost $17 million.
http://www.san.org/authority/assp/index.asp
http://www.san.org/airport_authority/archives/index.asp#master_plan
03/24/14 © 2009 Bahill72
The evaluation criteria tree**
Operational Requirement
  Optimal Airport Layout
  Runway Alignment
  Terrain
  Weather
  Existing land uses
  Wildlife Hazards
  Joint Use and National Defense Compatibility
  Expandability
Ground Access
  Travel Time, percentage of population in three travel time segments
  Roadway Network Capacity, existing and projected daily roadway volumes
  Highway and Transit Accessibility, distance to existing and planned freeways
Environmental Impacts
  Quantity of residential land to be displaced by the airport development
  Noise Impact, population within each of three specific decibel ranges
  Biological Resources
    Wetlands
    Protected species
  Water quality
  Significant cultural resources
Site Development Evaluations
03/24/14 © 2009 Bahill73
Top-level criteria
1. Operational Requirements
2. Ground Access
3. Environmental Impacts
4. Site Development Evaluations
These four evaluation criteria are then
decomposed into a hierarchy
03/24/14 © 2009 Bahill74
Operational Requirements
Optimal Airport Layout
Runway Alignment
Terrain, weather and existing land uses
Wildlife Hazards
Joint Use and National Defense Compatibility
Expandability
03/24/14 © 2009 Bahill75
Ground Access
• Travel Time, percentage of population in three
travel time segments
• Roadway Network Capacity, existing and projected
daily roadway volumes
• Highway and Transit Accessibility, distance to
existing and planned freeways
03/24/14 © 2009 Bahill76
Environmental Impacts
• Quantity of residential land to be displaced
by the airport development
• Noise Impact, population within each of
three specific decibel ranges
• Biological Resources
 Wetlands
 Protected species
• Water quality
• Significant cultural resources
03/24/14 © 2009 Bahill77
Alternative Locations
• Miramar Marine Corps Air Station
• East Miramar
• North Island Naval Air Station
• March Air Force Base
• Marine Corps Base Camp Pendleton
• Imperial County desert site
• Campo and Borrego Springs
• Lindbergh Field
• Off-Shore floating airport
• Corte Madera Valley
03/24/14 © 2009 Bahill78
Tradeoff Studies: the Process and Potential Problems**
03/24/14 © 2009 Bahill80
Outline of this section
• Problem statement
• Models of human decision making
• Components of a tradeoff study
 Problem statement
 Evaluation criteria
 Weights of importance
 Alternative solutions
 The do nothing alternative
 Different distributions of alternatives
 Evaluation data
 Scoring functions
 Scores
 Combining functions
 Preferred alternatives
 Sensitivity analysis
• Other tradeoff techniques
 The ideal point
 The search beam
 Fuzzy sets
 Decision trees
• The wrong answer
• Tradeoff study on tradeoff study tools
• Summary
03/24/14 © 2009 Bahill81
Reference
J. Daniels, P. W. Werner and A. T.
Bahill, Quantitative Methods for
Tradeoff Analyses, Systems Engineering,
4(3), 199-212, 2001.
03/24/14 © 2009 Bahill82
Purpose
The systems engineer’s job
is to elucidate domain
knowledge and capture the
values and preferences of
the decision maker, so that
the decision maker (and
other stakeholders) will
have confidence in the
decision.
The decision maker balances
effort with confidence*
03/24/14 © 2009 Bahill83
03/24/14 © 2009 Bahill84
Tradeoff studies
• Humans exhibit four types of decision making activities
1. Allocating resources among competing projects
2. Making plans, which includes scheduling
3. Negotiating agreements
4. Choosing alternatives from a list
 Series
 Parallel, a tradeoff study
03/24/14 © 2009 Bahill85
A typical tradeoff study matrix

Criteria    | Qualitative weight | Normalized weight | Scoring function    | Alternative-A: Input value | Output score | Score times weight | Alternative-B: Input value | Output score | Score times weight
Criterion-1 | 1 to 10            | 0 to 1            | Type and parameters | Natural units              | 0 to 1       | 0 to 1             | Natural units              | 0 to 1       | 0 to 1
Criterion-2 | 1 to 10            | 0 to 1            | Type and parameters | Natural units              | 0 to 1       | 0 to 1             | Natural units              | 0 to 1       | 0 to 1
Sum         |                    |                   |                     |                            |              | 0 to 1             |                            |              | 0 to 1
03/24/14 © 2009 Bahill86
Pinewood Derby*
03/24/14 © 2009 Bahill87
Part of a Pinewood Derby tradeoff study
Performance figures of merit evaluated on a prototype for a Round Robin with Best Time Scoring

Evaluation criteria         | Input value | Score | Weight | Score times weight
1. Average Races per Car    | 6           | 0.94  | 0.20   | 0.19
2. Number of Ties           | 0           | 1     | 0.20   | 0.20
3. Happiness                |             | 0.87  | 0.60   | 0.52

                            | Qualitative weight | Normalized weight | Input value | Scoring function | Output score | Score times weight
3.1 Percent Happy Scouts    | 10                 | 0.50              | 96          |                  | 0.98         | 0.49
3.2 Number of Irate Parents | 5                  | 0.25              | 1           |                  | 0.50         | 0.13
3.3 Number of Lane Repeats  | 5                  | 0.25              | 0           |                  | 1.00         | 0.25
Sum                         |                    |                   |             |                  | 0.87         | 0.91
http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf
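A quick check of the roll-up arithmetic in this table, as a Python sketch with the values copied from the table (the slide rounds each product to two decimal places):

```python
# Roll up the Pinewood Derby scores: the Happiness sub-criteria first,
# then the three top-level figures of merit.
happiness_sub = [(0.50, 0.98),   # 3.1 Percent Happy Scouts (weight, score)
                 (0.25, 0.50),   # 3.2 Number of Irate Parents
                 (0.25, 1.00)]   # 3.3 Number of Lane Repeats
happiness = sum(w * s for w, s in happiness_sub)

top_level = [(0.20, 0.94),       # 1. Average Races per Car
             (0.20, 1.00),       # 2. Number of Ties
             (0.60, happiness)]  # 3. Happiness
overall = sum(w * s for w, s in top_level)
print(happiness, overall)        # about 0.87 and 0.91, as in the Sum row
```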
03/24/14 © 2009 Bahill88
When do people do tradeoff studies?
• Buying a car
• Buying a house
• Selecting a job
• These decisions are important, you have lots of time to
make the decision and alternatives are apparent.*
• We would not use a tradeoff study to select a drink for
lunch or to select a husband or wife.
• You would also do a tradeoff study when your boss asks
you to do one.
03/24/14 © 2009 Bahill89
Do the tradeoff studies upfront
before all of the costs are locked in**
03/24/14 © 2009 Bahill90
Why discuss this topic?
• Many multicriterion decision-making techniques
exist, but few decision-makers use them.
• Perhaps, because
 They seem complicated
 Different techniques have given different preferred
alternatives
 Different life experiences give different preferred
alternatives
 People don’t think that way*
03/24/14 © 2009 Bahill91
Models of Human Decision Making
03/24/14 © 2009 Bahill92
Series versus parallel 1
• Looking at alternatives in parallel is not an innate
human action.
• Usually people select one hypothesis and work on it
until it is disproved, then they switch to a new
alternative: that’s the scientific method.
• Such serial processing of alternatives has been
demonstrated for
 Fire fighters
 Airline pilots
 Physicians
 Detectives
 Baseball managers
 People looking for restaurants*
03/24/14 © 2009 Bahill93
Series versus parallel 2
• V. V. Krishnan has a model of animals searching for habitat
(home, breeding area, hunting area, etc.)
• It uses the value of each habitat and the cost of moving
between sites.
• When travel between sites is inexpensive, e. g. birds or
honeybees* searching for a nest site, the search is often a
tradeoff study comparing alternatives in parallel.
• When travel is expensive, e.g. beavers searching for a dam
site, the search is usually sequential.
03/24/14 © 2009 Bahill94
Series versus parallel 3**
• If a person is looking for a new car, he or she might
perform a tradeoff study.
• Whereas a person looking for a used car might use a
sequential search, because the availability of cars would
change day by day.
03/24/14 © 2009 Bahill95
The need for change**
•People do not make
good decisions.
•A careful tradeoff
study will help you
overcome human
ineptitude and
thereby make better
decisions.
03/24/14 © 2009 Bahill96
Rational decisions**
• One goal
• Perfect information
• The optimal course of action can be described
• This course maximizes expected value
• This is a prescriptive model. We tell people that, in an
ideal world, this is how they should make decisions.
03/24/14 © 2009 Bahill97
Satisficing**
• When making decisions there is always uncertainty, too
little time and insufficient resources to explore the whole
problem space.
• Therefore, people cannot make rational decisions.
• The term satisficing was coined by Nobel Laureate Herb
Simon in 1955.
• Simon proposed that people do not attempt to find an
optimal solution. Instead, they search for alternatives that
are good enough, alternatives that satisfice.
03/24/14 © 2009 Bahill98
03/24/14 © 2009 Bahill99
Humans are not rational 1**
• Mark Twain said,
 “It ain’t what you don’t know that gets you into trouble. It’s
what you know for sure that just ain’t so.”
• Humans are often very certain of knowledge that is false.
 What American city is directly north of Santiago Chile?
 If you travel from Los Angeles to Reno Nevada, in what
direction would you travel?
• Most humans think that there are more words that start
with the letter r, than there are with r as the third letter.
03/24/14 © 2009 Bahill100
Illusions**
• We call these cognitive illusions.
• We believe them with as much certainty
as we believe optical illusions.
03/24/14 © 2009 Bahill101
The Müller-Lyer Illusion**
03/24/14 © 2009 Bahill102
03/24/14 © 2009 Bahill103
03/24/14 © 2009 Bahill104
Humans judge probabilities poorly**
03/24/14 © 2009 Bahill105
Monty Hall Paradox 1**
03/24/14 © 2009 Bahill106
Monty Hall Paradox 2**
03/24/14 © 2009 Bahill107
Monty Hall Paradox 3**
03/24/14 © 2009 Bahill108
Monty Hall Paradox 4**
03/24/14 © 2009 Bahill109
Monty Hall Paradox 5**
• Now here is your problem.
• Are you better off sticking to your original
choice or switching?
• A lot of people say it makes no difference.
• There are two boxes and one contains a ten-
dollar bill.
• Therefore, your chances of winning are 50/50.
• However, the laws of probability say that you
should switch.
Monty Hall knew which door had the donkey
03/24/14 © 2009 Bahill110
03/24/14 © 2009 Bahill111
Monty Hall Paradox 6**
• The box you originally chose has, and always will have, a
one-third probability of containing the ten-dollar bill.
• The other two, combined, have a two-thirds probability of
containing the ten-dollar bill.
• But at the moment when I open the empty box, then the
other one alone will have a two-thirds probability of
containing the ten-dollar bill.
• Therefore, your best strategy is to always switch!
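If the argument still feels wrong, it is easy to check by simulation. A small Monte Carlo sketch in Python (box numbering and names are ours, not part of the course materials):

```python
import random

# Simulate the game above: three boxes, one ten-dollar bill, and a host who
# always opens an empty box that the player did not pick.
def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        opened = next(b for b in range(3) if b != prize and b != choice)
        if switch:
            choice = next(b for b in range(3) if b != choice and b != opened)
        wins += (choice == prize)
    return wins / trials

print("stay:  ", play(switch=False))   # about 1/3
print("switch:", play(switch=True))    # about 2/3
```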
03/24/14 © 2009 Bahill112
Utility
• We have just discussed the right column,
subjective probability.
• Now we will discuss the bottom row, utility
03/24/14 © 2009 Bahill113
Utility
• Utility is a measure of the happiness, satisfaction or reward
a person gains (or loses) from receiving a good or service.
• Utilities are numbers that express relative preferences
using a particular set of assumptions and methods.
• Utilities include both subjectively judged value and the
assessor's attitude toward risk.
03/24/14 © 2009 Bahill114
Risk
• Systems engineers use risk to evaluate and manage bad
things that could happen, hazards. Risk is measured with the
frequency (or probability) of occurrence times the severity
of the consequences.
• However, in economics and in the psychology of decision
making, risk is defined as the variance of the expected value,
uncertainty.*
Wager | p1  | x1  | p2  | x2  | µ   | σ²  | Risk, uncertainty
A     | 1.0 | $10 |     |     | $10 | $0  | none
B     | 0.5 | $5  | 0.5 | $15 | $10 | $25 | medium
C     | 0.5 | $1  | 0.5 | $19 | $10 | $81 | high
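The µ and σ² columns follow directly from the definitions of expected value and variance; a minimal sketch:

```python
# Expected value (mu) and variance (the economist's measure of risk)
# for the three wagers in the table above.
wagers = {"A": [(1.0, 10)],
          "B": [(0.5, 5), (0.5, 15)],
          "C": [(0.5, 1), (0.5, 19)]}

for name, outcomes in wagers.items():
    mu = sum(p * x for p, x in outcomes)
    var = sum(p * (x - mu) ** 2 for p, x in outcomes)
    print(name, mu, var)   # A: 10, 0   B: 10, 25   C: 10, 81
```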
03/24/14 © 2009 Bahill115
Ambiguity, uncertainty and hazards*
• Hazard: Would you prefer my forest picked mushrooms
or portabella mushrooms from the grocery store?
• Uncertainty: Would you prefer one of my wines or a
Kendall-Jackson Napa Valley merlot?
• Ambiguity: Would you prefer my saffron and oyster sauce
or marinara sauce?
03/24/14 © 2009 Bahill116
Gains and losses are not valued equally**
03/24/14 © 2009 Bahill117
Humans are not rational 2
• Even if they had the knowledge and resources, people
would not make rational decisions, because they do not
evaluate utility rationally.
• Most people would be more concerned with a large
potential loss than with a large potential gain. Losses are
felt more strongly than equal gains.
• Which of these wagers would you prefer to take?*
$2 with probability of 0.5 and $0 with probability 0.5
$1 with probability of 0.99 and $1,000,000 with probability
0.00000001
$3 with probability of 0.999999 and -$1,999,997 with
probability 0.000001
03/24/14 © 2009 Bahill118
Humans are not rational 3
$2 with probability of 0.5 or $0 with probability 0.5
$0
03/24/14 © 2009 Bahill119
Humans are not rational 4
$1 with probability of 0.99
$1,000,000 with
probability 0.00000001
03/24/14 © 2009 Bahill120
Humans are not rational 5
You owe
me two
million
dollars!
$3 with probability
of 0.999999
-$1,999,997 with
probability 0.000001
03/24/14 © 2009 Bahill121
Humans are not rational 6
• Which of these wagers would you prefer to take?
$2 with probability of 0.5 or $0 with probability 0.5
$1 with probability of 0.99 or $1,000,000 with
probability 0.00000001
$3 with probability of 0.999999 or -$1,999,997 with
probability 0.000001
• Most engineers prefer the $2 bet
• Very few people choose the $3 bet
• All three have an expected value of $1
03/24/14 © 2009 Bahill122
Subjective expected utility
combines two subjective concepts: utility and probability.
• Utility is a measure of the happiness or satisfaction a
person gains from receiving a good or service.
• Subjective probability is the person’s assessment of the
frequency or likelihood of the event occurring.
• The subjective expected utility is the product of the utility
times the probability.
03/24/14 © 2009 Bahill123
Subjective expected utility theory
models human decision making as maximizing subjective
expected utility
 maximizing, because people choose the set of alternatives
with the highest total utility,
 subjective, because the choice depends on the decision
maker’s values and preferences, not on reality (e.g.
advertising improves subjective perceptions of a product
without improving the product), and
 expected, because the expected value is used.
• This is a first-order model for human decision making.
• Sometimes it is called Prospect Theory*.
03/24/14 © 2009 Bahill124
03/24/14 © 2009 Bahill125
Why teach tradeoff studies?
• Because emotions, cognitive illusions, biases,
fallacies, fear of regret and use of heuristics
make humans far from ideal decision makers.
• Using tradeoff studies judiciously can help you
make rational decisions.
• We would like to help you move your decisions
from the normal human decision-making lower-
right quadrant to the ideal decision-making
upper-left quadrant.
03/24/14 © 2009 Bahill126
Components of a tradeoff study
 Problem statement
• Evaluation criteria
• Weights of importance
• Alternative solutions
• Evaluation data
• Scoring functions
• Normalized scores
• Combining functions
• Preferred alternatives
• Sensitivity analysis
03/24/14 © 2009 Bahill127
Problem statement
• Stating the problem properly is one of the systems
engineer’s most important tasks, because an elegant
solution to the wrong problem is less than worthless.
• Problem stating is more important than problem solving.
• The problem statement
 describes the customer’s needs,
 states the goals of the project,
 delineates the scope of the problem,
 reports the concept of operations,
 describes the stakeholders,
 lists the deliverables and
 presents the key decisions that must be made.
03/24/14 © 2009 Bahill128
Components of a tradeoff study
• Problem statement
Evaluation criteria
• Weights of importance
• Alternative solutions
• Evaluation data
• Scoring functions
• Scores
• Combining functions
• Preferred alternatives
• Sensitivity analysis
03/24/14 © 2009 Bahill129
Evaluation criteria
• are derived from high priority tradeoff requirements.
• should be independent, but show compensation.
• Each alternative will be given a value that indicates the
degree to which it satisfies each criterion. This should help
distinguish between alternatives.
• Evaluation criteria might be things like performance, cost,
schedule, risk, security, reliability and maintainability.
03/24/14 © 2009 Bahill130
Evaluation criterion template
• Name of criterion
• Description
• Weight of importance (priority)
• Basic measure
• Units
• Measurement method
• Input (with expected values or the domain)
• Output
• Scoring function (type and parameters)
• Traces to (requirement of document)
03/24/14 © 2009 Bahill131
Example criterion package 1
• Name of criterion: Percent Happy Scouts
• Description: The percentage of scouts that leave the race
with a generally happy feeling. This criterion was suggested
by Sales and Marketing and the Customer.
• Weight of importance: 10
• Basic measure:* Percentage of scouts who leave the event
looking happy, contented or pleased
• Units: percentage
• Measurement method: Estimate by the Pinewood Derby
Marshall
• Input: The domain is 0 to 100%. The expected values are
70 to 100%.
03/24/14 © 2009 Bahill132
Example criterion package 2
• Output: 0 to 1
• Scoring function:* Monotonic increasing with lower
threshold of 0, baseline of 90, baseline slope of 0.1 and
upper threshold of 100.
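The course uses its own standard scoring-function equations, which are not reproduced in these slides. The sketch below is only one plausible realization of a monotonic increasing scoring function with a lower threshold, a baseline where the score is 0.5, a baseline slope, and an upper threshold (a logistic shape, clipped at the thresholds); it is an assumption, not the course's formula.

```python
import math

# Illustrative scoring function for the Percent Happy Scouts criterion:
# lower threshold 0, baseline 90 (score 0.5), baseline slope 0.1,
# upper threshold 100.  The logistic shape is an assumption.
def scoring_function(x, lower=0.0, baseline=90.0, slope=0.1, upper=100.0):
    if x <= lower:
        return 0.0
    if x >= upper:
        return 1.0
    a = 4.0 * slope   # logistic steepness giving the requested slope at the baseline
    return 1.0 / (1.0 + math.exp(-a * (x - baseline)))

print(scoring_function(96))   # roughly 0.9 (the Pinewood Derby table lists 0.98)
```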
03/24/14 © 2009 Bahill133
Second example criterion package 1**
• Name of criterion: Total Event Time
• Description: The total event time will be calculated by
subtracting the start time from the end time.
• Weight of importance: 8
• Basic measure: Duration of the derby from start to finish.
• Units: Hours
• Measurement method: Observation, recording and
calculation by the Pinewood Derby Marshall.
• Input: The domain is 0 to 8 hours. The expected values are
1 to 6 hours.
03/24/14 © 2009 Bahill134
Second example criterion package 2
• Output: 0 to 1
• Scoring function: Biphasic hill shape with lower threshold
of 0, lower baseline of 2, lower baseline slope of 0.67,
optimum of 3.5, upper baseline of 4.5, upper baseline slope
of -1 and upper threshold of 8.
03/24/14 © 2009 Bahill135
Verboten criteria
• Availability should not be a criterion, because it cannot be
traded off.*
• Assume oranges are available 6 months out of the year.
• Would it make sense to do a tradeoff study selecting
between apples and oranges and give oranges an availability
expected value of 0.5?
• Suppose your tradeoff study selects oranges, but it is
October and oranges are not available: the tradeoff study
makes no sense.
03/24/14 © 2009 Bahill136
Mini-summary
Evaluation criteria are quantitative measures for
evaluating how well a system satisfies its
performance, cost, schedule or risk requirements.
03/24/14 © 2009 Bahill137
Evaluation criteria are also called
• Attributes*
• Objectives
• Metrics
• Measures
• Quality characteristics
• Figures of merit
• Acceptance criteria
“Regardless of what has gone before,
the acceptance criteria determine what
is actually built.”
03/24/14 © 2009 Bahill138
Other similar terms
• Index
• Indicators
• Factors
• Scales
• Measures of Effectiveness
• Measures of Performance
03/24/14 © 2009 Bahill139
MoE versus MoP
• Generally, it is not worth the effort to debate nuances of
these terms. But here is an example.
• Measures of Effectiveness (MoEs) show how well (utility or
value) a part of the system mission is satisfied.
For an undergraduate student trying to earn a
Bachelors degree, his or her class (Freshman,
Sophomore, Junior or Senior) would be an MoE.
• Measures of Performance (MoPs) show how well the
system functions.
For our undergraduate student, their grade point
average would be an MoP.*
• MoEs are often computed using several MoPs.
MoEs versus MoPs 2
•The city of Tucson wants to widen Grant
Road between I-10 and Alvernon Road. They
want six lanes with a median, a 45 mph
speed limit, and no traffic jams.
•MoEs
 cars per day averaged over two weeks
 cars per hour between 5 and 6 PM, Monday to
Friday, averaged over two weeks
•MoPs
 number of pot holes after one year
 traffic noise (in dB) at local store fronts
 smoothness of the surface
 esthetics of landscaping
 straightness of the road
 travel time from I-10 to Alvernon
 number of traffic lights
03/24/14 © 2009 Bahill140
MoEs versus MoPs 3
• MoEs are typically owned by the customer
• MoPs are typically owned by the contractor
03/24/14 © 2009 Bahill141
03/24/14 © 2009 Bahill142
Moe*
thinks at a higher level
than the mop does
MoEs, MoPs, KPIs, FoMs and evaluation criteria
• MoEs quantify how well the mission
is satisfied
• MoPs quantify how well the system
functions
• Key performance indices (KPIs) are
the most important MoPs
• Evaluation criteria are MoPs that
are used in tradeoff studies
• Figures of Merit (FoMs) are the
same as evaluation criteria.
• All of these must trace to
requirements
03/24/14 © 2009 Bahill143
03/24/14 © 2009 Bahill144
Properties of Good Evaluation Criteria
03/24/14 © 2009 Bahill145
Properties of good evaluation criteria
• Criteria should be objective
• Criteria should be quantitative
• Wording of criteria is very important
• Criteria should be independent
• Criteria should show compensation
• Criteria should be linked to requirements
• The criteria set should be hierarchical
• The criteria set should cover the domain evenly
• The criteria set should be transitive
• Temporal order should not be important
• Criteria should be time invariant
Overview slide
03/24/14 © 2009 Bahill146
Evaluation criteria properties
• These properties deal with
 verification
 the combining function
 individual criteria
 sets of criteria
• But problems created by violating these
properties can be ameliorated by
reengineering the criteria
03/24/14 © 2009 Bahill147
Evaluation criteria should be objective
(observer independent)
• Being Pretty or Nice should not be a criterion for
selecting crewmembers
• In sports, Most Valuable Player selections are often
controversial
• Deriving a consensus for the Best Football Player of the
Century would be impossible
03/24/14 © 2009 Bahill148
Evaluation criteria should be quantitative
Each criterion should have a scoring function
03/24/14 © 2009 Bahill149
Evaluation criteria should be worded in a
positive manner, so that more is better**
• Use Uptime rather than Downtime.
• Use Mean Time Between Failures rather than
Failure Rate.
• Use Probability of Success, rather than
Probability of Failure.
• When using scoring functions make sure more
output is better
• “Nobody does it like Sara Lee”SM
03/24/14 © 2009 Bahill150
Exercise: rewrite this statement
We have a surgical procedure that should
cure your problem. Statistically one percent
of the people who undergo this surgery die.
Would you like to have this surgery?
03/24/14 © 2009 Bahill151
Percent happy scouts
• The Pinewood Derby tradeoff study had these criteria
 Percent Happy Scouts
 Number of Irate Parents
• Because people evaluate losses and gains differently, the
Preferred alternatives might have been different if they
had used
 Percent Unhappy Scouts
 Number of Ecstatic Parents
03/24/14 © 2009 Bahill152
03/24/14 © 2009 Bahill153
Criteria should be independent
• Human Sex and IQ are independent
• Human Height and Weight are dependent
03/24/14 © 2009 Bahill154
The importance of independence
Buying a new car, couple-1 criteria
• Wife
 Safety
• Husband
 Peak Horse Power
03/24/14 © 2009 Bahill155
Buying a new car, couple-2 criteria
• Wife
 Safety
• Husband
 Maximum Horse Power
 Peak Torque
 Top Speed
 Time for the Standing Quarter Mile
 Engine Size (in liters)
 Number of Cylinders.
 Time to Accelerate 0 to 60 mph
What kind of a car do
you think they will buy?*
03/24/14 © 2009 Bahill156
Criteria should show compensation
From the Systems Engineering literature, tradeoff
requirements show compensation
Dictionary definition
compensate v. 1. To offset: counterbalance.
Compensate means to tradeoff. You are happy to
accept less of one thing in order to get more of
another and vice versa.
03/24/14 © 2009 Bahill157
Perfect compensation
• Astronauts growing food on a trip to Mars
• Two criteria: Amount of Rice Grown and Amount of
Beans Grown
• Goal: maximize* total amount of food
• A lot of rice and a few beans is just as good as lots of
beans and little rice
• We can tradeoff beans for rice
03/24/14 © 2009 Bahill158
No compensation
• A system that produces oxygen and water for our
astronauts
• A system that produced a huge amount of water, but no
oxygen might get the highest score, but, clearly, it would
not support life for long.
• From Systems
Engineering,
mandatory
requirements show
no compensation
03/24/14 © 2009 Bahill159
Choosing today’s lunch
• Candidate meals: pizza, hamburger, fish & chips, chicken
sandwich, beer, tacos, bread and water
• Criteria: Cost, Preparation Time, Tastiness, Novelty, Low
Fat, Contains the Five Food Groups, Complements Merlot
Wine, Closeness of Venue
• These criteria are independent and also show
compensation
• Criteria are usually nouns, noun phrases or verb phrases
03/24/14 © 2009 Bahill160
03/24/14 © 2009 Bahill161
03/24/14 © 2009 Bahill162
03/24/14 © 2009 Bahill163
Sometimes it is hard to get both
independence and compensation
• If two criteria are independent, they
might not show compensation
• If they show compensation, they might
not be independent
• Independence is more important for
mandatory requirements
• Compensation is more important for
tradeoff requirements
03/24/14 © 2009 Bahill164
Relationships
• Each evaluation criterion must be
linked to a tradeoff requirement.
 Or in early design phases to a
Mission statement, ConOps, OCD
or company policy.
• But only a few tradeoff requirements
are used in the tradeoff study.
03/24/14 © 2009 Bahill165
Evaluation criteria hierarchy
• The criteria tree should be hierarchical
• The top level often contains
 Performance
 Cost
 Schedule
 Risk
• Dependent entries are grouped into subcategories
• The criteria set should cover the domain evenly
03/24/14 © 2009 Bahill166
Evaluation criteria set should be transitive**
If A is preferred to B,
and B is preferred to C,
then A should be preferred to C.
This property is needed for assigning weights.
03/24/14 © 2009 Bahill167
Temporal order should not be important
Criteria should be created so that the
temporal order is not important for
verifying or combining.
03/24/14 © 2009 Bahill168
The temporal order of verifying
criteria should not be important
• Criteria requiring that clothing be Flame Proof and Water
Resistant would make the verification results depend on
which we tested first
 If the criteria depend on temporal order, then an expert
system or a decision tree might be more suitable
03/24/14 © 2009 Bahill169
Temporal order should not be important
• Fragment of a job application
• Q: “Have you ever been arrested?”
 A: “No.”
• Q: “Why?”
 A: “Never got caught.”
03/24/14 © 2009 Bahill170
The temporal order of combining
criteria should not be important
• Consider a combining function (CF) that adds two
numbers truncating the fraction
(0.2 CF 0.6) CF 0.9 = 0, however,
(0.9 CF 0.6) CF 0.2 = 1,
the result depends on the order.
• With the Boolean NAND* function (↑)
(0 ↑1) ↑ 1 = 0 however, (1 ↑1) ↑ 0 = 1,
the result depends on the order.
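The truncating-add example, spelled out as a few lines of Python:

```python
# An order-dependent combining function: add, then truncate the fraction.
def cf(a, b):
    return int(a + b)

print(cf(cf(0.2, 0.6), 0.9))   # 0
print(cf(cf(0.9, 0.6), 0.2))   # 1 -- same inputs, different order, different result
```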
Order of presentation is important
• The starred question is the only question that department and college promotion
committees look at. It is the only question reported in the TCE History.
• Larry Alimony’s CIEQ
• I would take another course that was taught this way
• The course was quite boring
• The instructor seemed interested in students as individuals
• The instructor exhibited a thorough knowledge of the subject matter
What is your overall rating of this instructor’s teaching effectiveness?
• TCE
 What is your overall rating of this instructor’s teaching effectiveness?
• What is your overall rating of the course?
• Rate the usefulness of HW, projects, etc.
• What is your rating of this instructor compared to other instructors?
• The difficulty level of the course is …
03/24/14 © 2009 Bahill171
03/24/14 © 2009 Bahill172
Criteria should be time invariantCriteria should be time invariant
• Criteria should not change with time
• It would be nice if the evaluation data also
did not change with time, but this is
unrealistic
03/24/14 © 2009 Bahill173
Evaluation criteria library
• Criteria should be created so that they can be reused.
• Your company should have a library of generic criteria.
• Each criterion package would have the following slots
 Name
 Description
 Weight of importance (priority)
 Basic measure
 Units
 Measurement method
 Input (with allowed and expected range)
 Output
 Scoring function (type and parameters)
 Trace to (document)
03/24/14 © 2009 Bahill174
Components of a tradeoff studyComponents of a tradeoff study
• Problem statement
• Evaluation criteria
 Weights of importance
• Alternative solutions
• Evaluation data
• Scoring functions
• Scores
• Combining functions
• Preferred alternatives
• Sensitivity analysis
03/24/14 © 2009 Bahill175
Weights of importanceWeights of importance
The decision maker should
assign weights so that the
more important criteria will
have more effect on the
outcome.
03/24/14 © 2009 Bahill176
Using weightsUsing weights
For the Sum Combining Function:
Output = Σ (j = 1 to n) weight_j × score_j
For the Product Combining Function, the weights should be
put in the exponent:
Output = Π (j = 1 to n) score_j ^ weight_j
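A minimal Python sketch of these two weighted combining functions follows; the two-criterion weights and scores are illustrative placeholders, not values prescribed by the course.

import math

def sum_combining(weights, scores):
    """Weighted Sum Combining Function: sum of weight_j * score_j."""
    return sum(w * s for w, s in zip(weights, scores))

def product_combining(weights, scores):
    """Weighted Product Combining Function: product of score_j ** weight_j."""
    return math.prod(s ** w for w, s in zip(weights, scores))

# Illustrative example: two criteria with normalized weights and scores in [0, 1]
weights = [0.75, 0.25]
scores_alt1 = [0.67, 0.83]
print(sum_combining(weights, scores_alt1))      # about 0.71
print(product_combining(weights, scores_alt1))  # about 0.71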
03/24/14 © 2009 Bahill177
Part of a Pinewood Derby tradeoff studyPart of a Pinewood Derby tradeoff study
Performance figures of merit evaluated on a prototype for a Round Robin with Best Time Scoring
Figure of Merit           | Input value | Score | Weight | Score × weight
1. Average Races per Car  | 6           | 0.94  | 0.20   | 0.19
2. Number of Ties         | 0           | 1     | 0.20   | 0.20
3. Happiness              |             | 0.87  | 0.60   | 0.52
Sum of weighted scores: 0.91

Happiness sub-criteria      | Qualitative weight | Normalized weight | Input value | Score | Score × weight
3.1 Percent Happy Scouts    | 10                 | 0.50              | 96          | 0.98  | 0.49
3.2 Number of Irate Parents | 5                  | 0.25              | 1           | 0.50  | 0.13
3.3 Number of Lane Repeats  | 5                  | 0.25              | 0           | 1.00  | 0.25
Sum of weighted scores: 0.87
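As a quick check of the table above, a short Python sketch reproduces the weighted-sum roll-up (the Happiness sub-criteria first, then the top level); the numbers come from the table, and the small differences from the displayed totals are only rounding.

# Happiness sub-criteria: (normalized weight, score) from the table
happiness = [(0.50, 0.98), (0.25, 0.50), (0.25, 1.00)]
happiness_score = sum(w * s for w, s in happiness)
print(happiness_score)  # 0.865, reported as 0.87 in the table

# Top-level figures of merit: (weight, score); Happiness uses the rolled-up score
top_level = [(0.20, 0.94), (0.20, 1.00), (0.60, happiness_score)]
overall = sum(w * s for w, s in top_level)
print(overall)  # about 0.907, reported as 0.91 in the table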
03/24/14 © 2009 Bahill178
Aspects that help establish weightsAspects that help establish weights
Reference: A Prioritization Process
Organizational Commitment, Criticality to Mission Success,
Architecture, Business Value, Priority of Scenarios (use cases),
Frequency of Use, Benefit, Cost, Benefit to Cost Ratio,
When it is needed, Time Required, Risk, Safety, Complexity,
Implementation Difficulty, Stability, Dependencies,
Reuse Potential
03/24/14 © 2009 Bahill179
03/24/14 © 2009 Bahill180
Cardinal versus ordinalCardinal versus ordinal
• Weights should be cardinal measures not
ordinal measures.
• Cardinal measures indicate size or quantity.
• Ordinal measures merely indicate rank
ordering.*
• Cardinal numbers do not just tell us that one
criterion is more important than another –
they tell us how much more important.
• If one criterion has a weight of 6 and another
a weight of 3, then the first is twice as
important as the second.
03/24/14 © 2009 Bahill181
Methods for deriving weights*Methods for deriving weights*
• Decision maker assigns numbers between 1 and
10 to criteria*
• Decision maker rank orders the criteria*
• Decision maker makes pair-wise comparisons of
criteria (AHP)*
• Algorithms are available that combine
performance, cost, schedule and risk
• Quality Function Deployment (QFD)
• The method of swing weights
• Some people advocate assigning weights only after
deriving evaluation data*
03/24/14 © 2009 Bahill182
Components of a tradeoff studyComponents of a tradeoff study
• Problem statement
• Evaluation criteria
• Weights of importance
 Alternative solutions
• Evaluation data
• Scoring functions
• Scores
• Combining functions
• Preferred alternatives
• Sensitivity analysis
03/24/14 © 2009 Bahill183
AlternativesAlternatives
03/24/14 © 2009 Bahill184
The Do Nothing AlternativeThe Do Nothing Alternative
03/24/14 © 2009 Bahill185
The status quoThe status quo
"Selecting an option from a group of similar options can be
difficult to justify and thus may increase the apparent
attractiveness of retaining the status quo. To avoid this
tendency, the decision maker should identify each
potentially attractive option and compare it directly to the
status quo, in the absence of competing alternatives. If such
direct comparison yields discrepant judgments, the decision
maker should reflect on the inconsistency before making a
final choice."
Redelmeier and Shafir, 1995
03/24/14 © 2009 Bahill186
Selecting a new carSelecting a new car
Bahill has a Datsun 240Z
with 160,000 miles
His replacement options are
DoDo
NothingNothing
03/24/14 © 2009 Bahill187
The Do Nothing alternatives forThe Do Nothing alternatives for
replacing a Datsun 240Z
 Status quo, keep the 240Z
 Nihilism, do without a car, i.e., walk or take the bus
03/24/14 © 2009 Bahill188
If the Do Nothing alternative wins,If the Do Nothing alternative wins,
your Cost, Schedule and Risk criteria may have
overwhelmed your Performance criteria.
03/24/14 © 2009 Bahill189
If a Do Nothing alternative winsIf a Do Nothing alternative wins22
• Just as you should not add apples and oranges, you
should not combine Performance, Cost, Schedule
and Risk criteria with each other
 Combine the Performance criteria (with their
weights normalized so that they add up to one)
 Combine the Cost criteria
 Combine the Schedule criteria
 Combine the Risk criteria
• Then the Performance, Cost, Schedule and Risk
combinations can be combined with clearly stated
weights, 1/4, 1/4, 1/4 and 1/4 could be the default.
• If a Do Nothing alternative still wins, you may have
the weight for Performance too low.
03/24/14 © 2009 Bahill190
Balanced scorecardBalanced scorecard
The Business community says that you
should balance these perspectives:
 Innovation (Learning and Growth)
 Internal Processes
 Customer
 Financial
03/24/14 © 2009 Bahill191
Sacred cowsSacred cows**
• One important purpose for including a do nothing
alternative (and other bizarre alternatives) is to help get
the requirements right. If a bizarre alternative wins the
tradeoff analysis, then you do not have the requirements
right.
• Similarly, including sacred cows in the alternatives will
also test the adequacy of the requirements.
• “For a successful technology, reality must take
precedence over public relations, for nature cannot be
fooled.” -- Richard Feynman
03/24/14 © 2009 Bahill192
Alternative conceptsAlternative concepts
• When formulating alternative concepts, remember
Miller’s* “magical number seven, plus or minus two.”
• Also remember that introducing more alternatives only
confuses the DM and makes him or her less likely to
choose one of the new alternatives.**
03/24/14 © 2009 Bahill193
SynonymsSynonyms
• Alternative concepts
• Alternative solutions
• Alternative designs
• Alternative architectures
• Options
03/24/14 © 2009 Bahill194
RiskRisk
• The risks included in a tradeoff study should
only be those that can be traded off. Do not
include the highest-level risks.
• Risks might be computed in a separate section,
because they usually use the product
combining function.
03/24/14 © 2009 Bahill195
CAIVCAIV
• Cost as an independent variable (CAIV)
• Treating CAIV means that you should do the tradeoff
study with a specific cost and then go talk to your
customer and see what performance, schedule and risk
requirements he or she is willing to give up in order to get
that cost.
• So if you want to treat CAIV, then keep your tradeoff
study independent of cost: that is, do not use cost criteria
in your tradeoff study.
03/24/14 © 2009 Bahill196
Two types of requirementsTwo types of requirements
•There are two types of requirements
mandatory requirements
tradeoff requirements
03/24/14 © 2009 Bahill197
Mandatory requirementsMandatory requirements
• Mandatory requirements specify necessary and
sufficient capabilities that the system must have to
satisfy customer needs and expectations.
• They use the words shall or must.
• They are either passed or failed, with no in between.
• They should not be included in a tradeoff study.
• Here is an example of a mandatory requirement:
 The system shall not violate federal, state or local laws.
03/24/14 © 2009 Bahill198
Tradeoff requirementsTradeoff requirements
• Tradeoff requirements state capabilities that would make
the customer happier.
• They use the words should or want.
• They use measures of effectiveness and scoring functions.
• They are evaluated with multicriterion decision techniques.
• There will be tradeoffs among these requirements.
• Here is an example of a tradeoff requirement:
Dinner should have items from each of the five food
groups: Grains, Vegetables, Fruits, Wine, Milk, and
Meat and Beans.
• Mandatory requirements are often the upper or lower
limits of tradeoff requirements.
03/24/14 © 2009 Bahill199
Mandatory requirementsMandatory requirements
should not be in a tradeoff study, because they cannot be
traded off.
• Including them screws things up incredibly.
03/24/14 © 2009 Bahill200
Components of a tradeoff studyComponents of a tradeoff study
• Problem statement
• Evaluation criteria
• Weights of importance
• Alternative solutions
 Evaluation data
• Scoring functions
• Scores
• Combining functions
• Preferred alternatives
• Sensitivity analysis
03/24/14 © 2009 Bahill201
Evaluation dataEvaluation data11
• Evaluation data come from approximations,
product literature, analysis, models, simulations,
experiments and prototypes.
• It would be nice if these values were objective,
but sometimes we must resort to elicitation of
personal preferences.*
• They will be measured in natural units.
03/24/14 © 2009 Bahill202
Evaluation dataEvaluation data22
• Evaluation data should be entered into the matrix one row
(one criterion) at a time.
• They indicate the degree to which each alternative satisfies
each criterion.
• They are not probabilities: they are more like fuzzy
numbers, degree of membership or degree of fulfillment.
03/24/14 © 2009 Bahill203
UncertaintyUncertainty
• Evaluation data (and weights of importance) should, when
convenient, have measures of uncertainty associated with
the data.
• This could be done with probability density functions, fuzzy
numbers, variance, expected range, certainty factors,
confidence intervals, or simple color coding.
03/24/14 © 2009 Bahill204
NormalizationNormalization**
• Evaluation data are transformed into normalized
scores by scoring functions (utility curves) or
qualitative scales (fuzzy sets).
• The outputs of such transformations should be
cardinal numbers representing the DM's utility.
03/24/14 © 2009 Bahill205
Scoring function exampleScoring function example
This scoring function reflects the DM's utility: he would
be twice as satisfied with 91% happy scouts as with
88% happy scouts.*
03/24/14 © 2009 Bahill206
QualitativeQualitative scales examplesscales examples
Evaluation data Qualitative evaluation Output
Good example
0 to 86% happy scouts Not satisfied 0.2
86 to 89% happy scouts Marginally satisfied 0.4
89 to 91% happy scouts Satisfied 0.6
91 to 93% happy scouts Very satisfied 0.8
93 to 100% happy scouts Elated 1.0
Bad example
0 to 20% happy scouts Not satisfied 0.2
20 to 40% happy scouts Marginally satisfied 0.4
40 to 60% happy scouts Satisfied 0.6
60 to 80 % happy scouts Very satisfied 0.8
80 to 100% happy scouts Elated 1.0
03/24/14 © 2009 Bahill207
Components of a tradeoff studyComponents of a tradeoff study
• Problem statement
• Evaluation criteria
• Weights of importance
• Alternative solutions
• Evaluation data
 Scoring functions
• Scores
• Combining functions
• Preferred alternatives
• Sensitivity analysis
03/24/14 © 2009 Bahill208
What is the best package of soda pop to buy?*What is the best package of soda pop to buy?*
Regular price of Coca-Cola in Tucson, January 1995.
The Cost criterion is the reciprocal of price.
The Performance criterion is the quantity in liters.
Choosing Amongst Alternative Soda Pop Packages
(Price is the raw data; Cost and Quantity are the criteria; the remaining columns are trade-off values)

Item    | Price (dollars) | Cost (1/dollars) | Quantity (liters) | Sum  | Product | Sum Minus Product | Compromise with p=2 | Compromise with p=10
1 can   | 0.50 | 2.00 | 0.35 | 2.35 | 0.70 | 1.65 | 2.03 | 2.00
20 oz   | 0.60 | 1.67 | 0.59 | 2.26 | 0.98 | 1.27 | 1.77 | 1.67
1 liter | 0.79 | 1.27 | 1.00 | 2.27 | 1.27 | 1.00 | 1.62 | 1.27
2 liter | 1.29 | 0.78 | 2.00 | 2.78 | 1.56 | 1.22 | 2.15 | 2.00
6 pack  | 2.29 | 0.44 | 2.13 | 2.57 | 0.94 | 1.63 | 2.17 | 2.13
3 liter | 1.69 | 0.59 | 3.00 | 3.59 | 1.78 | 1.81 | 3.06 | 3.00
12 pack | 3.59 | 0.28 | 4.26 | 4.54 | 1.19 | 3.35 | 4.27 | 4.26
24 pack | 5.19 | 0.19 | 8.52 | 8.71 | 1.62 | 7.09 | 8.52 | 8.52
03/24/14 © 2009 Bahill209
Numerical precisionNumerical precision**
03/24/14 © 2009 Bahill210
The preferred alternative depends on the unitsThe preferred alternative depends on the units
For the Sum but not for the Product Tradeoff Function.
Choosing Amongst Alternative Soda Pop Packages, Effect of Units

Item    | Price (dollars) | Cost (1/dollars) | Quantity (liters) | Sum  | Product | Quantity (barrels) | Sum    | Product
1 can   | 0.50 | 2.00 | 0.35 | 2.35 | 0.70 | 0.0003 | 2.0003 | 0.0060
20 oz   | 0.60 | 1.67 | 0.59 | 2.26 | 0.98 | 0.0050 | 1.6717 | 0.0084
1 liter | 0.79 | 1.27 | 1.00 | 2.27 | 1.27 | 0.0085 | 1.2785 | 0.0108
2 liter | 1.29 | 0.78 | 2.00 | 2.78 | 1.56 | 0.0170 | 0.7837 | 0.0132
6 pack  | 2.29 | 0.44 | 2.13 | 2.57 | 0.94 | 0.0181 | 0.4548 | 0.0079
3 liter | 1.69 | 0.59 | 3.00 | 3.59 | 1.78 | 0.0256 | 0.6173 | 0.0151
12 pack | 3.59 | 0.28 | 4.26 | 4.54 | 1.19 | 0.0363 | 0.3148 | 0.0101
24 pack | 5.19 | 0.19 | 8.52 | 8.71 | 1.62 | 0.0726 | 0.2653 | 0.0140
03/24/14 © 2009 Bahill211
Scoring functionsScoring functions
• Criteria should always have scoring functions so that the
preferred alternatives do not depend on the units used.
• Scoring functions are also called
 utility functions
 utility curves
 value functions
 normalization functions
 mappings
03/24/14 © 2009 Bahill212
Scoring function for CostScoring function for Cost**
03/24/14 © 2009 Bahill213
Scoring function for QuantityScoring function for Quantity**
A simple program that creates graphs such as these is
available for free at
http://www.sie.arizona.edu/sysengr/slides.
It is called the Wymorian Scoring Function tool.
03/24/14 © 2009 Bahill214
The scoring function equationThe scoring function equation**
SSF1(CriteriaValue) =
1 / ( 1 + ( (Baseline − Lower) / (CriteriaValue − Lower) ) ^ ( 2 × Slope × (Baseline + CriteriaValue − 2 × Lower) ) )
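A minimal Python sketch of this scoring-function equation follows; the parameter names mirror the equation, and the Lower, Baseline and Slope values in the example are illustrative, not taken from the course.

def ssf1(criteria_value, lower, baseline, slope):
    """Wymorian standard scoring function SSF1 (increasing for values above Lower).

    Returns a score between 0 and 1; the score is 0.5 when criteria_value equals baseline.
    """
    ratio = (baseline - lower) / (criteria_value - lower)
    exponent = 2.0 * slope * (baseline + criteria_value - 2.0 * lower)
    return 1.0 / (1.0 + ratio ** exponent)

# Illustrative parameters only
print(ssf1(86.0, lower=80.0, baseline=86.0, slope=0.1))  # 0.5 at the baseline
print(ssf1(91.0, lower=80.0, baseline=86.0, slope=0.1))  # about 0.89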
03/24/14 © 2009 Bahill215
Evaluation data may be logarithmicEvaluation data may be logarithmic**
03/24/14 © 2009 Bahill216
The need for scoring functionsThe need for scoring functions11
**
• You can add $s and £s, but
• you can’t add $s and lbs.
03/24/14 © 2009 Bahill217
The need for scoring functionsThe need for scoring functions22
• Would you add values for something that cost a
billion dollars and lasted a nanosecond?*
• Alt-1 costs a hundred dollars and lasts one
millisecond, Sum = 100.001.
• Alt-2 costs only ninety-nine dollars but lasts two
milliseconds, Sum = 99.002.
• Does the duration have any effect on the decision?
03/24/14 © 2009 Bahill218
Different Distributions of Alternatives inDifferent Distributions of Alternatives in
Criteria SpaceCriteria Space**
May Produce DifferentMay Produce Different
Preferred AlternativesPreferred Alternatives
Tradeoff of requirements*Tradeoff of requirements*
03/24/14 © 2009 Bahill219
[Figure: Cost (1/k$) versus Pages per Minute for three printer alternatives: 4P, 4Plus and 4Si]
03/24/14 © 2009 Bahill220
Pareto OptimalPareto Optimal
Moving from one alternative to another will improve at
least one criterion and worsen at least one criterion, i.e.,
there will be tradeoffs.
“The true value of a service or product is determined by
what one is willing to give up to obtain it.”
03/24/14 © 2009 Bahill221
NomenclatureNomenclature
Real-world data will not fall neatly onto lines such as the
circle in the previous slide. But often they may be
bounded by such functions. In the operations research
literature such data sets are called convex, although the
function bounding them is called concave (Kuhn and
Tucker, 1951).
03/24/14 © 2009 Bahill222
Different distributionsDifferent distributions
The feasible alternatives may have
different distributions in the criteria
space. These include:
 Circle
 Straight Line
 Hyperbola
03/24/14 © 2009 Bahill223
Alternatives on a circleAlternatives on a circle**
Alternatives on a Circle
Assume the alternatives are on the circle x² + y² = 1, so y = √(1 − x²)
Sum Combining Function: x + y = x + √(1 − x²)
with the derivative d(Sum Combining Function)/dx = 1 − x/√(1 − x²)
Product Combining Function: x·y = x·√(1 − x²)
with the derivative d(Product Combining Function)/dx = (1 − 2x²)/√(1 − x²)
Both Combining Functions have maxima at x = y = 0.707
(This result does depend on the weights.)
03/24/14 © 2009 Bahill224
Alternatives on a straight-LineAlternatives on a straight-Line
Assume the alternatives are on the straight-line y = −x + 1
Sum Combining Function: x + y = x − x + 1 = 1
All alternatives are optimal (i.e. selection is not possible)
Product Combining Function: x·y = −x² + x, with
d(Product Combining Function)/dx = −2x + 1
Product Combining Function: maximum at x = 0.5
Sum Combining Function: all alternatives are equally good
Product Combining Function seems better for decision aiding
03/24/14 © 2009 Bahill225
Alternatives on a hyperbolaAlternatives on a hyperbola**
Alternatives on a Hyperbola
Assume the alternatives are on the hyperbola (x + 1)(y + 1) = 2, so y = 2/(x + 1) − 1
Sum Combining Function: x + y = x + 2/(x + 1) − 1, with
d(Sum Combining Function)/dx = 1 − 2/(x + 1)²
Product Combining Function: x·y = 2x/(x + 1) − x, with
d(Product Combining Function)/dx = 2/(x + 1)² − 1
The Product Combining Function peaks at x = y = √2 − 1 ≈ 0.414, while the
Sum Combining Function is largest at the end points (0, 1) and (1, 0)
03/24/14 © 2009 Bahill227
A lively baseball debateA lively baseball debate
• For over 30 years baseball statisticians have argued over
the best measure of offensive effectiveness.
• Two of the most popular measures are
 On-base plus slugging OPS = OBP + SLG
 Batter’s run average BRA = OBP x SLG
• I think their arguments ignored the most relevant data, the
shape of the distribution of OBP and SLG for major league
players.
• If it is circular either will work.
• If it is hyperbolic, do not use the sum.
03/24/14 © 2009 Bahill228
03/24/14 © 2009 Bahill229
Muscle force-velocity relationshipMuscle force-velocity relationship
• (Force + F0 )(velocity + vmax) = constant, where F0 (the isometric force)
and vmax (the maximum muscle velocity) are constants.
• Humans sometimes use one combining function and sometimes they
use another.
• If a bicyclist wants maximum acceleration, he or she uses the point (0,
F0). If there is no resistance and maximum speed is desired, use the
point (vmax, 0). These solutions result from maximizing the sum of
force and velocity.
• However, if there is energy dissipation (e.g., friction, air resistance)
and maximum speed is desired, choose the maximum power point,
the maximum product of force and velocity.
• This shows that the appropriate tradeoff
function may depend on the task at hand.
03/24/14 © 2009 Bahill230
Nonconvex data setsNonconvex data sets
The muscle force-velocity relationship fits neatly onto lines
such as this hyperbola. This will not always be the case. But
when it is not, the data may be bounded by such functions.
In the operations research literature such data sets are
called concave, although the function bounding them is
called convex (Kuhn and Tucker, 1951).
03/24/14 © 2009 Bahill231
Mini-summaryMini-summary
• The Product Combining Function always favors
alternatives with moderate scores for all criteria. It rejects
alternatives with a low score for any criterion.
• Therefore the Product Combining Function may seem
better than the Sum Combining Function. But the Sum
Combining Function is used much more in systems
engineering.
03/24/14 © 2009 Bahill232
Components of a tradeoff studyComponents of a tradeoff study
• Problem statement
• Evaluation criteria
• Weights of importance
• Alternative solutions
• Evaluation data
• Scoring functions
• Scores
 Combining functions
• Preferred alternatives
• Sensitivity analysis
03/24/14 © 2009 Bahill233
Summation is not alwaysSummation is not always
the best way to combine datathe best way to combine data**
03/24/14 © 2009 Bahill234
Popular combining functionsPopular combining functions
• Sum Combining Function = x + y
 Used most often by engineers
• Product Combining Function = x ∗ y
 Cost to benefit ratio
 Risk analyses
 Game theory*
• Sum Minus Product = x + y - xy
 Probability theory
 Fuzzy logic systems
 Expert system certainty factors
• Compromise = (x^p + y^p)^(1/p)
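Here is a minimal Python sketch of these four combining functions, using the "1 can" soda pop row (cost 2.00, quantity 0.35) from the earlier table as a usage check; it is an illustration, not the course's spreadsheet.

def sum_cf(x, y):               return x + y
def product_cf(x, y):           return x * y
def sum_minus_product_cf(x, y): return x + y - x * y
def compromise_cf(x, y, p):     return (x**p + y**p) ** (1.0 / p)

cost, quantity = 2.00, 0.35  # the "1 can" alternative
print(sum_cf(cost, quantity))                # 2.35
print(product_cf(cost, quantity))            # about 0.70
print(sum_minus_product_cf(cost, quantity))  # about 1.65
print(compromise_cf(cost, quantity, p=2))    # about 2.03
print(compromise_cf(cost, quantity, p=10))   # about 2.00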
03/24/14 © 2009 Bahill235
XORXOR**
• The previous combining functions implemented an AND
function of the criteria.
• There is no combining function that implements the
exclusive or (XOR) function, e.g.
• Criterion-1: Fuel consumption in highway driving, miles per
gallon of gasoline. Baseline = 23 mpg.
• Criterion-2: Fuel consumption in highway driving, miles per
gallon of diesel fuel. Baseline = 26 mpg.
• You want to use criterion-1 for alternatives with gasoline
engines and criterion-2 for alternatives with diesel engines.
03/24/14 © 2009 Bahill236
The American public acceptsThe American public accepts
the Sum Combining Functionthe Sum Combining Function
• It is used to rate NFL quarterbacks
• It is used to select the
best college football teams
03/24/14 © 2009 Bahill237
NFL quarterback passer ratingsNFL quarterback passer ratings
BM stands for basic measure
BM1 = (Completed Passes) / (Pass Attempts)
BM2 = (Passing Yards) / (Pass Attempts)
BM3 = (Touchdown Passes) / (Pass Attempts)
BM4 = Interceptions / (Pass Attempts)
Rating = [5(BM1 − 0.3) + 0.25(BM2 − 3) + 20(BM3) + 25(−BM4 + 0.095)] × 100/6
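A short Python sketch of this rating formula follows. The example statistics are approximately Kurt Warner's 1999 season (quoted from memory, so treat them as illustrative), and the sketch, like the slide, ignores the caps the official NFL formula places on each term.

def passer_rating(completions, attempts, yards, touchdowns, interceptions):
    """NFL passer rating as given on the slide (the official formula also caps each term)."""
    bm1 = completions / attempts
    bm2 = yards / attempts
    bm3 = touchdowns / attempts
    bm4 = interceptions / attempts
    return (5*(bm1 - 0.3) + 0.25*(bm2 - 3) + 20*bm3 + 25*(-bm4 + 0.095)) * 100 / 6

# Roughly Kurt Warner, 1999 regular season (illustrative numbers)
print(passer_rating(325, 499, 4353, 41, 13))  # about 109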
03/24/14 © 2009 Bahill238
College football BCSCollege football BCS**
BM1 = Polls: AP media & ESPN coaches
BM2 = Computer Rankings: Seattle Times, NY Times, Jeff
Sagarin, etc.
BM3 = Strength of Schedule
BM4 = Number of Losses
Rating = [BM1 + BM2 + BM3 - BM4]
http://sports.espn.go.com/ncf/abcsports/BCSStandings
www.bcsFootball.org
03/24/14 © 2009 Bahill239
What is the best package of soda pop to buy?What is the best package of soda pop to buy?**
Regular price of Coca-Cola in Tucson, January 1995.
The Cost criterion is the reciprocal of price.
The Performance criterion is the quantity in liters.
Choosing Amongst Alternative Soda Pop Packages
(Price is the raw data; Cost and Quantity are the criteria; the remaining columns are trade-off values)

Item    | Price (dollars) | Cost (1/dollars) | Quantity (liters) | Sum  | Product | Sum Minus Product | Compromise with p=2 | Compromise with p=10
1 can   | 0.50 | 2.00 | 0.35 | 2.35 | 0.70 | 1.65 | 2.03 | 2.00
20 oz   | 0.60 | 1.67 | 0.59 | 2.26 | 0.98 | 1.27 | 1.77 | 1.67
1 liter | 0.79 | 1.27 | 1.00 | 2.27 | 1.27 | 1.00 | 1.62 | 1.27
2 liter | 1.29 | 0.78 | 2.00 | 2.78 | 1.56 | 1.22 | 2.15 | 2.00
6 pack  | 2.29 | 0.44 | 2.13 | 2.57 | 0.94 | 1.63 | 2.17 | 2.13
3 liter | 1.69 | 0.59 | 3.00 | 3.59 | 1.78 | 1.81 | 3.06 | 3.00
12 pack | 3.59 | 0.28 | 4.26 | 4.54 | 1.19 | 3.35 | 4.27 | 4.26
24 pack | 5.19 | 0.19 | 8.52 | 8.71 | 1.62 | 7.09 | 8.52 | 8.52
03/24/14 © 2009 Bahill240
ResultsResults
• The Product Combining Function
suggests that the preferred
package is the three liter bottle
• However, the other combining
functions all recommend the 24
pack
• Plotting these data on Cartesian
coordinates produces a
nonconvex distribution
• The best hyperbolic fit to these
data is (quantity + 0.63)(cost +
0.08) = 2
03/24/14 © 2009 Bahill241
Soda pop dataSoda pop data
[Figure: Cost (1/dollars) plotted against Quantity (liters) for the eight soda pop packages; the distribution is nonconvex]
03/24/14 © 2009 Bahill242
03/24/14 © 2009 Bahill243
Which matchesWhich matches
human decision making?human decision making?
• For a nonconvex distribution, the Sum Combining
Function will favor the points at either end of the
distribution. Sometimes this matches human decision
making.
 I usually buy a case of soda for my family.
 A person working in an office building on a Sunday
afternoon might buy a single can from the vending
machine.
• A frugal person might want to maximize the product of
cost and performance, i.e. the maximum liters/dollar
(the biggest bang for the buck), which is the three liter
bottle. This matches the recommendation of the
Product Combining Function.
03/24/14 © 2009 Bahill244
Which matches humanWhich matches human
decision making?decision making? (cont.)(cont.)
This example shows that for a nonconvex
distribution of alternatives, the choice of the
combining function determines the
preferred alternative.
03/24/14 © 2009 Bahill245
Who was the best NFL quarterback?Who was the best NFL quarterback?
• NFL quarterback passer ratings
• BM1 = (Completed Passes) / (Pass Attempts)
• BM2 = (Passing Yards) / (Pass Attempts)
• BM3 = (Touchdown Passes) / (Pass Attempts)
• BM4 = Interceptions / (Pass Attempts)
• Rating = [5(BM1-0.3) + 0.25(BM2-3) + 20(BM3) +
25(-BM4+0.095)]*100/6
03/24/14 © 2009 Bahill246
The best NFL quarterback for 1999The best NFL quarterback for 1999
http://www.football.espn.go.com/nfl/statistics/
Sum (p=1)       | Product         | Sum Minus Product | Compromise with p=2 | Compromise with p=∞
Kurt Warner     | Kurt Warner     | Kurt Warner       | Kurt Warner         | Kurt Warner
Steve Beuerlein | Jeff George     | Steve Beuerlein   | Steve Beuerlein     | Jeff George
Jeff George     | Steve Beuerlein | Jeff George       | Peyton Manning      | Steve Beuerlein
Peyton Manning  | Peyton Manning  | Peyton Manning    | Jeff George         | Peyton Manning
The best NFL quarterback 1994The best NFL quarterback 1994
03/24/14 © 2009 Bahill247
Sum           | Product       | Sum Minus Product | Compromise with p=∞
Steve Young   | Steve Young   | Steve Bono        | Steve Bono
John Elway    | John Elway    | Bubby Brister     | Steve Young
Dan Marino    | Dan Marino    | Steve Beuerlein   | Bobby Herbert
Bobby Herbert | Bobby Herbert | Jeff George       | Dan Marino
Eric Kramer   | Warren Moon   | Neil O'Donnell    | Eric Kramer
03/24/14 © 2009 Bahill248
A manned mission to MarsA manned mission to Mars11
• The astronauts will grow beans and rice
• Lots of beans and a little rice is just as
good as lots of rice and a few beans
• Both the Sum and the Product Combining
Functions work fine
03/24/14 © 2009 Bahill249
A manned mission to MarsA manned mission to Mars22
• The astronauts need a system that produces
oxygen and water
• The Product Combining Function works fine
• But the Sum Combining Function could
recommend zero water or zero oxygen
03/24/14 © 2009 Bahill250
Implementing the combining functionsImplementing the combining functions
• The Analytic Hierarchy Process (implemented by the
commercial tool Expert Choice) allows the user to
choose between the sum and the product combining
functions.
• You would have to implement the other combining
functions by yourself.
03/24/14 © 2009 Bahill251
TheThe compromise combining function*compromise combining function*
Compromise = (x^p + y^p)^(1/p)
03/24/14 © 2009 Bahill252
When shouldWhen should pp be 1, 2 orbe 1, 2 or ∞∞??
• Use p = 1 if the criteria show perfect
compensation
• Use p = 2 if you want Euclidean distance.
• Use p = ∞ if you are selecting a hero and there
is no compensation
• Compromise = (x^p + y^p)^(1/p)
03/24/14 © 2009 Bahill253
IfIf pp == ∞∞
• The preferred alternative is the one with the largest
criterion
• There is no compensation, because only one criterion (the
largest) is considered
• Compromise Output = (x^p + y^p)^(1/p)
• If p is large and x > y, then x^p >> y^p, so
Compromise Output ≈ (x^p)^(1/p) = x
03/24/14 © 2009 Bahill254
UseUse pp == ∞∞ when selectingwhen selecting
• the greatest athlete of the century using Number of
National Championship Rings* and Peak Salary
• the baseball player of the week using Home Runs and
Pitching Strikeouts
• a movie using Romance, Action and Comedy
03/24/14 © 2009 Bahill255
NBA teams seem to useNBA teams seem to use pp == ∞∞
• When drafting basketball players
• Criteria are Height and Assists
• They want seven-foot players with ten assists per game
(the ideal point)
• In years when there are many point guards but no
centers, they draft the best point guards
• Choose the criterion with the maximum score (Assists)
and then select the alternative whose number of Assists
has the minimum distance to the ideal point
03/24/14 © 2009 Bahill256
UseUse pp == ∞∞ when choosing minimaxwhen choosing minimax
• A water treatment plant to reduce the amount of
mercury, lead and arsenic in the water.
• Trace amounts are not of concern.
• First, find the poison with the maximum concentration,
then choose the alternative with the minimum amount of
that poison.
• Hence the term minimax.
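A minimal minimax-selection sketch in Python, under one common reading of the rule (score each alternative by its worst poison, then pick the alternative whose worst case is smallest); the plant names and concentration numbers are hypothetical, invented only to illustrate the idea.

# Hypothetical residual concentrations (ppb) of each poison for three treatment plants
alternatives = {
    "Plant A": {"mercury": 4, "lead": 9, "arsenic": 2},
    "Plant B": {"mercury": 6, "lead": 5, "arsenic": 6},
    "Plant C": {"mercury": 3, "lead": 8, "arsenic": 7},
}

# Minimax: score each alternative by its worst (maximum) poison, then pick the minimum
worst = {name: max(levels.values()) for name, levels in alternatives.items()}
best = min(worst, key=worst.get)
print(worst)  # {'Plant A': 9, 'Plant B': 6, 'Plant C': 8}
print(best)   # Plant B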
03/24/14 © 2009 Bahill257
Design of a baseball batDesign of a baseball bat
• The ball goes the farthest, if it hits the sweet spot
of the bat
• Error = |sweet spot - hit point|
• Loss = number of feet short of 500
• For an amateur use minimax: minimize the Loss, if
the Error is maximum
• For Alex Rodriguez use minimin
03/24/14 © 2009 Bahill258
The distance the ball travels depends on where the ball hits the bat*
03/24/14 © 2009 Bahill259
UseUse pp == ∞∞ if you are very risk averseif you are very risk averse
• A million dollar house on a river bank: a 100-year flood
would cause $900K damage
• A million dollar house on a mountain top: a violent
thunderstorm would cause $100K damage
• Minimax: choose the worst risk, the 100-year flood, and
choose the alternative that minimizes it: build your house
on the mountain top*
03/24/14 © 2009 Bahill260
UseUse pp = 1 if you are probabilistic= 1 if you are probabilistic**
• Risk equals (probability times severity of a 100 year flood)
plus (probability times severity of a violent thunderstorm)
• Risk(River Bank) = 0.01×0.9 + 0.1×0 = 0.009
• Risk(Mountain Top) = 0.01×0 + 0.1×0.1 = 0.010
• Therefore, build your house on the river bank
03/24/14 © 2009 Bahill261
SynonymsSynonyms
• Combining functions are also called
 objective functions
 optimization functions
 performance indices
• Combining functions may include
probability density functions*
03/24/14 © 2009 Bahill262
Summary about combining functionsSummary about combining functions
• Summation of weighted scores is the most common.
• Product combining function eliminates alternatives with a
zero for any criterion.*
• Compromise function with p=∞ uses only one criterion.
03/24/14 © 2009 Bahill263
Components of a tradeoff studyComponents of a tradeoff study
• Problem statement
• Evaluation criteria
• Weights of importance
• Alternative solutions
• Evaluation data
• Scoring functions
• Scores
• Combining functions
 Preferred alternatives
• Sensitivity analysis
03/24/14 © 2009 Bahill264
Select preferred alternativesSelect preferred alternatives
• Select the preferred alternatives.
• Present the results of the tradeoff study to the original
decision maker and other relevant stakeholders.
• A sensitivity analysis will help validate your study.
03/24/14 © 2009 Bahill265
SynonymsSynonyms
• Preferred alternatives
• Recommended alternatives
• Preferred solutions
03/24/14 © 2009 Bahill266
Components of a tradeoff studyComponents of a tradeoff study
• Problem statement
• Evaluation criteria
• Weights of importance
• Alternative solutions
• Evaluation data
• Scoring functions
• Scores
• Combining functions
• Preferred alternatives
 Sensitivity analysis
03/24/14 © 2009 Bahill267
PurposePurpose
A sensitivity analysis identifies
the most important parameters
in a tradeoff study.
03/24/14 © 2009 Bahill268
Sensitivity analysesSensitivity analyses
• A sensitivity analysis of the tradeoff study is imperative.
• Vary the inputs and parameters and discover which ones
are the most important.
• The Pinewood Derby had 89 criteria. Only three of them
could change the preferred alternative.
03/24/14 © 2009 Bahill269
Sensitivity analysis of Pinewood Derby (simulation data)Sensitivity analysis of Pinewood Derby (simulation data)
03/24/14 © 2009 Bahill270
The Do Nothing alternativesThe Do Nothing alternatives
• The double elimination tournament was the status quo.
• The single elimination tournament was the nihilistic do
nothing alternative.
03/24/14 © 2009 Bahill271
Sensitivity analysis of Pinewood Derby (prototype data)Sensitivity analysis of Pinewood Derby (prototype data)
[Figure: Sensitivity of Pinewood Derby (prototype data) – Overall Score versus Performance Weight for three alternatives: Double elimination; Round robin, best-time; Round robin, points]
03/24/14 © 2009 Bahill272
Semirelative-sensitivity functionsSemirelative-sensitivity functions
The semirelative-sensitivity of the function F to
variations in the parameter α is
S̃ = (∂F/∂α) × α₀
where the partial derivative is evaluated at the nominal
operating point (NOP) and α₀ is the nominal value of α.
03/24/14 © 2009 Bahill273
Tradeoff studyTradeoff study
A Generic Tradeoff Study
Criteria    | Weight of Importance | Alternative 1 | Alternative 2
Criterion 1 | Wt1                  | S11           | S12
Criterion 2 | Wt2                  | S21           | S22
Final Score |                      | F1            | F2

F1 = Wt1 × S11 + Wt2 × S21  and  F2 = Wt1 × S12 + Wt2 × S22

A Numeric Example of a Tradeoff Study
Criteria                  | Weight of Importance | Umpire's Assistant | Seeing Eye Dog
Accuracy                  | 0.75                 | 0.67               | 0.33
Silence of Signaling      | 0.25                 | 0.83               | 0.17
Sum of weight times score |                      | 0.71 (the winner)  | 0.29
03/24/14 © 2009 Bahill274
Which parameters could changeWhich parameters could change
the recommendations?the recommendations?
Use this performance index*
Compute the semirelative-sensitivity functions.
F = F1 − F2 = Wt1 × S11 + Wt2 × S21 − Wt1 × S12 − Wt2 × S22 = 0.42
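Below is a small Python sketch (not the course spreadsheet) that computes this performance index and the semirelative sensitivity of F to each weight and score for the numeric example, using the analytic partial derivatives of the linear expression above.

# Numeric example: Umpire's Assistant versus Seeing Eye Dog
Wt1, Wt2 = 0.75, 0.25
S11, S21 = 0.67, 0.83  # scores of alternative 1
S12, S22 = 0.33, 0.17  # scores of alternative 2

F = Wt1*S11 + Wt2*S21 - Wt1*S12 - Wt2*S22
print(F)  # about 0.42

# Semirelative sensitivity: partial derivative of F times the nominal parameter value
sensitivities = {
    "Wt1": (S11 - S12) * Wt1,  # about 0.26
    "Wt2": (S21 - S22) * Wt2,  # about 0.17
    "S11":  Wt1 * S11,         # about 0.50 (the largest, so S11 matters most)
    "S21":  Wt2 * S21,         # about 0.21
    "S12": -Wt1 * S12,         # about -0.25
    "S22": -Wt2 * S22,         # about -0.04
}
print(sensitivities)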
  • 1. Decision AnalysisDecision Analysis and Tradeoff Studiesand Tradeoff Studies Terry BahillTerry Bahill Systems and Industrial EngineeringSystems and Industrial Engineering University of ArizonaUniversity of Arizona terry@sie.arizona.eduterry@sie.arizona.edu ©, 2000-10, Bahill©, 2000-10, Bahill This file is located inThis file is located in http://www.sie.arizona.edu/sysengr/slides/http://www.sie.arizona.edu/sysengr/slides/
  • 2. 03/24/14 © 2009 Bahill2 AcknowledgementAcknowledgement This research was supported by AFOSR/MURI F49620-03-1-0377.
  • 3. 03/24/14 © 2009 Bahill3 Timing estimate for this course*Timing estimate for this course* • Introduction (10 minutes) • Decision analysis and resolution (49 slides, 40 minutes) • San Diego Airport example (7 slides, 5 minutes) • The tradeoff study process and potential problems (238 slides, 145 minutes) • Summary (6 slides, 10 minutes) • Dog system exercise (140 minutes) • Mathematical summary of tradeoff methods (38 slides, 70 minutes) • Course summary (10 minutes) • Breaks (50 minutes) • Total (480 minutes)
  • 4. 03/24/14 © 2009 Bahill4 OutlineOutline** • This course starts with brief model of human decision making (slides 14-27). Then it presents a crisp description of the tradeoff study processes (Slides 14- 67), which includes a simple example of choosing between two combining methods. • Then it shows a complex, but well-known tradeoff study example that most people will be familiar with: the San Diego airport site selection (Slides 68-75). • Then we go back and examine many difficulties that could arise when designing a tradeoff study; we show many methods that have been used to overcome these potential problems (Slides 76-338). • The course is summarized with slides 339-346. • In the Dog System Exercise, students create their own solutions for a tradeoff study. These exercises will be computer based. The students complete one of the exercise’s eight parts. Then we give them our solutions. They complete another portion and we give them another solution. The computers will be preloaded with all of the problems and solutions. The students will use Excel spreadsheets and a simple program for graphing scoring (utility) functions. • After the exercise there will be a mathematical summary of tradeoff methods. Students who are algebraically challenged may excuse themselves.
  • 5. 03/24/14 © 2009 Bahill5 Course administrationCourse administration • AWO: • Course Name: Decision Making and Tradeoff Studies • Course Number: • Facilities Telephones* Bathrooms Vending Machines Exits ExitExit
  • 6. 03/24/14 © 2009 Bahill6 Course objectivesCourse objectives** • The students should be able to  Understand human decision making  Use many techniques, including tradeoff studies, to help select among alternatives  Decide whether a problem is a good candidate for a tradeoff study  Establish evaluation criteria with weights of importance  Understand scoring (utility) functions  Perform a valid tradeoff study  Fix the do nothing problem  Use several different combining functions  Perform a sensitivity analysis  Be aware of many tradeoff methods  Develop a decision tree
  • 7. 03/24/14 © 2009 Bahill7 Student introductionsStudent introductions •Name •Current program assignment •Related experience
  • 8. Decision AnalysisDecision Analysis and Resolutionand Resolution
  • 9. 03/24/14 © 2009 Bahill9 CMMICMMI • The Capability Maturity Model Integrated (CMMI) is a collection of best practices from diverse engineering companies • Improvements to our organization will come from process improvements, not from people improvements or technology improvements • CMMI provides guidance for improving an organization’s processes • One of the CMMI process areas is Decision Analysis and Resolution (DAR)
  • 10. 03/24/14 © 2009 Bahill10 DARDAR • Programs and Departments select the decision problems that require DAR and incorporate them in their plans (e.g. SEMPs) • DAR is a common process • Common processes are tools that the user gets, tailors and uses • DAR is invoked throughout the whole program lifecycle whenever a critical decision is to be made • DAR is invoked by IPT leads on programs, financial analysts, program core teams, etc. • Invoke the DAR Process in work instructions, in gate reviews, in phase reviews or with other triggers, which can be used anytime in the system life cycle
  • 11. 03/24/14 © 2009 Bahill11 Typical decisionsTypical decisions • Decision problems that may require a formal decision process  Tradeoff studies  Bid/no-bid  Make-reuse-buy  Formal inspection versus checklist inspection  Tool and vendor selection  Cost estimating  Incipient architectural design  Hiring and promotions  Helping your customer to choose a solution
  • 12. 03/24/14 © 2009 Bahill12 It’s not done just onceIt’s not done just once • A tradeoff study is not something that you do once at the beginning of a project. • Throughout a project you are continually making tradeoffs  creating team communication methods  selecting components  choosing implementation techniques  designing test programs  maintaining schedule • Many of these tradeoffs should be formally documented.
  • 13. 03/24/14 © 2009 Bahill13 PurposePurpose** “In all decisions you gain something and lose something. Know what they are and do it deliberately.”
  • 14. 03/24/14 © 2009 Bahill14 Tradeoff StudiesTradeoff Studies
  • 15. 03/24/14 © 2009 Bahill15 A simple tradeoff studyA simple tradeoff study
  • 16. 03/24/14 © 2009 Bahill16 DAR Specific Practice Decide if formal evaluation is needed When to do a tradeoff study Establish Evaluation Criteria What is in a tradeoff study Identify Alternative Solutions Select Evaluation Methods Evaluate Alternatives Select Preferred Solutions CMMI’s DAR processCMMI’s DAR process
  • 17. 03/24/14 © 2009 Bahill17 Tradeoff Study ProcessTradeoff Study Process** These tasks are drawn serially, but they are not performed in a serial manner. Rather, it is an iterative process with many feedback loops, which are not shown. Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods Establish Evaluation Criteria Establish Evaluation Criteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL ∑
  • 18. 03/24/14 © 2009 Bahill18 When creating a processWhen creating a process the most important facets are • illustrating tasks that can be done in parallel • suggesting feedback loops • configuration management • including a process to improve the process
  • 19. 03/24/14 © 2009 Bahill19 Humans make four types of decisions:Humans make four types of decisions: • Allocating resources among competing projects* • Generating plans, schedules and novel ideas • Negotiating agreements • Choosing amongst alternatives  Alternatives can be examined in series or parallel.  When examined in series it is called sequential search  When examined in parallel it is called a tradeoff or a trade study  “Tradeoff studies address a range of problems from selecting high-level system architecture to selecting a specific piece of commercial off the shelf hardware or software. Tradeoff studies are typical outputs of formal evaluation processes.”*
  • 20. 03/24/14 © 2009 Bahill20 HistoryHistory Ben Franklin’s letter* to Joseph Priestly outlined one of the first descriptions of a tradeoff study.
  • 21. 03/24/14 © 2009 Bahill21 Decide if Formal Evaluation is NeededDecide if Formal Evaluation is Needed Decide ifDecide if FormalFormal Evaluation isEvaluation is NeededNeeded Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods Establish Evaluation Criteria Establish Evaluation Criteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL
  • 22. 03/24/14 © 2009 Bahill22 Is formal evaluation needed?Is formal evaluation needed? Companies should have policies for when to do formal decision analysis. Criteria include • When the decision is related to a moderate or high-risk issue • When the decision affects work products under configuration management • When the result of the decision could cause significant schedule delays • When the result of the decision could cause significant cost overruns • On material procurement of the 20 percent of the parts that constitute 80 percent of the total material costs
  • 23. 03/24/14 © 2009 Bahill23 Guidelines for formal evaluationGuidelines for formal evaluation • When the decision is selecting one or a few alternatives from a list • When a decision is related to major changes in work products that have been baselined • When a decision affects the ability to achieve project objectives • When the cost of the formal evaluation is reasonable when compared to the decision’s impact • On design-implementation decisions when technical performance failure may cause a catastrophic failure • On decisions with the potential to significantly reduce design risk, engineering changes, cycle time or production costs
  • 24. 03/24/14 © 2009 Bahill24 Establish Evaluation CriteriaEstablish Evaluation Criteria Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods EstablishEstablish EvaluationEvaluation CriteriaCriteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL
  • 25. 03/24/14 © 2009 Bahill25 Establish evaluation criteriaEstablish evaluation criteria** • Establish and maintain criteria for evaluating alternatives • Each criterion must have a weight of importance • Each criterion should link to a tradeoff requirement, i.e. a requirement whose acceptable value can be more or less depending on quantitative values of other requirements. • Criteria must be arranged hierarchically. The top-level may be performance, cost, schedule and risk.  Program Management should prioritize these four criteria at the beginning of the project and make sure everyone knows the priorities. • All companies should have a repository of generic evaluation criteria.
  • 26. 03/24/14 © 2009 Bahill26 What will you eat for lunch today?What will you eat for lunch today? •In class exercise. •Write some evaluation criteria that will help you decide.*
  • 27. 03/24/14 © 2009 Bahill27 Killer tradesKiller trades •Evaluating alternatives is expensive. •Therefore, early in the tradeoff study, identify very important requirements* that can eliminate many alternatives. •These requirements produce killer criteria.** •Subsequent killer trades can often eliminate 90% of the possible alternatives.
  • 28. 03/24/14 © 2009 Bahill28 Identify Alternative SolutionsIdentify Alternative Solutions Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods Establish Evaluation Criteria Establish Evaluation Criteria IdentifyIdentify AlternativeAlternative SolutionsSolutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL
  • 29. 03/24/14 © 2009 Bahill29 Identify alternative solutionsIdentify alternative solutions • Identify alternative solutions for the problem statement • Consider unusual alternatives in order to test the system requirements* • Do not list alternatives that do not satisfy all mandatory requirements** • Consider use of commercial off the shelf and in- house entities*** • Use killer trades to eliminate thousands of infeasible alternatives
  • 30. 03/24/14 © 2009 Bahill30 What will you eat for lunch today?What will you eat for lunch today? •In class exercise. •List some alternatives for today’s lunch.*
  • 31. 03/24/14 © 2009 Bahill31 Select Evaluation MethodsSelect Evaluation Methods Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement SelectSelect EvaluationEvaluation MethodsMethods Establish Evaluation Criteria Establish Evaluation Criteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL
  • 32. 03/24/14 © 2009 Bahill32 Select evaluation methodsSelect evaluation methods • Select the source of the evaluation data and the method for evaluating the data • Typical sources for evaluation data include approximations, product literature, analysis, models, simulations, experiments and prototypes* • Methods for combining data and evaluating alternatives include Multi-Attribute Utility Technique (MAUT), Ideal Point, Search Beam, Fuzzy Databases, Decision Trees, Expected Utility, Pair-wise Comparisons, Analytic Hierarchy Process (AHP), Financial Analysis, Simulation, Monte Carlo, Linear Programming, Design of Experiments, Group Techniques, Quality Function Deployment (QFD), radar charts, forming a consensus and Tradeoff Studies
  • 33. 03/24/14 © 2009 Bahill33 Collect evaluation dataCollect evaluation data •Using the appropriate source (approximations, product literature, analysis, models, simulations, experiments or prototypes) collect data for evaluating each alternative.
  • 34. 03/24/14 © 2009 Bahill34 Evaluate AlternativesEvaluate Alternatives Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods Establish Evaluation Criteria Establish Evaluation Criteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria EvaluateEvaluate AlternativesAlternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL
  • 35. 03/24/14 © 2009 Bahill35 Evaluate alternativesEvaluate alternatives • Evaluate alternative solutions using the evaluation criteria, weights of importance, evaluation data, scoring functions and combining functions. • Evaluating alternative solutions involves analysis, discussion and review. Iterative cycles of analysis are sometimes necessary. Supporting analyses, experimentation, prototyping, or simulations may be needed to substantiate scoring and conclusions.
  • 36. 03/24/14 © 2009 Bahill36 Select Preferred SolutionsSelect Preferred Solutions Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods Establish Evaluation Criteria Establish Evaluation Criteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives SelectSelect PreferredPreferred SolutionsSolutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review PreferredPreferred SolutionsSolutions Present Results Present Results Put In PPAL Put In PPAL
  • 37. 03/24/14 © 2009 Bahill37 Select preferred solutionsSelect preferred solutions • Select preferred solutions from the alternatives based on evaluation criteria. • Selecting preferred alternatives involves weighing and combining the results from the evaluation of alternatives. Many combining methods are available. • The true value of a formal decision process might not be listing the preferred alternatives. More important outputs are stimulating thought processes and documenting their outcomes. • A sensitivity analysis will help validate your recommendations. • The least sensitive criteria should be given weights of 0.
  • 38. 03/24/14 © 2009 Bahill38 Perform Expert ReviewPerform Expert Review Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods Establish Evaluation Criteria Establish Evaluation Criteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL ∑
  • 39. 03/24/14 © 2009 Bahill39 Perform expert reviewPerform expert review11 • Formal evaluations should be reviewed* at regular gate reviews such as SRR, PDR and CDR or by special expert reviews • Technical reviews started about the same time as Systems Engineering, in 1960. The concept was formalized with MIL-STD-1521 in 1972. • Technical reviews are still around, because there is evidence that they help produce better systems at less cost.
  • 40. 03/24/14 © 2009 Bahill40 Perform expert reviewPerform expert review22 • Technical reviews evaluate the product of an IPT* • They are conducted by a knowledgeable board of specialists including supplier and customer representatives • The number of board members should be less than the number of IPT members • But board expertise should be greater than the IPT’s experience base
  • 41. 03/24/14 © 2009 Bahill41 Who should come to the review?Who should come to the review? • Program Manager • Chief Systems Engineer • Review Inspector • Lead Systems Engineer • Domain Experts • IPT Lead • Facilitator • Stakeholders for this decision  Builder  Customer  Designer  Tester  PC Server • Depending on the decision, the Lead Hardware Engineer and the Lead Software Engineer
  • 42. 03/24/14 © 2009 Bahill42 Present resultsPresent results Present the results* of the formal evaluation to the original decision maker and other relevant stakeholders.
  • 43. 03/24/14 © 2009 Bahill43 Put in the PALPut in the PAL • Formal evaluations reviewed by experts should be put in the organizational Process Asset Library (PAL) or the Project Process Asset Library (PPAL) • Evaluation data for tradeoff studies come from approximations, analysis, models, simulations, experiments and prototypes. Each time better data is obtained the PAL should be updated. • Formal evaluations should be designed with reuse in mind.
  • 44. 03/24/14 © 2009 Bahill44 Closed Book Quiz, 5 minutesClosed Book Quiz, 5 minutes Fill in the empty boxesFill in the empty boxes Problem Statement Problem Statement Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Formal Evaluations Formal Evaluations Preferred Solutions Preferred Solutions∑
  • 45. 03/24/14 © 2009 Bahill45 Tradeoff Study ExampleTradeoff Study Example
  • 46. 03/24/14 © 2009 Bahill46 Example: What method shouldExample: What method should we use for evaluating alternatives?we use for evaluating alternatives?** • Is formal evaluation needed? • Check the Guidance for Formal Evaluations • We find that many of its criteria are satisfied including “On decisions with the potential to significantly reduce design risk … cycle time ...” • Establish evaluation criteria • Ease of Use • Familiarity • Killer criterion • Engineers must think that use of the technique is intuitive.
  • 47. 03/24/14 © 2009 Bahill47 Example (continued)Example (continued)11 • Identify alternative solutions  Linear addition of weight times scores, Multiattribute Utility Theory (MAUT).* This method is often called a “trade study.” It is often implemented with an Excel spreadsheet.  Analytic Hierarchy Process (AHP)**
  • 48. 03/24/14 © 2009 Bahill48 Example (continued)Example (continued)22 • Select evaluation methods  The evaluation data will come from expert opinion  Common methods for combining data and evaluating alternatives include: Multi-Attribute Utility Technique (MAUT), Decision Trees, Analytic Hierarchy Process (AHP), Pair-wise Comparisons, Ideal Point, Search Beam, etc.  In the following slides we will use two methods: linear addition of weight times scores (MAUT) and the Analytic Hierarchy Process (AHP)*
  • 49. 03/24/14 © 2009 Bahill49 Example (continued)Example (continued)33 • Evaluate alternatives  Let the weights and evaluation data be integers between 1 and 10, with 10 being the best. The computer can normalize the weights if necessary.
  • 50. 03/24/14 © 2009 Bahill50 Multi-Attribute Utility Technique (MAUT)Multi-Attribute Utility Technique (MAUT)11 Assess evaluation data* row by row
Criteria | Weight of Importance | MAUT | AHP
Ease of Use | | 8 | 4
Familiarity | | |
Sum of weight times score | | |
  • 51. 03/24/14 © 2009 Bahill51 Multi-Attribute Utility Technique (MAUT)Multi-Attribute Utility Technique (MAUT)22
Criteria | Weight* of Importance | MAUT | AHP
Ease of Use | 9 | 8 | 4
Familiarity | 3 | 9 | 2
Sum of weight times score | | 99 (the winner) | 42
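The arithmetic behind this table is a plain weighted sum. Below is a minimal Python sketch of that combination; the dictionaries simply restate the weights and scores from the table, and the code itself is illustrative rather than any official tool.

```python
# Linear weighted-sum (MAUT-style) combination of the table above.
weights = {"Ease of Use": 9, "Familiarity": 3}
scores = {
    "MAUT": {"Ease of Use": 8, "Familiarity": 9},
    "AHP":  {"Ease of Use": 4, "Familiarity": 2},
}

# Sum of weight times score for each alternative.
totals = {alt: sum(weights[c] * s[c] for c in weights) for alt, s in scores.items()}

print(totals)                        # {'MAUT': 99, 'AHP': 42}
print(max(totals, key=totals.get))   # MAUT, the winner
```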
  • 52. 03/24/14 © 2009 Bahill52 Analytic Hierarchy Process (AHP)Analytic Hierarchy Process (AHP)
Verbal scale | Numerical value
Equally important, likely or preferred | 1
Moderately more important, likely or preferred | 3
Strongly more important, likely or preferred | 5
Very strongly more important, likely or preferred | 7
Extremely more important, likely or preferred | 9
  • 53. 03/24/14 © 2009 Bahill53 AHP, make comparisonsAHP, make comparisons Create a matrix with the criteria on the diagonal and make pair-wise comparisons*
Ease of Use | 3 (Ease of Use is moderately more important than Familiarity)
1/3 (the reciprocal of 3) | Familiarity
  • 54. 03/24/14 © 2009 Bahill54 AHP, compute weightsAHP, compute weights • Create a matrix • Square the matrix • Add the rows • Normalize*
$$\begin{bmatrix} 1 & 3 \\ 1/3 & 1 \end{bmatrix}^2 = \begin{bmatrix} 2 & 6 \\ 2/3 & 2 \end{bmatrix} \Rightarrow \begin{bmatrix} 8 \\ 2.67 \end{bmatrix} \Rightarrow \begin{bmatrix} 0.75 \\ 0.25 \end{bmatrix}$$
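A small sketch of this square-add-normalize approximation, using NumPy and the 2x2 comparison matrix above, is shown below. Repeated squaring approximates the principal eigenvector; the function name and iteration count are illustrative assumptions, not part of the course material.

```python
import numpy as np

def ahp_weights(A, squarings=4):
    """Approximate the AHP priority vector: square the matrix repeatedly,
    add the rows, then normalize the row sums."""
    M = A.astype(float)
    for _ in range(squarings):
        M = M @ M                        # square the matrix
    row_sums = M.sum(axis=1)             # add the rows
    return row_sums / row_sums.sum()     # normalize

A = np.array([[1.0, 3.0],
              [1/3, 1.0]])               # Ease of Use vs Familiarity comparison
print(ahp_weights(A))                    # approximately [0.75, 0.25]
```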
  • 55. 03/24/14 © 2009 Bahill55 In-class exerciseIn-class exercise • Use these criteria to help select your lunch today. Closeness, distance to the venue. Is it in the same building, the next building or do you have to get in a car and drive? Tastiness, including gustatory delightfulness, healthiness, novelty and savoriness. Price,* total purchase price including tax and tip.
  • 56. 03/24/14 © 2009 Bahill56 To help select lunch todayTo help select lunch today11 • closeness is ??? more important than tastiness, • closeness is ??? more important than price, • tastiness is ??? more important than price.
 | Closeness | Tastiness | Price
Closeness | | |
Tastiness | | |
Price | | |
  • 57. 03/24/14 © 2009 Bahill57 To help select lunch todayTo help select lunch today22 • closeness is strongly more important (5) than tastiness, • closeness is very strongly more important (7) than price, • tastiness is moderately more important (3) than price.
 | Closeness | Tastiness | Price
Closeness | 1 | 5 | 7
Tastiness | | 1 | 3
Price | | | 1
  • 58. 03/24/14 © 2009 Bahill58 To help select lunch todayTo help select lunch today33
$$\begin{bmatrix} 1 & 5 & 7 \\ 1/5 & 1 & 3 \\ 1/7 & 1/3 & 1 \end{bmatrix}^2 = \begin{bmatrix} 3 & 12.3 & 29 \\ 0.8 & 3 & 7.4 \\ 0.4 & 1.4 & 3 \end{bmatrix} \Rightarrow \begin{bmatrix} 44.3 \\ 11.2 \\ 4.8 \end{bmatrix} \Rightarrow \begin{bmatrix} 0.73 \\ 0.19 \\ 0.08 \end{bmatrix}$$
 | Closeness | Tastiness | Price | Weight of Importance
Closeness | 1 | 5 | 7 | 0.73
Tastiness | 1/5 | 1 | 3 | 0.19
Price | 1/7 | 1/3 | 1 | 0.08
  • 59. 03/24/14 © 2009 Bahill59 AHP, get scoresAHP, get scores Compare each alternative on the first criterion, Ease of Use: MAUT is slightly preferred (2) over AHP, so the reciprocal entry is 1/2.
$$\begin{bmatrix} 1 & 2 \\ 1/2 & 1 \end{bmatrix}^2 = \begin{bmatrix} 2 & 4 \\ 1 & 2 \end{bmatrix} \Rightarrow \begin{bmatrix} 6 \\ 3 \end{bmatrix} \Rightarrow \begin{bmatrix} 0.67 \\ 0.33 \end{bmatrix}$$
  • 60. 03/24/14 © 2009 Bahill60 AHP, get scoresAHP, get scores22 Compare each alternative on the second criterion, Familiarity: MAUT is strongly preferred (5) over AHP, so the reciprocal entry is 1/5.
$$\begin{bmatrix} 1 & 5 \\ 1/5 & 1 \end{bmatrix}^2 = \begin{bmatrix} 2 & 10 \\ 0.4 & 2 \end{bmatrix} \Rightarrow \begin{bmatrix} 12 \\ 2.4 \end{bmatrix} \Rightarrow \begin{bmatrix} 0.83 \\ 0.17 \end{bmatrix}$$
  • 61. 03/24/14 © 2009 Bahill61 AHP, form comparison matrixAHP, form comparison matrix**** Combine with linear addition*
Criteria | Weight of Importance | MAUT | AHP
Ease of Use | 0.75 | 0.67 | 0.33
Familiarity | 0.25 | 0.83 | 0.17
Sum of weight times score | | 0.71 (the winner) | 0.29
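Written out, the bottom row is just the linear weighted sum of the AHP-derived weights and scores:

$$\text{MAUT: } 0.75(0.67) + 0.25(0.83) \approx 0.71 \qquad \text{AHP: } 0.75(0.33) + 0.25(0.17) \approx 0.29$$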
  • 62. 03/24/14 © 2009 Bahill62 Example (continued)Example (continued)44 • Select Preferred Solutions  Linear addition of weight times scores (MAUT) was the preferred alternative  Now consider new criteria, such as Repeatability of Result, Consistency*, Time to Compute  Do a sensitivity analysis
  • 63. 03/24/14 © 2009 Bahill63 Sensitivity analysis, simpleSensitivity analysis, simple In terms of Familiarity, MAUT was strongly preferred (5) over the AHP. Now change this 5 to a 3 and to a 7. • Changing the scores for Familiarity does not change the recommended alternative. • This is good. • It means the Tradeoff study is robust with respect to these scores.
Familiarity comparison | MAUT final score | AHP final score
3 | 0.69 | 0.31
5 | 0.71 | 0.29
7 | 0.72 | 0.28
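This simple sensitivity analysis is easy to automate. The sketch below regenerates the table by re-deriving the Familiarity scores for comparison values of 3, 5 and 7; the helper function is the same square-add-normalize approximation used earlier and is only illustrative.

```python
import numpy as np

def ahp_2x2(a):
    """Priority vector of the 2x2 comparison matrix [[1, a], [1/a, 1]]."""
    M = np.array([[1.0, a], [1.0 / a, 1.0]])
    M = M @ M                   # square the matrix
    w = M.sum(axis=1)           # add the rows
    return w / w.sum()          # normalize

weights     = ahp_2x2(3.0)      # Ease of Use vs Familiarity
ease_scores = ahp_2x2(2.0)      # MAUT slightly preferred on Ease of Use

for fam in (3.0, 5.0, 7.0):     # vary the Familiarity preference for MAUT
    fam_scores = ahp_2x2(fam)
    maut = weights[0] * ease_scores[0] + weights[1] * fam_scores[0]
    ahp  = weights[0] * ease_scores[1] + weights[1] * fam_scores[1]
    print(f"{fam:.0f}: MAUT {maut:.2f}, AHP {ahp:.2f}")
# Reproduces the table above: 0.69/0.31, 0.71/0.29 and 0.72/0.28.
```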
  • 64. 03/24/14 © 2009 Bahill64 Sensitivity analysis, analyticSensitivity analysis, analytic Compute the six semirelative-sensitivity functions, which are defined as
$$\tilde{S}^{F}_{\beta} = \left. \frac{\partial F}{\partial \beta}\, \beta \right|_{NOP}$$
which reads, the semirelative-sensitivity function of the performance index F with respect to the parameter β is the partial derivative of F with respect to β times β, with everything evaluated at the normal operating point (NOP).
  • 65. 03/24/14 © 2009 Bahill65 Sensitivity analysisSensitivity analysis22 For the performance index use the alternative rating for MAUT minus the alternative rating for AHP*
F = F1 − F2 = Wt1×S11 + Wt2×S21 − Wt1×S12 − Wt2×S22
Criteria | Weight of Importance | MAUT | AHP
Ease of Use | Wt1 | S11 | S12
Familiarity | Wt2 | S21 | S22
Sum of weight times score | | F1 | F2
  • 66. 03/24/14 © 2009 Bahill66 Sensitivity analysisSensitivity analysis33 The semirelative-sensitivity functions*
$$\tilde{S}^{F}_{Wt_1} = (S_{11} - S_{12})\,Wt_1 = 0.26$$
$$\tilde{S}^{F}_{Wt_2} = (S_{21} - S_{22})\,Wt_2 = 0.16$$
$$\tilde{S}^{F}_{S_{11}} = Wt_1 S_{11} = 0.50$$
$$\tilde{S}^{F}_{S_{21}} = Wt_2 S_{21} = 0.21$$
$$\tilde{S}^{F}_{S_{12}} = -Wt_1 S_{12} = -0.25$$
$$\tilde{S}^{F}_{S_{22}} = -Wt_2 S_{22} = -0.04$$
S11 is the most important parameter. So go back and reevaluate it.
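Those six values follow directly from the partial derivatives of F. A short sketch with the nominal weights and scores from this example:

```python
# Semirelative sensitivities of F = Wt1*S11 + Wt2*S21 - Wt1*S12 - Wt2*S22,
# evaluated at the nominal operating point of this example.
Wt1, Wt2 = 0.75, 0.25        # criterion weights
S11, S21 = 0.67, 0.83        # scores for MAUT (Ease of Use, Familiarity)
S12, S22 = 0.33, 0.17        # scores for AHP

sensitivities = {
    "Wt1": (S11 - S12) * Wt1,   # ~ 0.26
    "Wt2": (S21 - S22) * Wt2,   # ~ 0.16
    "S11": Wt1 * S11,           # ~ 0.50  <- largest magnitude
    "S21": Wt2 * S21,           # ~ 0.21
    "S12": -Wt1 * S12,          # ~ -0.25
    "S22": -Wt2 * S22,          # ~ -0.04
}
most_important = max(sensitivities, key=lambda k: abs(sensitivities[k]))
print(most_important)           # S11, so re-examine that score first
```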
  • 67. 03/24/14 © 2009 Bahill67 Sensitivity analysisSensitivity analysis44 • The most important parameter is the score for MAUT on the criterion Ease of Use • We should go back and re-evaluate the derivation of that score Ease of Use MAUT In terms of Ease of Use, MAUT is slightly preferred (2) 1/2 AHP
  • 68. 03/24/14 © 2009 Bahill68
  • 69. 03/24/14 © 2009 Bahill69 Example (continued)Example (continued)55 • Perform expert review of the tradeoff study. • Present results to original decision maker. • Put tradeoff study in PAL. • Improve the DAR process.  Add some other techniques, such as AHP, to the DAR web course  Fix the utility curves document  Add image theory to the DAR process  Change linkages in the documentation system  Create a course, Decision Making and Tradeoff Studies
  • 70. 03/24/14 © 2009 Bahill70 Quintessential exampleQuintessential example A Tradeoff Study of Tradeoff Study Tools is available at http://www.sie.arizona.edu/sysengr/sie554/tradeoffStudyOfTradeoffStudyTools.doc
  • 71. San Diego CountySan Diego County Regional AirportRegional Airport Tradeoff StudyTradeoff Study This tradeoff study has cost $17 million.This tradeoff study has cost $17 million. http://www.san.org/authority/assp/index.asp http://www.san.org/airport_authority/archives/index.asp#master_plan
  • 72. 03/24/14 © 2009 Bahill72 The evaluation criteria treeThe evaluation criteria tree**
Operational Requirements
 - Optimal Airport Layout
   - Runway Alignment
   - Terrain, weather and existing land uses
   - Wildlife Hazards
 - Joint Use and National Defense Compatibility
 - Expandability
Ground Access
 - Travel Time, percentage of population in three travel time segments
 - Roadway Network Capacity, existing and projected daily roadway volumes
 - Highway and Transit Accessibility, distance to existing and planned freeways
Environmental Impacts
 - Quantity of residential land to be displaced by the airport development
 - Noise Impact, population within each of three specific decibel ranges
 - Biological Resources: wetlands, protected species
 - Water quality
 - Significant cultural resources
Site Development Evaluations
  • 73. 03/24/14 © 2009 Bahill73 Top-level criteriaTop-level criteria 1. Operational Requirements 2. Ground Access 3. Environmental Impacts 4. Site Development Evaluations These four evaluation criteria are then decomposed into a hierarchy
  • 74. 03/24/14 © 2009 Bahill74 Operational RequirementsOperational Requirements Optimal Airport Layout Runway Alignment Terrain, weather and existing land uses Wildlife Hazards Joint Use and National Defense Compatibility Expandability
  • 75. 03/24/14 © 2009 Bahill75 Ground AccessGround Access • Travel Time, percentage of population in three travel time segments • Roadway Network Capacity, existing and projected daily roadway volumes • Highway and Transit Accessibility, distance to existing and planned freeways
  • 76. 03/24/14 © 2009 Bahill76 Environmental ImpactsEnvironmental Impacts • Quantity of residential land to be displaced by the airport development • Noise Impact, population within each of three specific decibel ranges • Biological Resources  Wetlands  Protected species • Water quality • Significant cultural resources
  • 77. 03/24/14 © 2009 Bahill77 Alternative LocationsAlternative Locations • Miramar Marine Corps Air Station • East Miramar • North Island Naval Air Station • March Air Force Base • Marine Corps Base Camp Pendleton • Imperial County desert site • Campo and Borrego Springs • Lindberg Field • Off-Shore floating airport • Corte Madera Valley
  • 78. 03/24/14 © 2009 Bahill78
  • 79. Tradeoff Studies:Tradeoff Studies: the Process and Potentialthe Process and Potential ProblemsProblems**
  • 80. 03/24/14 © 2009 Bahill80 Outline of this sectionOutline of this section • Problem statement • Models of human decision making • Components of a tradeoff study  Problem statement  Evaluation criteria  Weights of importance  Alternative solutions  The do nothing alternative  Different distributions of alternatives  Evaluation data  Scoring functions  Scores  Combining functions  Preferred alternatives  Sensitivity analysis • Other tradeoff techniques  The ideal point  The search beam  Fuzzy sets  Decision trees • The wrong answer • Tradeoff study on tradeoff study tools • Summary
  • 81. 03/24/14 © 2009 Bahill81 ReferenceReference J. Daniels, P. W. Werner and A. T. Bahill, Quantitative Methods for Tradeoff Analyses, Systems Engineering, 4(3), 199-212, 2001.
  • 82. 03/24/14 © 2009 Bahill82 PurposePurpose The systems engineer’s job is to elucidate domain knowledge and capture the values and preferences of the decision maker, so that the decision maker (and other stakeholders) will have confidence in the decision. The decision maker balances effort with confidence*
  • 83. 03/24/14 © 2009 Bahill83
  • 84. 03/24/14 © 2009 Bahill84 Tradeoff studiesTradeoff studies • Humans exhibit four types of decision making activities 1. Allocating resources among competing projects 2. Making plans, which includes scheduling 3. Negotiating agreements 4. Choosing alternatives from a list  Series  Parallel, a tradeoff study 
  • 85. 03/24/14 © 2009 Bahill85 A typical tradeoff study matrix
Criteria | Qualitative weight | Normalized weight | Scoring function | Alternative-A input value | Output score | Score times weight | Alternative-B input value | Output score | Score times weight
Criterion-1 | 1 to 10 | 0 to 1 | Type and parameters | Natural units | 0 to 1 | 0 to 1 | Natural units | 0 to 1 | 0 to 1
Criterion-2 | 1 to 10 | 0 to 1 | Type and parameters | Natural units | 0 to 1 | 0 to 1 | Natural units | 0 to 1 | 0 to 1
Sum | | | | | | 0 to 1 | | | 0 to 1
  • 86. 03/24/14 © 2009 Bahill86 Pinewood Derby*
  • 87. 03/24/14 © 2009 Bahill87 Part of a Pinewood Derby tradeoff studyPart of a Pinewood Derby tradeoff study Performance figures of merit evaluated on a prototype for a Round Robin with Best Time Scoring
Evaluation criteria | Input value | Score | Weight | Score times weight
1. Average Races per Car | 6 | 0.94 | 0.20 | 0.19
2. Number of Ties | 0 | 1 | 0.20 | 0.20
3. Happiness | | 0.87 | 0.60 | 0.52
Sub-criteria of Happiness | Qualitative weight | Normalized weight | Input value | Scoring function | Output score | Score times weight
3.1 Percent Happy Scouts | 10 | 0.50 | 96 | | 0.98 | 0.49
3.2 Number of Irate Parents | 5 | 0.25 | 1 | | 0.50 | 0.13
3.3 Number of Lane Repeats | 5 | 0.25 | 0 | | 1.00 | 0.25
Sum | 0.87 (Happiness) | 0.91 (overall)
http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf
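To see how the hierarchy rolls up, the Happiness score is the weighted sum of its three sub-criteria, and the overall figure is the weighted sum of the three top-level rows:

$$\text{Happiness} = 0.50(0.98) + 0.25(0.50) + 0.25(1.00) \approx 0.87$$
$$\text{Overall} = 0.20(0.94) + 0.20(1.00) + 0.60(0.87) \approx 0.91$$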
  • 88. 03/24/14 © 2009 Bahill88 When do people do tradeoff studies?When do people do tradeoff studies? • Buying a car • Buying a house • Selecting a job • These decisions are important, you have lots of time to make them, and the alternatives are apparent.* • We would not use a tradeoff study to select a drink for lunch or to select a husband or wife. • You would also do a tradeoff study when your boss asks you to do one.
  • 89. 03/24/14 © 2009 Bahill89 Do the tradeoff studies upfrontDo the tradeoff studies upfront before all of the costs are locked inbefore all of the costs are locked in**
  • 90. 03/24/14 © 2009 Bahill90 Why discuss this topic?Why discuss this topic? • Many multicriterion decision-making techniques exist, but few decision-makers use them. • Perhaps, because  They seem complicated  Different techniques have given different preferred alternatives  Different life experiences give different preferred alternatives  People don’t think that way*
  • 91. 03/24/14 © 2009 Bahill91 Models of Human Decision MakingModels of Human Decision Making
  • 92. 03/24/14 © 2009 Bahill92 Series versus parallelSeries versus parallel11 • Looking at alternatives in parallel is not an innate human action. • Usually people select one hypothesis and work on it until it is disproved, then they switch to a new alternative: that’s the scientific method. • Such serial processing of alternatives has been demonstrated for  Fire fighters  Airline pilots  Physicians  Detectives  Baseball managers  People looking for restaurants*
  • 93. 03/24/14 © 2009 Bahill93 Series versus parallelSeries versus parallel22 • V. V. Krishnan has a model of animals searching for habitat (home, breeding area, hunting area, etc.) • It uses the value of each habitat and the cost of moving between sites. • When travel between sites is inexpensive, e. g. birds or honeybees* searching for a nest site, the search is often a tradeoff study comparing alternatives in parallel. • When travel is expensive, e.g. beavers searching for a dam site, the search is usually sequential.
  • 94. 03/24/14 © 2009 Bahill94 Series versus parallelSeries versus parallel33 ** • If a person is looking for a new car, he or she might perform a tradeoff study. • Whereas a person looking for a used car might use a sequential search, because the availability of cars would change day by day.
  • 95. 03/24/14 © 2009 Bahill95 The need for changeThe need for change** •People do not make good decisions. •A careful tradeoff study will help you overcome human ineptitude and thereby make better decisions.
  • 96. 03/24/14 © 2009 Bahill96 Rational decisionsRational decisions** • One goal • Perfect information • The optimal course of action can be described • This course maximizes expected value • This is a prescriptive model. We tell people that, in an ideal world, this is how they should make decisions.
  • 97. 03/24/14 © 2009 Bahill97 SatisficingSatisficing** • When making decisions there is always uncertainty, too little time and insufficient resources to explore the whole problem space. • Therefore, people cannot make rational decisions. • The term satisficing was coined by Nobel Laureate Herb Simon in 1955. • Simon proposed that people do not attempt to find an optimal solution. Instead, they search for alternatives that are good enough, alternatives that satisfice.
  • 98. 03/24/14 © 2009 Bahill98
  • 99. 03/24/14 © 2009 Bahill99 Humans are not rationalHumans are not rational** 11 • Mark Twain said,  “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” • Humans are often very certain of knowledge that is false.  What American city is directly north of Santiago Chile?  If you travel from Los Angeles to Reno Nevada, in what direction would you travel? • Most humans think that there are more words that start with the letter r, than there are with r as the third letter.
  • 100. 03/24/14 © 2009 Bahill100 IllusionsIllusions** • We call these cognitive illusions. • We believe them with as much certainty as we believe optical illusions.
  • 101. 03/24/14 © 2009 Bahill101 The Müller-Lyer IllusionThe Müller-Lyer Illusion**
  • 102. 03/24/14 © 2009 Bahill102
  • 103. 03/24/14 © 2009 Bahill103
  • 104. 03/24/14 © 2009 Bahill104 Humans judge probabilities poorlyHumans judge probabilities poorly**
  • 105. 03/24/14 © 2009 Bahill105 Monty Hall ParadoxMonty Hall Paradox11 **
  • 106. 03/24/14 © 2009 Bahill106 Monty Hall ParadoxMonty Hall Paradox22 **
  • 107. 03/24/14 © 2009 Bahill107 Monty Hall ParadoxMonty Hall Paradox33 **
  • 108. 03/24/14 © 2009 Bahill108 Monty Hall ParadoxMonty Hall Paradox44 **
  • 109. 03/24/14 © 2009 Bahill109 Monty Hall ParadoxMonty Hall Paradox55 ** • Now here is your problem. • Are you better off sticking to your original choice or switching? • A lot of people say it makes no difference. • There are two boxes and one contains a ten- dollar bill. • Therefore, your chances of winning are 50/50. • However, the laws of probability say that you should switch.
  • 110. Monty Hall knew which door had the donkeyMonty Hall knew which door had the donkey 03/24/14 © 2009 Bahill110
  • 111. 03/24/14 © 2009 Bahill111 Monty Hall ParadoxMonty Hall Paradox66 ** • The box you originally chose has, and always will have, a one-third probability of containing the ten-dollar bill. • The other two, combined, have a two-thirds probability of containing the ten-dollar bill. • But at the moment when I open the empty box, then the other one alone will have a two-thirds probability of containing the ten-dollar bill. • Therefore, your best strategy is to always switch!
  • 112. 03/24/14 © 2009 Bahill112 UtilityUtility • We have just discussed the right column, subjective probability. • Now we will discuss the bottom row, utility
  • 113. 03/24/14 © 2009 Bahill113 UtilityUtility • Utility is a measure of the happiness, satisfaction or reward a person gains (or loses) from receiving a good or service. • Utilities are numbers that express relative preferences using a particular set of assumptions and methods. • Utilities include both subjectively judged value and the assessor's attitude toward risk.
  • 114. 03/24/14 © 2009 Bahill114 RiskRisk • Systems engineers use risk to evaluate and manage bad things that could happen, hazards. Risk is measured with the frequency (or probability) of occurrence times the severity of the consequences. • However, in economics and in the psychology of decision making, risk is defined as the variance of the expected value, uncertainty.*
Gamble | p1 | x1 | p2 | x2 | μ | σ² | Risk (uncertainty)
A | 1.0 | $10 | | | $10 | $0 | none
B | 0.5 | $5 | 0.5 | $15 | $10 | $25 | medium
C | 0.5 | $1 | 0.5 | $19 | $10 | $81 | high
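The μ and σ² columns come from the usual expected-value and variance formulas; for gambles B and C:

$$\mu_B = 0.5(5) + 0.5(15) = 10, \qquad \sigma^2_B = 0.5(5-10)^2 + 0.5(15-10)^2 = 25$$
$$\mu_C = 0.5(1) + 0.5(19) = 10, \qquad \sigma^2_C = 0.5(1-10)^2 + 0.5(19-10)^2 = 81$$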
  • 115. 03/24/14 © 2009 Bahill115 Ambiguity, uncertainty and hazards*Ambiguity, uncertainty and hazards* • Hazard: Would you prefer my forest picked mushrooms or portabella mushrooms from the grocery store? • Uncertainty: Would you prefer one of my wines or a Kendall-Jackson Napa Valley merlot? • Ambiguity: Would you prefer my saffron and oyster sauce or marinara sauce?
  • 116. 03/24/14 © 2009 Bahill116 Gains and losses are not valued equallyGains and losses are not valued equally**
  • 117. 03/24/14 © 2009 Bahill117 Humans are not rationalHumans are not rational22 • Even if they had the knowledge and resources, people would not make rational decisions, because they do not evaluate utility rationally. • Most people would be more concerned with a large potential loss than with a large potential gain. Losses are felt more strongly than equal gains. • Which of these wagers would you prefer to take?* $2 with probability of 0.5 and $0 with probability 0.5 $1 with probability of 0.99 and $1,000,000 with probability 0.00000001 $3 with probability of 0.999999 and -$1,999,997 with probability 0.000001
  • 118. 03/24/14 © 2009 Bahill118 Humans are not rationalHumans are not rational33 $2 with probability of 0.5 or $0 with probability 0.5 $0
  • 119. 03/24/14 © 2009 Bahill119 Humans are not rationalHumans are not rational44 $1 with probability of 0.99 $1,000,000 with probability 0.00000001
  • 120. 03/24/14 © 2009 Bahill120 Humans are not rationalHumans are not rational55 You owe me two million dollars! $3 with probability of 0.999999 -$1,999,997 with probability 0.000001
  • 121. 03/24/14 © 2009 Bahill121 Humans are not rationalHumans are not rational66 • Which of these wagers would you prefer to take? $2 with probability of 0.5 or $0 with probability 0.5 $1 with probability of 0.99 or $1,000,000 with probability 0.00000001 $3 with probability of 0.999999 or -$1,999,997 with probability 0.000001 • Most engineers prefer the $2 bet • Very few people choose the $3 bet • All three have an expected value of $1
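The claim that all three wagers have the same expected value is quick to verify:

$$E_1 = 0.5(2) + 0.5(0) = 1$$
$$E_2 = 0.99(1) + 10^{-8}(10^{6}) = 0.99 + 0.01 = 1$$
$$E_3 = 0.999999(3) + 0.000001(-1{,}999{,}997) = 2.999997 - 1.999997 = 1$$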
  • 122. 03/24/14 © 2009 Bahill122 Subjective expected utilitySubjective expected utility combines two subjective concepts: utility and probability. • Utility is a measure of the happiness or satisfaction a person gains from receiving a good or service. • Subjective probability is the person’s assessment of the frequency or likelihood of the event occurring. • The subjective expected utility is the product of the utility times the probability.
  • 123. 03/24/14 © 2009 Bahill123 Subjective expected utility theorySubjective expected utility theory models human decision making as maximizing subjective expected utility  maximizing, because people choose the set of alternatives with the highest total utility,  subjective, because the choice depends on the decision maker’s values and preferences, not on reality (e.g. advertising improves subjective perceptions of a product without improving the product), and  expected, because the expected value is used. • This is a first-order model for human decision making. • Sometimes it is called Prospect Theory*.
  • 124. 03/24/14 © 2009 Bahill124
  • 125. 03/24/14 © 2009 Bahill125 Why teach tradeoff studies?Why teach tradeoff studies? • Because emotions, cognitive illusions, biases, fallacies, fear of regret and use of heuristics make humans far from ideal decision makers. • Using tradeoff studies judiciously can help you make rational decisions. • We would like to help you move your decisions from the normal human decision-making lower- right quadrant to the ideal decision-making upper-left quadrant.
  • 126. 03/24/14 © 2009 Bahill126 Components of a tradeoff studyComponents of a tradeoff study  Problem statement • Evaluation criteria • Weights of importance • Alternative solutions • Evaluation data • Scoring functions • Normalized scores • Combining functions • Preferred alternatives • Sensitivity analysis
  • 127. 03/24/14 © 2009 Bahill127 Problem statementProblem statement • Stating the problem properly is one of the systems engineer’s most important tasks, because an elegant solution to the wrong problem is less than worthless. • Problem stating is more important than problem solving. • The problem statement  describes the customer’s needs,  states the goals of the project,  delineates the scope of the problem,  reports the concept of operations,  describes the stakeholders,  lists the deliverables and  presents the key decisions that must be made.
  • 128. 03/24/14 © 2009 Bahill128 Components of a tradeoff studyComponents of a tradeoff study • Problem statement Evaluation criteria • Weights of importance • Alternative solutions • Evaluation data • Scoring functions • Scores • Combining functions • Preferred alternatives • Sensitivity analysis
  • 129. 03/24/14 © 2009 Bahill129 Evaluation criteriaEvaluation criteria • are derived from high priority tradeoff requirements. • should be independent, but show compensation. • Each alternative will be given a value that indicates the degree to which it satisfies each criterion. This should help distinguish between alternatives. • Evaluation criteria might be things like performance, cost, schedule, risk, security, reliability and maintainability.
  • 130. 03/24/14 © 2009 Bahill130 Evaluation criterion templateEvaluation criterion template • Name of criterion • Description • Weight of importance (priority) • Basic measure • Units • Measurement method • Input (with expected values or the domain) • Output • Scoring function (type and parameters) • Traces to (requirement or document)
  • 131. 03/24/14 © 2009 Bahill131 Example criterion packageExample criterion package11 • Name of criterion: Percent Happy Scouts • Description: The percentage of scouts that leave the race with a generally happy feeling. This criterion was suggested by Sales and Marketing and the Customer. • Weight of importance: 10 • Basic measure:* Percentage of scouts who leave the event looking happy, contented or pleased • Units: percentage • Measurement method: Estimate by the Pinewood Derby Marshall • Input: The domain is 0 to 100%. The expected values are 70 to 100%.
  • 132. 03/24/14 © 2009 Bahill132 Example criterion pacExample criterion packkageage22 • Output: 0 to 1 • Scoring function:* Monotonic increasing with lower threshold of 0, baseline of 90, baseline slope of 0.1 and upper threshold of 100.
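The course plots scoring functions with a dedicated program; the sketch below is only an illustrative stand-in. It assumes a logistic shape for a monotonic increasing scoring function with a lower threshold, a baseline where the score is 0.5, a baseline slope, and an upper threshold; it is not the exact Wymorian standard scoring function used in the course.

```python
import math

def scoring_function(x, lower=0.0, baseline=90.0, slope=0.1, upper=100.0):
    """Illustrative monotonic-increasing scoring function mapping an input
    in natural units to a score between 0 and 1.  Assumed logistic shape:
    score = 0.5 at the baseline, with the stated slope there."""
    if x <= lower:
        return 0.0
    if x >= upper:
        return 1.0
    return 1.0 / (1.0 + math.exp(-4.0 * slope * (x - baseline)))

# 96% happy scouts scores about 0.92 with this illustrative shape
# (the course's actual scoring function gives 0.98 for the same input).
print(round(scoring_function(96.0), 2))
```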
  • 133. 03/24/14 © 2009 Bahill133 Second example criterion packageSecond example criterion package11 ** • Name of criterion: Total Event Time • Description: The total event time will be calculated by subtracting the start time from the end time. • Weight of importance: 8 • Basic measure: Duration of the derby from start to finish. • Units: Hours • Measurement method: Observation, recording and calculation by the Pinewood Derby Marshall. • Input: The domain is 0 to 8 hours. The expected values are 1 to 6 hours.
  • 134. 03/24/14 © 2009 Bahill134 Second example criterion pacSecond example criterion packkageage22 • Output: 0 to 1 • Scoring function: Biphasic hill shape with lower threshold of 0, lower baseline of 2, lower baseline slope of 0.67, optimum of 3.5, upper baseline of 4.5, upper baseline slope of -1 and upper threshold of 8.
  • 135. 03/24/14 © 2009 Bahill135 Verboten criteriaVerboten criteria • Availability should not be a criterion, because it cannot be traded off.* • Assume oranges are available 6 months out of the year. • Would it make sense to do a tradeoff study selecting between apples and oranges and give oranges an availability expected value of 0.5? • Suppose your tradeoff study selects oranges, but it is October and oranges are not available: the tradeoff study makes no sense.
  • 136. 03/24/14 © 2009 Bahill136 Mini-summaryMini-summary Evaluation criteria are quantitative measures for evaluating how well a system satisfies its performance, cost, schedule or risk requirements.
  • 137. 03/24/14 © 2009 Bahill137 Evaluation criteria are also calledEvaluation criteria are also called • Attributes* • Objectives • Metrics • Measures • Quality characteristics • Figures of merit • Acceptance criteria “Regardless of what has gone before, the acceptance criteria determine what is actually built.”
  • 138. 03/24/14 © 2009 Bahill138 Other similar termsOther similar terms • Index • Indicators • Factors • Scales • Measures of Effectiveness • Measures of Performance
  • 139. 03/24/14 © 2009 Bahill139 MoE versus MoPMoE versus MoP • Generally, it is not worth the effort to debate nuances of these terms. But here is an example. • Measures of Effectiveness (MoEs) show how well (utility or value) a part of the system mission is satisfied. For an undergraduate student trying to earn a Bachelors degree, his or her class (Freshman, Sophomore, Junior or Senior) would be an MoE. • Measures of Performance (MoPs) show how well the system functions. For our undergraduate student, his or her grade point average would be an MoP.* • MoEs are often computed using several MoPs.
  • 140. MoEs versus MoPsMoEs versus MoPs22 •The city of Tucson wants to widen Grant Road between I-10 and Alvernon Road. They want six lanes with a median, a 45 mph speed limit, and no traffic jams. •MoEs  cars per day averaged over two weeks  cars per hour between 5 and 6 PM, Monday to Friday, averaged over two weeks •MoPs  number of pot holes after one year  traffic noise (in dB) at local store fronts  smoothness of the surface  esthetics of landscaping  straightness of the road  travel time from I-10 to Alvernon  number of traffic lights 03/24/14 © 2009 Bahill140
  • 141. MoEs versus MoPsMoEs versus MoPs33 • MoEs are typically owned by the customer • MoPs are typically owned by the contractor 03/24/14 © 2009 Bahill141
  • 142. 03/24/14 © 2009 Bahill142 Moe* thinks at a higher level than the mop does
  • 143. MoEs, MoPs, KPIs, FoMsMoEs, MoPs, KPIs, FoMs and evaluation criteriaand evaluation criteria • MoEs quantify how well the mission is satisfied • MoPs quantify how well the system functions • Key performance indices (KPIs) are the most important MoPs • Evaluation criteria are MoPs that are used in tradeoff studies • Figures of Merit (FoMs) are the same as evaluation criteria. • All of these must trace to requirements 03/24/14 © 2009 Bahill143
  • 144. 03/24/14 © 2009 Bahill144 Properties of Good Evaluation CriteriaProperties of Good Evaluation Criteria
  • 145. 03/24/14 © 2009 Bahill145 Properties of good evaluation criteriaProperties of good evaluation criteria • Criteria should be objective • Criteria should be quantitative • Wording of criteria is very important • Criteria should be independent • Criteria should show compensation • Criteria should be linked to requirements • The criteria set should be hierarchical • The criteria set should cover the domain evenly • The criteria set should be transitive • Temporal order should not be important • Criteria should be time invariant Overview slide
  • 146. 03/24/14 © 2009 Bahill146 Evaluation criteria propertiesEvaluation criteria properties • These properties deal with  verification  the combining function  individual criteria  sets of criteria • But problems created by violating these properties can be ameliorated by reengineering the criteria
  • 147. 03/24/14 © 2009 Bahill147 Evaluation criteria should be objectiveEvaluation criteria should be objective (observer independent)(observer independent) • Being Pretty or Nice should not be a criterion for selecting crewmembers • In sports, Most Valuable Player selections are often controversial • Deriving a consensus for the Best Football Player of the Century would be impossible
  • 148. 03/24/14 © 2009 Bahill148 Evaluation criteria should be quantitativeEvaluation criteria should be quantitative Each criterion should have a scoring function
  • 149. 03/24/14 © 2009 Bahill149 Evaluation criteria should be worded in aEvaluation criteria should be worded in a positive manner, so that more is betterpositive manner, so that more is better** • Use Uptime rather than Downtime. • Use Mean Time Between Failures rather than Failure Rate. • Use Probability of Success, rather than Probability of Failure. • When using scoring functions make sure more output is better • “Nobody does it like Sara LeeSM ”
  • 150. 03/24/14 © 2009 Bahill150 Exercise: rewrite this statementExercise: rewrite this statement We have a surgical procedure that should cure your problem. Statistically one percent of the people who undergo this surgery die. Would you like to have this surgery?
  • 151. 03/24/14 © 2009 Bahill151 Percent happy scoutsPercent happy scouts • The Pinewood Derby tradeoff study had these criteria  Percent Happy Scouts  Number of Irate Parents • Because people evaluate losses and gains differently, the Preferred alternatives might have been different if they had used  Percent Unhappy Scouts  Number of Ecstatic Parents
  • 152. 03/24/14 © 2009 Bahill152
  • 153. 03/24/14 © 2009 Bahill153 Criteria should be independentCriteria should be independent • Human Sex and IQ are independent • Human Height and Weight are dependent
  • 154. 03/24/14 © 2009 Bahill154 The importance of independenceThe importance of independence Buying a new car, couple-1 criteria • Wife  Safety • Husband  Peak Horse Power
  • 155. 03/24/14 © 2009 Bahill155 Buying a new car, couple-2 criteriaBuying a new car, couple-2 criteria • Wife  Safety • Husband  Maximum Horse Power  Peak Torque  Top Speed  Time for the Standing Quarter Mile  Engine Size (in liters)  Number of Cylinders.  Time to Accelerate 0 to 60 mph What kind of a car do you think they will buy?*
  • 156. 03/24/14 © 2009 Bahill156 Criteria should show compensationCriteria should show compensation From the Systems Engineering literature, tradeoff requirements show compensation Dictionary definition compensate v. 1. To offset: counterbalance. Compensate means to tradeoff. You are happy to accept less of one thing in order to get more of another and vice versa.
  • 157. 03/24/14 © 2009 Bahill157 Perfect compensationPerfect compensation • Astronauts growing food on a trip to Mars • Two criteria: Amount of Rice Grown and Amount of Beans Grown • Goal: maximize* total amount of food • A lot of rice and a few beans is just as good as lots of beans and little rice • We can tradeoff beans for rice
  • 158. 03/24/14 © 2009 Bahill158 No compensationNo compensation • A system that produces oxygen and water for our astronauts • A system that produces a huge amount of water but no oxygen might get the highest score, but clearly it would not support life for long. • From Systems Engineering, mandatory requirements show no compensation
  • 159. 03/24/14 © 2009 Bahill159 Choosing today’s lunchChoosing today’s lunch • Candidate meals: pizza, hamburger, fish & chips, chicken sandwich, beer, tacos, bread and water • Criteria: Cost, Preparation Time, Tastiness, Novelty, Low Fat, Contains the Five Food Groups, Complements Merlot Wine, Closeness of Venue • These criteria are independent and also show compensation • Criteria are usually nouns, noun phrases or verb phrases
  • 160. 03/24/14 © 2009 Bahill160
  • 161. 03/24/14 © 2009 Bahill161
  • 162. 03/24/14 © 2009 Bahill162
  • 163. 03/24/14 © 2009 Bahill163 Sometimes it is hard to get bothSometimes it is hard to get both independence and compensationindependence and compensation • If two criteria are independent, they might not show compensation • If they show compensation, they might not be independent • Independence is more important for mandatory requirements • Compensation is more important for tradeoff requirements
  • 164. 03/24/14 © 2009 Bahill164 RelationshipsRelationships • Each evaluation criterion must be linked to a tradeoff requirement.  Or in early design phases to a Mission statement, ConOps, OCD or company policy. • But only a few tradeoff requirements are used in the tradeoff study.
  • 165. 03/24/14 © 2009 Bahill165 Evaluation criteria hierarchyEvaluation criteria hierarchy • The criteria tree should be hierarchical • The top level often contains  Performance  Cost  Schedule  Risk • Dependent entries are grouped into subcategories • The criteria set should cover the domain evenly
  • 166. 03/24/14 © 2009 Bahill166 Evaluation criteria set should be transitiveEvaluation criteria set should be transitive** If A is preferred to B, and B is preferred to C, then A should be preferred to C. This property is needed for assigning weights.
  • 167. 03/24/14 © 2009 Bahill167 Temporal orderTemporal order should not be importantshould not be important Criteria should be created so that the temporal order is not important for verifying or combining.
  • 168. 03/24/14 © 2009 Bahill168 The temporal order of verifyingThe temporal order of verifying criteria should not be importantcriteria should not be important • Criteria requiring that clothing be Flame Proof and Water Resistant would make the verification results depend on which we tested first  If the criteria depend on temporal order, then an expert system or a decision tree might be more suitable
  • 169. 03/24/14 © 2009 Bahill169 Temporal orderTemporal order should not be importantshould not be important • Fragment of a job application • Q: “Have you ever been arrested?”  A: “No.” • Q: “Why?”  A: “Never got caught.”
  • 170. 03/24/14 © 2009 Bahill170 The temporal order of combiningThe temporal order of combining criteria should not be importantcriteria should not be important • Consider a combining function (CF) that adds two numbers truncating the fraction (0.2 CF 0.6) CF 0.9 = 0, however, (0.9 CF 0.6) CF 0.2 = 1, the result depends on the order. • With the Boolean NAND* function (↑) (0 ↑1) ↑ 1 = 0 however, (1 ↑1) ↑ 0 = 1, the result depends on the order.
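A few lines of Python make the order dependence concrete; the combining function here is just the toy truncating addition described above.

```python
def cf(a, b):
    """Toy combining function: add two numbers, then truncate the fraction."""
    return int(a + b)

print(cf(cf(0.2, 0.6), 0.9))   # 0
print(cf(cf(0.9, 0.6), 0.2))   # 1 -- the result depends on the order

def nand(a, b):
    """Boolean NAND."""
    return int(not (a and b))

print(nand(nand(0, 1), 1))     # 0
print(nand(nand(1, 1), 0))     # 1 -- again order-dependent
```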
  • 171. Order of presentation is importantOrder of presentation is important • The starred question is the only question that department and college promotion committees look at. It is the only question reported in the TCE History. • Larry Aleamoni’s CIEQ • I would take another course that was taught this way • The course was quite boring • The instructor seemed interested in students as individuals • The instructor exhibited a thorough knowledge of the subject matter What is your overall rating of this instructor’s teaching effectiveness? • TCE  What is your overall rating of this instructor’s teaching effectiveness? • What is your overall rating of the course? • Rate the usefulness of HW, projects, etc. • What is your rating of this instructor compared to other instructors? • The difficulty level of the course is … 03/24/14 © 2009 Bahill171
  • 172. 03/24/14 © 2009 Bahill172 Criteria should be time invariantCriteria should be time invariant • Criteria should not change with time • It would be nice if the evaluation data also did not change with time, but this is unrealistic
  • 173. 03/24/14 © 2009 Bahill173 Evaluation cEvaluation criteria libraryriteria library • Criteria should be created so that they can be reused. • Your company should have library of generic criteria. • Each criterion package would have the following slots  Name  Description  Weight of importance (priority)  Basic measure  Units  Measurement method  Input (with allowed and expected range)  Output  Scoring function (type and parameters)  Trace to (document)
  • 174. 03/24/14 © 2009 Bahill174 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria  Weights of importance • Alternative solutions • Evaluation data • Scoring functions • Scores • Combining functions • Preferred alternatives • Sensitivity analysis
  • 175. 03/24/14 © 2009 Bahill175 Weights of importanceWeights of importance The decision maker should assign weights so that the more important criteria will have more effect on the outcome.
  • 176. 03/24/14 © 2009 Bahill176 Using weightsUsing weights For the Sum Combining Function:
$$\text{Output} = \sum_{j=1}^{n} weight_j \times score_j$$
For the Product Combining Function, the weights should be put in the exponent:
$$\text{Output} = \prod_{j=1}^{n} score_j^{\,weight_j}$$
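A minimal sketch of the two combining functions, assuming the weights have already been normalized to sum to one and the scores lie between 0 and 1:

```python
def sum_combine(weights, scores):
    """Sum combining function: output = sum of weight_j * score_j."""
    return sum(w * s for w, s in zip(weights, scores))

def product_combine(weights, scores):
    """Product combining function: output = product of score_j ** weight_j."""
    out = 1.0
    for w, s in zip(weights, scores):
        out *= s ** w
    return out

weights = [0.50, 0.25, 0.25]                    # normalized weights
scores  = [0.98, 0.50, 1.00]                    # scores from scoring functions
print(round(sum_combine(weights, scores), 3))       # 0.865
print(round(product_combine(weights, scores), 3))   # 0.832; the product penalizes low scores more
```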
  • 177. 03/24/14 © 2009 Bahill177 Part of a Pinewood Derby tradeoff studyPart of a Pinewood Derby tradeoff study Performance figures of merit evaluated on a prototype for a Round Robin with Best Time Scoring
Figure of Merit | Input value | Score | Weight | Score times weight
1. Average Races per Car | 6 | 0.94 | 0.20 | 0.19
2. Number of Ties | 0 | 1 | 0.20 | 0.20
3. Happiness | | 0.87 | 0.60 | 0.52
Sub-criteria of Happiness | Qualitative weight | Normalized weight | Input value | Scoring function | Score | Score times weight
3.1 Percent Happy Scouts | 10 | 0.50 | 96 | | 0.98 | 0.49
3.2 Number of Irate Parents | 5 | 0.25 | 1 | | 0.50 | 0.13
3.3 Number of Lane Repeats | 5 | 0.25 | 0 | | 1.00 | 0.25
Sum | 0.87 (Happiness) | 0.91 (overall)
  • 178. 03/24/14 © 2009 Bahill178 Aspects that help establish weightsAspects that help establish weights Reference: A Prioritization Process Organizational Commitment Time Required Criticality to Mission Success Risk Architecture Safety Business Value Complexity Priority of Scenarios (use cases) Implementation Difficulty Frequency of Use Stability Benefit Dependencies Cost Reuse Potential Benefit to Cost Ratio When it is needed
  • 179. 03/24/14 © 2009 Bahill179
  • 180. 03/24/14 © 2009 Bahill180 Cardinal versus ordinalCardinal versus ordinal • Weights should be cardinal measures not ordinal measures. • Cardinal measures indicate size or quantity. • Ordinal measures merely indicate rank ordering.* • Cardinal numbers do not just tell us that one criterion is more important than another – they tell us how much more important. • If one criterion has a weight of 6 and another a weight of 3, then the first is twice as important as the second.
  • 181. 03/24/14 © 2009 Bahill181 Methods for deriving weights*Methods for deriving weights* • Decision maker assigns numbers between 1 and 10 to criteria* • Decision maker rank orders the criteria* • Decision maker makes pair-wise comparisons of criteria (AHP)* • Algorithms are available that combine performance, cost, schedule and risk • Quality Function Deployment (QFD) • The method of swing weights • Some people advocate assigning weights only after deriving evaluation data*
  • 182. 03/24/14 © 2009 Bahill182 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria • Weights of importance  Alternative solutions • Evaluation data • Scoring functions • Scores • Combining functions • Preferred alternatives • Sensitivity analysis
  • 183. 03/24/14 © 2009 Bahill183 AlternativesAlternatives
  • 184. 03/24/14 © 2009 Bahill184 The Do Nothing AlternativeThe Do Nothing Alternative
  • 185. 03/24/14 © 2009 Bahill185 The status quoThe status quo "Selecting an option from a group of similar options can be difficult to justify and thus may increase the apparent attractiveness of retaining the status quo. To avoid this tendency, the decision maker should identify each potentially attractive option and compare it directly to the status quo, in the absence of competing alternatives. If such direct comparison yields discrepant judgments, the decision maker should reflect on the inconsistency before making a final choice." Redelmeier and Shafir, 1995
  • 186. 03/24/14 © 2009 Bahill186 Selecting a new carSelecting a new car Bahill has a Datsun 240Z with 160,000 miles His replacement options are DoDo NothingNothing
  • 187. 03/24/14 © 2009 Bahill187 The Do Nothing alternatives forThe Do Nothing alternatives for replacing a Datsun 240Z  Status quo, keep the 240Z  Nihilism, do without a car, i.e., walk or take the bus
  • 188. 03/24/14 © 2009 Bahill188 If the Do Nothing alternative wins,If the Do Nothing alternative wins, your Cost, Schedule and Risk criteria may have overwhelmed your Performance criteria.
  • 189. 03/24/14 © 2009 Bahill189 If a Do Nothing alternative winsIf a Do Nothing alternative wins22 • Just as you should not add apples and oranges, you should not combine Performance, Cost, Schedule and Risk criteria with each other  Combine the Performance criteria (with their weights normalized so that they add up to one)  Combine the Cost criteria  Combine the Schedule criteria  Combine the Risk criteria • Then the Performance, Cost, Schedule and Risk combinations can be combined with clearly stated weights, 1/4, 1/4, 1/4 and 1/4 could be the default. • If a Do Nothing alternative still wins, you may have the weight for Performance too low.
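A sketch of that two-level combination is shown below; all of the numbers and category weights are made up purely for illustration.

```python
def weighted_sum(pairs):
    """pairs: iterable of (normalized weight, score) tuples."""
    return sum(w * s for w, s in pairs)

# First combine within each category (weights normalized within the category).
performance = weighted_sum([(0.6, 0.9), (0.4, 0.7)])
cost        = weighted_sum([(1.0, 0.4)])
schedule    = weighted_sum([(0.5, 0.8), (0.5, 0.6)])
risk        = weighted_sum([(1.0, 0.5)])

# Then combine the four category scores with clearly stated top-level weights.
top = [(0.25, performance), (0.25, cost), (0.25, schedule), (0.25, risk)]
print(round(weighted_sum(top), 3))   # about 0.605 for these made-up numbers
```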
  • 190. 03/24/14 © 2009 Bahill190 Balanced scorecardBalanced scorecard The Business community says that you should balance these perspectives:  Innovation (Learning and Growth)  Internal Processes  Customer  Financial
  • 191. 03/24/14 © 2009 Bahill191 Sacred cowsSacred cows** • One important purpose for including a do nothing alternative (and other bizarre alternatives) is to help get the requirements right. If a bizarre alternative wins the tradeoff analysis, then you do not have the requirements right. • Similarly, including sacred cows among the alternatives will also test the adequacy of the requirements. • “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.” -- Richard Feynman
  • 192. 03/24/14 © 2009 Bahill192 Alternative conceptsAlternative concepts • When formulating alternative concepts, remember Miller’s* “magical number seven, plus or minus two.” • Also remember that introducing more alternatives only confuses the DM and makes him or her less likely to choose one of the new alternatives.**
  • 193. 03/24/14 © 2009 Bahill193 SynonymsSynonyms • Alternative concepts • Alternative solutions • Alternative designs • Alternative architectures • Options
  • 194. 03/24/14 © 2009 Bahill194 RiskRisk • The risks included in a tradeoff study should only be those that can be traded-off. Do not include the highest-level risks. • Risks might be computed in a separate section, because they usually use the product combining function.
  • 195. 03/24/14 © 2009 Bahill195 CAIVCAIV • Cost as an independent variable (CAIV) • Treating CAIV means that you should do the tradeoff study with a specific cost and then go talk to your customer and see what performance, schedule and risk requirements he or she is willing to give up in order to get that cost. • So if you want to treat CAIV, then keep your tradeoff study independent of cost: that is, do not use cost criteria in your tradeoff study.
  • 196. 03/24/14 © 2009 Bahill196 Two types of requirementsTwo types of requirements •There are two types of requirements mandatory requirements tradeoff requirements
  • 197. 03/24/14 © 2009 Bahill197 Mandatory requirementsMandatory requirements • Mandatory requirements specify necessary and sufficient capabilities that the system must have to satisfy customer needs and expectations. • They use the words shall or must. • They are either passed or failed, with no in between. • They should not be included in a tradeoff study. • Here is an example of a mandatory requirement:  The system shall not violate federal, state or local laws.
  • 198. 03/24/14 © 2009 Bahill198 Tradeoff requirementsTradeoff requirements • Tradeoff requirements state capabilities that would make the customer happier. • They use the words should or want. • They use measures of effectiveness and scoring functions. • They are evaluated with multicriterion decision techniques. • There will be tradeoffs among these requirements. • Here is an example of a tradeoff requirement: Dinner should have items from each of the five food groups: Grains, Vegetables, Fruits, Wine, Milk, and Meat and Beans. • Mandatory requirements are often the upper or lower limits of tradeoff requirements.
  • 199. 03/24/14 © 2009 Bahill199 Mandatory requirementsMandatory requirements should not be in a tradeoff study, because they cannot be traded off. • Including them screws things up incredibly.
  • 200. 03/24/14 © 2009 Bahill200 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria • Weights of importance • Alternative solutions  Evaluation data • Scoring functions • Scores • Combining functions • Preferred alternatives • Sensitivity analysis
  • 201. 03/24/14 © 2009 Bahill201 Evaluation dataEvaluation data11 • Evaluation data come from approximations, product literature, analysis, models, simulations, experiments and prototypes. • It would be nice if these values were objective, but sometimes we must resort to elicitation of personal preferences.* • They will be measured in natural units.
  • 202. 03/24/14 © 2009 Bahill202 Evaluation dataEvaluation data22 • Evaluation data should be entered into the matrix one row (one criterion) at a time. • They indicate the degree to which each alternative satisfies each criterion. • They are not probabilities: they are more like fuzzy numbers, degree of membership or degree of fulfillment.
  • 203. 03/24/14 © 2009 Bahill203 UncertaintyUncertainty • Evaluation data (and weights of importance) should, when convenient, have measures of uncertainty associated with the data. • This could be done with probability density functions, fuzzy numbers, variance, expected range, certainty factors, confidence intervals, or simple color coding.
  • 204. 03/24/14 © 2009 Bahill204 NormalizationNormalization** • Evaluation data are transformed into normalized scores by scoring functions (utility curves) or qualitative scales (fuzzy sets). • The outputs of such transformations should be cardinal numbers representing the DMs utility.
  • 205. 03/24/14 © 2009 Bahill205 Scoring function exampleScoring function example This scoring function reflects the DM’s utility that he would be twice as satisfied if there were 91% happy scouts compared to 88% happy scouts.*
  • 206. 03/24/14 © 2009 Bahill206 QualitativeQualitative scales examplesscales examples
Evaluation data | Qualitative evaluation | Output
Good example:
0 to 86% happy scouts | Not satisfied | 0.2
86 to 89% happy scouts | Marginally satisfied | 0.4
89 to 91% happy scouts | Satisfied | 0.6
91 to 93% happy scouts | Very satisfied | 0.8
93 to 100% happy scouts | Elated | 1.0
Bad example:
0 to 20% happy scouts | Not satisfied | 0.2
20 to 40% happy scouts | Marginally satisfied | 0.4
40 to 60% happy scouts | Satisfied | 0.6
60 to 80% happy scouts | Very satisfied | 0.8
80 to 100% happy scouts | Elated | 1.0
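One way to implement the good-example qualitative scale is a simple binning function. A sketch in Python; the bin boundaries and outputs come from the table above, and the function name and everything else is illustrative.

def qualitative_score(percent_happy_scouts):
    # Map the evaluation datum onto the output of the qualitative scale (good example)
    bins = [(86, 0.2),    # Not satisfied
            (89, 0.4),    # Marginally satisfied
            (91, 0.6),    # Satisfied
            (93, 0.8)]    # Very satisfied
    for upper_edge, output in bins:
        if percent_happy_scouts < upper_edge:
            return output
    return 1.0            # Elated

print(qualitative_score(90))   # 0.6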
  • 207. 03/24/14 © 2009 Bahill207 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria • Weights of importance • Alternative solutions • Evaluation data  Scoring functions • Scores • Combining functions • Preferred alternatives • Sensitivity analysis
  • 208. 03/24/14 © 2009 Bahill208 What is the best package of soda pop to buy?*What is the best package of soda pop to buy?* Regular price of Coca-Cola in Tucson, January 1995. The Cost criterion is the reciprocal of price. The Performance criterion is the quantity in liters. Choosing Amongst Alternative Soda Pop Packages:
Item | Price ($) | Cost (1/$) | Quantity (liters) | Sum | Product | Sum Minus Product | Compromise with p=2 | Compromise with p=10
1 can | 0.50 | 2.00 | 0.35 | 2.35 | 0.70 | 1.65 | 2.03 | 2.00
20 oz | 0.60 | 1.67 | 0.59 | 2.26 | 0.98 | 1.27 | 1.77 | 1.67
1 liter | 0.79 | 1.27 | 1.00 | 2.27 | 1.27 | 1.00 | 1.62 | 1.27
2 liter | 1.29 | 0.78 | 2.00 | 2.78 | 1.56 | 1.22 | 2.15 | 2.00
6 pack | 2.29 | 0.44 | 2.13 | 2.57 | 0.94 | 1.63 | 2.17 | 2.13
3 liter | 1.69 | 0.59 | 3.00 | 3.59 | 1.78 | 1.81 | 3.06 | 3.00
12 pack | 3.59 | 0.28 | 4.26 | 4.54 | 1.19 | 3.35 | 4.27 | 4.26
24 pack | 5.19 | 0.19 | 8.52 | 8.71 | 1.62 | 7.09 | 8.52 | 8.52
  • 209. 03/24/14 © 2009 Bahill209 Numerical precisionNumerical precision**
  • 210. 03/24/14 © 2009 Bahill210 The preferred alternative depends on the unitsThe preferred alternative depends on the units For the Sum but not for the Product Tradeoff Function. Choosing Amongst Alternative Soda Pop Packages, Effect of Units:
Item | Price ($) | Cost (1/$) | Quantity (liters) | Sum | Product | Quantity (barrels) | Sum | Product
1 can | 0.50 | 2.00 | 0.35 | 2.35 | 0.70 | 0.0030 | 2.0030 | 0.0060
20 oz | 0.60 | 1.67 | 0.59 | 2.26 | 0.98 | 0.0050 | 1.6717 | 0.0084
1 liter | 0.79 | 1.27 | 1.00 | 2.27 | 1.27 | 0.0085 | 1.2785 | 0.0108
2 liter | 1.29 | 0.78 | 2.00 | 2.78 | 1.56 | 0.0170 | 0.7837 | 0.0132
6 pack | 2.29 | 0.44 | 2.13 | 2.57 | 0.94 | 0.0181 | 0.4548 | 0.0079
3 liter | 1.69 | 0.59 | 3.00 | 3.59 | 1.78 | 0.0256 | 0.6173 | 0.0151
12 pack | 3.59 | 0.28 | 4.26 | 4.54 | 1.19 | 0.0363 | 0.3148 | 0.0101
24 pack | 5.19 | 0.19 | 8.52 | 8.71 | 1.62 | 0.0726 | 0.2653 | 0.0140
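The units effect can be reproduced in a few lines of Python. This sketch uses three of the packages and assumes a conversion of roughly 117.3 liters per 31-gallon barrel (the conversion factor is not stated on the slide). It shows that changing units changes the Sum ranking but not the Product ranking, because scaling one criterion multiplies every alternative's product by the same constant.

packages = {            # price ($), quantity (liters)
    '1 can':   (0.50, 0.35),
    '3 liter': (1.69, 3.00),
    '24 pack': (5.19, 8.52),
}
LITERS_PER_BARREL = 117.3   # assumed conversion, 31 US gallons per barrel

for unit, scale in [('liters', 1.0), ('barrels', 1.0 / LITERS_PER_BARREL)]:
    print(unit)
    for name, (price, liters) in packages.items():
        cost = 1.0 / price              # the Cost criterion is the reciprocal of price
        quantity = liters * scale
        print(f'  {name:8s} sum = {cost + quantity:7.4f}   product = {cost * quantity:7.4f}')
# In liters the 24 pack has the largest Sum; in barrels the single can does.
# The 3 liter bottle has the largest Product in both unit systems.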
  • 211. 03/24/14 © 2009 Bahill211 Scoring functionsScoring functions • Criteria should always have scoring functions so that the preferred alternatives do not depend on the units used. • Scoring functions are also called  utility functions  utility curves  value functions  normalization functions  mappings
  • 212. 03/24/14 © 2009 Bahill212 Scoring function for CostScoring function for Cost**
  • 213. 03/24/14 © 2009 Bahill213 Scoring function for QuantityScoring function for Quantity** A simple program that creates graphs such as these is available for free at http://www.sie.arizona.edu/sysengr/slides. It is called the Wymorian Scoring Function tool.
  • 214. 03/24/14 © 2009 Bahill214 The scoring function equationThe scoring function equation** SSF1(CriteriaValue) = 1 / (1 + ((Baseline - Lower) / (CriteriaValue - Lower))^(2 × Slope × (Baseline + CriteriaValue - 2 × Lower)))
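A Python sketch of this scoring function, assuming the reconstruction above is the monotonically increasing Wymorian standard scoring function. With the happy-scouts parameters noted later in the editor's notes (Lower = 0, Baseline = 90, Slope = 0.1) it returns about 0.31 at 88% and 0.60 at 91%, which is consistent with the twice-as-satisfied remark on slide 205.

def ssf1(criteria_value, lower, baseline, slope):
    # Wymorian standard scoring function: 0 at lower, 0.5 at baseline, approaching 1 above
    if criteria_value <= lower:
        return 0.0
    exponent = 2.0 * slope * (baseline + criteria_value - 2.0 * lower)
    return 1.0 / (1.0 + ((baseline - lower) / (criteria_value - lower)) ** exponent)

for percent_happy in (80, 88, 90, 91, 95):
    print(percent_happy, round(ssf1(percent_happy, lower=0, baseline=90, slope=0.1), 3))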
  • 215. 03/24/14 © 2009 Bahill215 Evaluation data may be logarithmicEvaluation data may be logarithmic**
  • 216. 03/24/14 © 2009 Bahill216 The need for scoring functionsThe need for scoring functions11 ** • You can add $s and £s, but • you can’t add $s and lbs.
  • 217. 03/24/14 © 2009 Bahill217 The need for scoring functionsThe need for scoring functions22 • Would you add values for something that cost a billion dollars and lasted a nanosecond?* • Alt-1 costs a hundred dollars and lasts one millisecond, Sum = 100.001. • Alt-2 costs only ninety-nine dollars but lasts two milliseconds, Sum = 99.002. • Does the duration have any effect on the decision?
  • 218. 03/24/14 © 2009 Bahill218 Different Distributions of Alternatives inDifferent Distributions of Alternatives in Criteria SpaceCriteria Space** May Produce DifferentMay Produce Different Preferred AlternativesPreferred Alternatives
  • 219. Tradeoff of requirements*Tradeoff of requirements* 03/24/14 © 2009 Bahill219 0.25 0.50 0.75 1.00 0.00 5 10 15 200 Pages per Minute Cost(1/k$) 4P 4Plus 4Si
  • 220. 03/24/14 © 2009 Bahill220 Pareto OptimalPareto Optimal Moving from one alternative to another will improve at least one criterion and worsen at least one criterion, i.e., there will be tradeoffs. “The true value of a service or product is determined by what one is willing to give up to obtain it.”
  • 221. 03/24/14 © 2009 Bahill221 NomenclatureNomenclature Real-world data will not fall neatly onto lines such as the circle in the previous slide. But often they may be bounded by such functions. In the operations research literature such data sets are called convex, although the function bounding them is called concave (Kuhn and Tucker, 1951).
  • 222. 03/24/14 © 2009 Bahill222 Different distributionsDifferent distributions The feasible alternatives may have different distributions in the criteria space. These include:  Circle  Straight Line  Hyperbola
  • 223. 03/24/14 © 2009 Bahill223 Alternatives on a circleAlternatives on a circle** Assume the alternatives are on the circle x^2 + y^2 = 1, so y = sqrt(1 - x^2). Sum Combining Function: x + sqrt(1 - x^2), with the derivative d(Sum Combining Function)/dx = 1 - x/sqrt(1 - x^2). Product Combining Function: x·sqrt(1 - x^2), with the derivative d(Product Combining Function)/dx = (1 - 2x^2)/sqrt(1 - x^2). Both Combining Functions have maxima at x=y=0.707 (This result does depend on the weights.)
  • 224. 03/24/14 © 2009 Bahill224 Alternatives on a straight-LineAlternatives on a straight-Line Assume the alternatives are on the straight-line y = -x + 1 Sum Combining Function: x + y = x - x + 1 = 1 All alternatives are optimal (i.e. selection is not possible) Product Combining Function: x * y = -x^2 + x with d(Product Combining Function)/dx = -2x + 1 Product Combining Function: maximum at x=0.5 Sum Combining Function: all alternatives are equally good Product Combining Function seems better for decision aiding
  • 225. 03/24/14 © 2009 Bahill225 Alternatives on a hyperbolaAlternatives on a hyperbola** Assume the alternatives are on the hyperbola (x + 1)(y + 1) = 2, so y = 2/(x + 1) - 1. Sum Combining Function: x + y = x + 2/(x + 1) - 1, with d(Sum Combining Function)/dx = 1 - 2/(x + 1)^2. Product Combining Function: x * y = x(1 - x)/(x + 1), with d(Product Combining Function)/dx = (1 - 2x - x^2)/(x + 1)^2. The Product Combining Function has its maximum at x = y = √2 - 1 ≈ 0.414, whereas the Sum Combining Function is largest at the end points.
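The three distributions can be checked numerically. A sketch using NumPy; the grid and its spacing are arbitrary.

import numpy as np

x = np.linspace(1e-6, 1 - 1e-6, 100001)
curves = {
    'circle    x^2 + y^2 = 1':   np.sqrt(1 - x**2),
    'line      y = -x + 1':      1 - x,
    'hyperbola (x+1)(y+1) = 2':  2 / (x + 1) - 1,
}
for name, y in curves.items():
    print(name,
          '| Sum max near x =', round(float(x[np.argmax(x + y)]), 3),
          '| Product max near x =', round(float(x[np.argmax(x * y)]), 3))
# Circle: both maxima at x = y = 0.707.
# Line: the Sum is flat (every alternative ties), so its argmax is arbitrary;
#       the Product peaks at x = y = 0.5.
# Hyperbola: the Sum is largest at the end points, the Product at x = y = 0.414.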
  • 226. 03/24/14 © 2009 Bahill226
  • 227. 03/24/14 © 2009 Bahill227
  • 228. A lively baseball debateA lively baseball debate • For over 30 years baseball statisticians have argued over the best measure of offensive effectiveness. • Two of the most popular measures are  On-base plus slugging OPS = OBP + SLG  Batter’s run average BRA = OBP x SLG • I think their arguments ignored the most relevant data, the shape of the distribution of OBP and SLG for major league players. • If it is circular either will work. • If it is hyperbolic, do not use the sum. 03/24/14 © 2009 Bahill228
  • 229. 03/24/14 © 2009 Bahill229 Muscle force-velocity relationshipMuscle force-velocity relationship • (Force + F0)(velocity + vmax) = constant, where F0 (the isometric force) and vmax (the maximum muscle velocity) are constants. • Humans sometimes use one combining function and sometimes they use another. • If a bicyclist wants maximum acceleration, he or she uses the point (0, F0). If there is no resistance and maximum speed is desired, use the point (vmax, 0). These solutions result from maximizing the sum of force and velocity. • However, if there is energy dissipation (e.g., friction, air resistance) and maximum speed is desired, choose the maximum power point, the maximum product of force and velocity. • This shows that the appropriate tradeoff function may depend on the task at hand.
  • 230. 03/24/14 © 2009 Bahill230 Nonconvex data setsNonconvex data sets The muscle force-velocity relationship fit neatly onto lines such as this hyperbola. This will not always be the case. But when it is not, the data may be bounded by such functions. In the operations research literature such data sets are called concave, although the function bounding them is called convex (Kuhn and Tucker, 1951).
  • 231. 03/24/14 © 2009 Bahill231 Mini-summaryMini-summary • The Product Combining Function always favors alternatives with moderate scores for all criteria. It rejects alternatives with a low score for any criterion. • Therefore the Product Combining Function may seem better than the Sum Combining Function. But the Sum Combining Function is used much more in systems engineering.
  • 232. 03/24/14 © 2009 Bahill232 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria • Weights of importance • Alternative solutions • Evaluation data • Scoring functions • Scores  Combining functions • Preferred alternatives • Sensitivity analysis
  • 233. 03/24/14 © 2009 Bahill233 Summation is not alwaysSummation is not always the best way to combine datathe best way to combine data**
  • 234. 03/24/14 © 2009 Bahill234 Popular combining functionsPopular combining functions • Sum Combining Function = x + y  Used most often by engineers • Product Combining Function = x ∗ y  Cost to benefit ratio  Risk analyses  Game theory* • Sum Minus Product = x + y - xy  Probability theory  Fuzzy logic systems  Expert system certainty factors • Compromise = (x^p + y^p)^(1/p)
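The four combining functions written out in Python for two normalized scores x and y; a sketch with the weights omitted for brevity.

def sum_cf(x, y):               return x + y
def product_cf(x, y):           return x * y
def sum_minus_product_cf(x, y): return x + y - x * y
def compromise_cf(x, y, p):     return (x**p + y**p) ** (1.0 / p)

x, y = 0.8, 0.3
print(round(sum_cf(x, y), 3), round(product_cf(x, y), 3),
      round(sum_minus_product_cf(x, y), 3), round(compromise_cf(x, y, 2), 3))
# 1.1 0.24 0.86 0.854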
  • 235. 03/24/14 © 2009 Bahill235 XORXOR** • The previous combining functions implemented an AND function of the criteria. • There is no combining function that implements the exclusive or (XOR) function, e.g. • Criterion-1: Fuel consumption in highway driving, miles per gallon of gasoline. Baseline = 23 mpg. • Criterion-2: Fuel consumption in highway driving, miles per gallon of diesel fuel. Baseline = 26 mpg. • You want to use criterion-1 for alternatives with gasoline engines and criterion-2 for alternatives with diesel engines.
  • 236. 03/24/14 © 2009 Bahill236 The American public acceptsThe American public accepts the Sum Combining Functionthe Sum Combining Function • It is used to rate NFL quarterbacks • It is used to select the best college football teams
  • 237. 03/24/14 © 2009 Bahill237 NFL quarterback passer ratingsNFL quarterback passer ratings BM stands for basic measure BM1 = (Completed Passes) / (Pass Attempts) BM2 = (Passing Yards) / (Pass Attempts) BM3 = (Touchdown Passes) / (Pass Attempts) BM4 = Interceptions / (Pass Attempts) Rating = [5(BM1-0.3) + 0.25(BM2-3) + 20(BM3) + 25(- BM4+0.095)]*100/6
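A direct transcription of this formula into Python. This is the simplified rating as given on the slide, not the official NFL computation, and the season statistics below are hypothetical.

def passer_rating(completions, attempts, yards, touchdowns, interceptions):
    bm1 = completions / attempts      # completion rate
    bm2 = yards / attempts            # yards per attempt
    bm3 = touchdowns / attempts       # touchdown rate
    bm4 = interceptions / attempts    # interception rate
    return (5*(bm1 - 0.3) + 0.25*(bm2 - 3) + 20*bm3 + 25*(-bm4 + 0.095)) * 100 / 6

# Hypothetical season: 325 completions on 500 attempts, 4100 yards, 30 TD, 13 INT
print(round(passer_rating(325, 500, 4100, 30, 13), 1))   # about 99.6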
  • 238. 03/24/14 © 2009 Bahill238 College football BCSCollege football BCS** BM1 = Polls: AP media & ESPN coaches BM2 = Computer Rankings: Seattle Times, NY Times, Jeff Sagarin, etc. BM3 = Strength of Schedule BM4 = Number of Losses Rating = [BM1 + BM2 + BM3 - BM4] http://sports.espn.go.com/ncf/abcsports/BCSStandings www.bcsFootball.org
  • 239. 03/24/14 © 2009 Bahill239 What is the best package of soda pop to buy?What is the best package of soda pop to buy?** Regular price of Coca-Cola in Tucson, January 1995. The Cost criterion is the reciprocal of price. The Performance criterion is the quantity in liters. Choosing Amongst Alternative Soda Pop Packages:
Item | Price ($) | Cost (1/$) | Quantity (liters) | Sum | Product | Sum Minus Product | Compromise with p=2 | Compromise with p=10
1 can | 0.50 | 2.00 | 0.35 | 2.35 | 0.70 | 1.65 | 2.03 | 2.00
20 oz | 0.60 | 1.67 | 0.59 | 2.26 | 0.98 | 1.27 | 1.77 | 1.67
1 liter | 0.79 | 1.27 | 1.00 | 2.27 | 1.27 | 1.00 | 1.62 | 1.27
2 liter | 1.29 | 0.78 | 2.00 | 2.78 | 1.56 | 1.22 | 2.15 | 2.00
6 pack | 2.29 | 0.44 | 2.13 | 2.57 | 0.94 | 1.63 | 2.17 | 2.13
3 liter | 1.69 | 0.59 | 3.00 | 3.59 | 1.78 | 1.81 | 3.06 | 3.00
12 pack | 3.59 | 0.28 | 4.26 | 4.54 | 1.19 | 3.35 | 4.27 | 4.26
24 pack | 5.19 | 0.19 | 8.52 | 8.71 | 1.62 | 7.09 | 8.52 | 8.52
  • 240. 03/24/14 © 2009 Bahill240 ResultsResults • The Product Combining Function suggests that the preferred package is the three liter bottle • However, the other combining functions all recommend the 24 pack • Plotting these data on Cartesian coordinates produces a nonconvex distribution • The best hyperbolic fit to these data is (quantity + 0.63)(cost + 0.08) = 2
  • 241. 03/24/14 © 2009 Bahill241 Soda pop dataSoda pop data 0 0.5 1 1.5 2 2.5 0 5 10 Quantity (liters) Cost(1/dollars)
  • 242. 03/24/14 © 2009 Bahill242
  • 243. 03/24/14 © 2009 Bahill243 Which matchesWhich matches human decision making?human decision making? • For a nonconvex distribution, the Sum Combining Function will favor the points at either end of the distribution. Sometimes this matches human decision making.  I usually buy a case of soda for my family.  A person working in an office building on a Sunday afternoon might buy a single can from the vending machine. • A frugal person might want to maximize the product of cost and performance, i.e. the maximum liters/dollar (the biggest bang for the buck), which is the three liter bottle. This matches the recommendation of the Product Combining Function.
  • 244. 03/24/14 © 2009 Bahill244 Which matches humanWhich matches human decision making?decision making? (cont.)(cont.) This example shows that for a nonconvex distribution of alternatives, the choice of the combining function determines the preferred alternative.
  • 245. 03/24/14 © 2009 Bahill245 Who was the best NFL quarterback?Who was the best NFL quarterback? • NFL quarterback passer ratings • BM1 = (Completed Passes) / (Pass Attempts) • BM2 = (Passing Yards) / (Pass Attempts) • BM3 = (Touchdown Passes) / (Pass Attempts) • BM4 = Interceptions / (Pass Attempts) • Rating = [5(BM1-0.3) + 0.25(BM2-3) + 20(BM3) + 25(-BM4+0.095)]*100/6
  • 246. 03/24/14 © 2009 Bahill246 The best NFL quarterback for 1999The best NFL quarterback for 1999 http://www.football.espn.go.com/nfl/statistics/
Rank | Sum (p=1) | Product | Sum Minus Product | Compromise with p=2 | Compromise with p=∞
1 | Kurt Warner | Kurt Warner | Kurt Warner | Kurt Warner | Kurt Warner
2 | Steve Beuerlein | Jeff George | Steve Beuerlein | Steve Beuerlein | Jeff George
3 | Jeff George | Steve Beuerlein | Jeff George | Peyton Manning | Steve Beuerlein
4 | Peyton Manning | Peyton Manning | Peyton Manning | Jeff George | Peyton Manning
  • 247. The best NFL quarterback 1994The best NFL quarterback 1994 03/24/14 © 2009 Bahill247
Rank | Sum | Product | Sum Minus Product | Compromise with p=∞
1 | Steve Young | Steve Young | Steve Bono | Steve Bono
2 | John Elway | John Elway | Bubby Brister | Steve Young
3 | Dan Marino | Dan Marino | Steve Beuerlein | Bobby Herbert
4 | Bobby Herbert | Bobby Herbert | Jeff George | Dan Marino
5 | Eric Kramer | Warren Moon | Neil O’Donnell | Eric Kramer
  • 248. 03/24/14 © 2009 Bahill248 A manned mission to MarsA manned mission to Mars11 • The astronauts will grow beans and rice • Lots of beans and a little rice is just as good as lots of rice and a few beans • Both the Sum and the Product Combining Functions work fine
  • 249. 03/24/14 © 2009 Bahill249 A manned mission to MarsA manned mission to Mars22 • The astronauts need a system that produces oxygen and water • The Product Combining Function works fine • But the Sum Combining Function could recommend zero water or zero oxygen
  • 250. 03/24/14 © 2009 Bahill250 Implementing the combining functionsImplementing the combining functions • The Analytic Hierarchy Process (implemented by the commercial tool Expert Choice) allows the user to choose between the sum and the product combining functions. • You would have to implement the other combining functions by yourself.
  • 251. 03/24/14 © 2009 Bahill251 TheThe compromise combining function*compromise combining function* Compromise = (x^p + y^p)^(1/p)
  • 252. 03/24/14 © 2009 Bahill252 When shouldWhen should pp be 1, 2 orbe 1, 2 or ∞∞?? • Use p = 1 if the criteria show perfect compensation • Use p = 2 if you want Euclidean distance. • Use p = ∞ if you are selecting a hero and there is no compensation • Compromise = (x^p + y^p)^(1/p)
  • 253. 03/24/14 © 2009 Bahill253 IfIf pp == ∞∞ • The preferred alternative is the one with the largest criterion • There is no compensation, because only one criterion (the largest) is considered • Compromise Output = (x^p + y^p)^(1/p) • If p is large and x > y, then x^p >> y^p and Compromise Output ≈ (x^p)^(1/p) = x
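A quick numerical illustration of that limit, with arbitrary scores x = 0.9 and y = 0.6:

x, y = 0.9, 0.6
for p in (1, 2, 10, 100):
    print(p, round((x**p + y**p) ** (1.0 / p), 4))
# p=1 -> 1.5, p=2 -> 1.0817, p=10 -> 0.9015, p=100 -> 0.9:
# as p grows, the compromise output approaches max(x, y).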
  • 254. 03/24/14 © 2009 Bahill254 UseUse pp == ∞∞ when selectingwhen selecting • the greatest athlete of the century using Number of National Championship Rings* and Peak Salary • the baseball player of the week using Home Runs and Pitching Strikeouts • a movie using Romance, Action and Comedy
  • 255. 03/24/14 © 2009 Bahill255 NBA teams seem to useNBA teams seem to use pp == ∞∞ • When drafting basketball players • Criteria are Height and Assists • They want seven-foot players with ten assists per game (the ideal point) • In years when there are many point guards but no centers, they draft the best point guards • Choose the criterion with the maximum score (Assists) and then select the alternative whose number of Assists has the minimum distance to the ideal point
  • 256. 03/24/14 © 2009 Bahill256 UseUse pp == ∞∞ when choosing minimaxwhen choosing minimax • A water treatment plant to reduce the amount of mercury, lead and arsenic in the water. • Trace amounts are not of concern. • First, find the poison with the maximum concentration, then choose the alternative with the minimum amount of that poison. • Hence the term minimax.
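A sketch of the minimax selection in Python. The two candidate plants and their poison concentrations (parts per billion) are hypothetical.

plants = {
    'Plant A': {'mercury': 4, 'lead': 9, 'arsenic': 2},
    'Plant B': {'mercury': 6, 'lead': 7, 'arsenic': 3},
}
# For each alternative take its worst (maximum) poison concentration,
# then pick the alternative whose worst poison is smallest.
preferred = min(plants, key=lambda name: max(plants[name].values()))
print(preferred)   # Plant B: its worst poison (lead, 7 ppb) beats Plant A's worst (lead, 9 ppb)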
  • 257. 03/24/14 © 2009 Bahill257 Design of a baseball batDesign of a baseball bat • The ball goes the farthest, if it hits the sweet spot of the bat • Error = |sweet spot - hit point| • Loss = number of feet short of 500 • For an amateur use minimax: minimize the Loss, if the Error is maximum • For Alex Rodriguez use minimin
  • 258. 03/24/14 © 2009 Bahill258 The distanceThe distance the ballthe ball travelstravels depends ondepends on where the ballwhere the ball hits the bathits the bat**
  • 259. 03/24/14 © 2009 Bahill259 UseUse pp == ∞∞ if you are very risk averseif you are very risk averse • A million dollar house on a river bank: a 100-year flood would cause $900K damage • A million dollar house on a mountain top: a violent thunderstorm would cause $100K damage • Minimax: choose the worst risk, the 100-year flood, and choose the alternative that minimizes it: build your house on the mountain top*
  • 260. 03/24/14 © 2009 Bahill260 UseUse pp = 1 if you are probabilistic= 1 if you are probabilistic** • Risk equals (probability times severity of a 100 year flood) plus (probability times severity of a violent thunderstorm) • Risk(River Bank) = 0.01×0.9 + 0.1×0 = 0.009 • Risk(Mountain Top) = 0.01×0 + 0.1×0.1 = 0.010 • Therefore, build your house on the river bank
  • 261. 03/24/14 © 2009 Bahill261 SynonymsSynonyms • Combining functions are also called  objective functions  optimization functions  performance indices • Combining functions may include probability density functions*
  • 262. 03/24/14 © 2009 Bahill262 Summary about combining functionsSummary about combining functions • Summation of weighted scores is the most common. • Product combining function eliminates alternatives with a zero for any criterion.* • Compromise function with p=∞ uses only one criterion.
  • 263. 03/24/14 © 2009 Bahill263 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria • Weights of importance • Alternative solutions • Evaluation data • Scoring functions • Scores • Combining functions  Preferred alternatives • Sensitivity analysis
  • 264. 03/24/14 © 2009 Bahill264 Select preferred alternativesSelect preferred alternatives • Select the preferred alternatives. • Present the results of the tradeoff study to the original decision maker and other relevant stakeholders. • A sensitivity analysis will help validate your study.
  • 265. 03/24/14 © 2009 Bahill265 SynonymsSynonyms • Preferred alternatives • Recommended alternatives • Preferred solutions
  • 266. 03/24/14 © 2009 Bahill266 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria • Weights of importance • Alternative solutions • Evaluation data • Scoring functions • Scores • Combining functions • Preferred alternatives  Sensitivity analysis
  • 267. 03/24/14 © 2009 Bahill267 PurposePurpose A sensitivity analysis identifies the most important parameters in a tradeoff study.
  • 268. 03/24/14 © 2009 Bahill268 Sensitivity analysesSensitivity analyses • A sensitivity analysis of the tradeoff study is imperative. • Vary the inputs and parameters and discover which ones are the most important. • The Pinewood Derby had 89 criteria. Only three of them could change the preferred alternative.
  • 269. 03/24/14 © 2009 Bahill269 Sensitivity analysis of Pinewood Derby (simulation data)Sensitivity analysis of Pinewood Derby (simulation data)
  • 270. 03/24/14 © 2009 Bahill270 The Do Nothing alternativesThe Do Nothing alternatives • The double elimination tournament was the status quo. • The single elimination tournament was the nihilistic do nothing alternative.
  • 271. 03/24/14 © 2009 Bahill271 Sensitivity analysis of Pinewood Derby (prototype data)Sensitivity analysis of Pinewood Derby (prototype data) [Chart: Overall Score versus Performance Weight for three alternatives: Double elimination; Round robin, best-time; Round robin, points]
  • 272. 03/24/14 © 2009 Bahill272 Semirelative-sensitivity functionsSemirelative-sensitivity functions The semirelative-sensitivity of the function F to variations in the parameter α is the partial derivative of F with respect to α, multiplied by the nominal parameter value α0 and evaluated at the normal operating point (NOP): S̃(F, α) = (∂F/∂α) × α0 at the NOP
  • 273. 03/24/14 © 2009 Bahill273 Tradeoff studyTradeoff study A Generic Tradeoff Study:
Criteria | Weight of Importance | Alternative 1 | Alternative 2
Criterion 1 | Wt1 | S11 | S12
Criterion 2 | Wt2 | S21 | S22
Final Score | | F1 | F2
A Numeric Example of a Tradeoff Study:
Criteria | Weight of Importance | Umpire’s Assistant | Seeing Eye Dog
Accuracy | 0.75 | 0.67 | 0.33
Silence of Signaling | 0.25 | 0.83 | 0.17
Sum of weight times score | | 0.71 (the winner) | 0.29
where F1 = Wt1 × S11 + Wt2 × S21 and F2 = Wt1 × S12 + Wt2 × S22
  • 274. 03/24/14 © 2009 Bahill274 Which parameters could changeWhich parameters could change the recommendations?the recommendations? Use this performance index:* F = F1 - F2 = Wt1 × S11 + Wt2 × S21 - Wt1 × S12 - Wt2 × S22 = 0.420. Compute the semirelative-sensitivity functions.
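A sketch that applies the semirelative-sensitivity definition from slide 272 to this performance index, using the numbers from the example on slide 273. The partial derivatives are written out by hand; nothing else is assumed.

Wt1, Wt2 = 0.75, 0.25          # weights of importance
S11, S21 = 0.67, 0.83          # scores for the Umpire's Assistant
S12, S22 = 0.33, 0.17          # scores for the Seeing Eye Dog

F = Wt1*S11 + Wt2*S21 - Wt1*S12 - Wt2*S22    # performance index F = F1 - F2 = 0.420

# Semirelative sensitivity: the partial derivative of F times the nominal parameter value
sensitivities = {
    'Wt1': (S11 - S12) * Wt1,
    'Wt2': (S21 - S22) * Wt2,
    'S11':  Wt1 * S11,
    'S21':  Wt2 * S21,
    'S12': -Wt1 * S12,
    'S22': -Wt2 * S22,
}
for name, value in sorted(sensitivities.items(), key=lambda kv: -abs(kv[1])):
    print(f'{name}: {value:+.4f}')
# S11 has the largest magnitude, so in this sketch the Accuracy score of the
# winning alternative is the most important parameter.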

Editor's Notes

  1. The most common term is Trade Study. I’ve also seen Trade-off Study. But I prefer Tradeoff Study.
  2. Asterisks (*) in the title or in individual bullets indicate that there are comments for the instructor in the notes section. This slide is intended for the instructor, not the students.
  3. This slide is intended for the instructor, not the students.
  4. Telephones are in your pockets and purses.
  5. Slides with a red title are overview slides, each bullet will be discussed with several subsequent slides.
  6. The purpose of these slides is to show the big picture and where tradeoff studies fit into the big picture. The top level activity is CMMI, DAR is one process area of CMMI, and tradeoff studies are one technique of DAR.
  7. When I give a quote without a source, I am usually the author. I just put the quote marks around it to make it seem more important.
  8. The left column has the CMMI specific practices associated with DAR.
  9. Perform Decision Analysis and Resolution PS0317 Perform Formal Evaluation PD0240 When designing a process, put as many things in parallel as possible.
  10. I seldom make such bold statements.
  11. The task of allocating resources is not a tradeoff study, but it certainly would use the results of a tradeoff study. The quote is probably from CMMI.
  12. Give the students a copy of the letter, which is available at www.sie.arizona.edu/sysengr/slides/tradeoffMath.doc, page 24.
  13. Ref: Decide Formal Evaluation
  14. Ref: Guide Formal Evaluations
  15. Ref: Guide Formal Evaluations
  16. Ref: Establish Evaluation Criteria
  17. Some people will do a tradeoff study when buying a house or a car, but seldom for lesser purchases. All companies should have a repository of good evaluation criteria that have been used. Each would contain the following slots: Name of criterion, Description, Weight of importance (priority), Basic measure, Units, Measurement method, Input (with expected values or the domain), Output, and Scoring function (type and parameters).
  18. Evaluation criteria: Cost, Preparation Time, Tastiness, Novelty, Low Fat, Contains the Five Food Groups, Complements Merlot Wine, Distance to Venue, length of line, messiness, who you are eating with (if it’s your Mormon boss you should forgo the beer) If you get them wrong, you’ll get the rhinoceros instead of the chocolate torte.
  19. *If these very important requirements are performance related, then they are called key performance parameters. **Killer criteria for today’s lunch: must be vegetarian, non alcoholic, Kosher, diabetic,
  20. *The Creativity Tools Memory Jogger, by D. Ritter & M. Brassard, GOAL/QPC 1998, explains several tools for creative brainstorming. **If a requirement cannot be traded off then it should not be in the tradeoff study. ***The make-reuse-buy process is a part of the Decision Analysis and Resolution (DAR) process.
  21. Candidate meals: pizza, hamburger, fish & chips, chicken sandwich, beer, tacos, bread and water. Be sure that you consider left-overs in the refrigerator.
  22. Ref: Select Evaluation Methods
  23. Additional sources include customer statements, expert opinion, historical data, surveys, and the real system.
  24. Ref: Evaluate Alternatives
  25. Ref: Select Preferred Solutions
  26. Ref: Expert Review of Trade off Studies
  27. Note that this slide says that the formal evaluations should be reviewed. It does not say that the results of the formal evaluations should be reviewed.
  28. IPT stands for integrated product team or integrated product development team.
  29. These results might be the preferred alternatives, or they could be recommendations to expand the search, re-evaluate the original problem statement, or negotiate goals and capabilities with the stakeholders. A most important part of these results is the sensitivity analysis.
  30. Slide 46 lists some possible methods. The title of this slide is the example that we will present in the next 18 slides. In these next 18 slides, the phrases in pink will be the DAR specific practices (rectangular boxes of the process diagram) we are referring to. Some people get confused by the recursion in this example. The May-June 2007 issue of the American Scientist says recursive thinking is the only thing that distinguishes humans from animals. I do a tradeoff study to select a tradeoff study tool.
  31. *MAUT was originally called Multicriterion Decision Analysis. The first complete exposition of MCDA was given in 1976 by Keeney, R. L., & Raiffa, H. Decisions With Multiple Objectives: Preferences and Value Tradeoffs, John Wiley, New York, reprinted, Cambridge University Press, 1993. **AHP is often implemented with the software tool Expert Choice.
  32. Sorry if this is confusing, but this example is recursive. MAUT and AHP are both the alternatives being evaluated and the methods being used to select the preferred alternatives.
  33. In this example we are not using scoring functions, therefore the evaluation data are the Scores. The evaluation data are derived from approximations, models, simulations or experiments on prototypes. Typically the evaluation data are normalized on a scale of 0 to 1 before the calculations are done: for simplicity, we have not done that here. The numbers in this example indicate that MAUT is twice as easy to use as AHP.
  34. Weights are usually based on expert opinion or quantitative decision techniques. Typically the weights are normalized on a scale of 0 to 1 before the calculations are done: I did not do that here. How did I we get the weights of importance? I pulled them out of the blue sky. Is there a systematic way to get weights? Yes, there are many. One is the AHP.
  35. If you had ten criteria, then this matrix would be ten by ten.
  36. Remember the numbers in the right column. They will go into the matrix seven slides from here. Expert Choice has two methods for normalization, and they often give slightly different numbers. It might be difficult to square large matrices, so Saaty (1980) gave 4 approximation methods. AHP, exact solution Raise the preference matrix (with forced reciprocals) to arbitrarily large powers, and divide the sum of each row by the sum of the elements of the matrix to get a weights column. (Dr. Bahill’s example, with a power of 2) To compute the Consistency Index: Multiply preference matrix by weights column Divide the elements of this new column by the elements in the weights column Sum the components and divide by the number of components. This gives λmax (called the maximum or principal eigenvalue). The closer λmax is to n, the elements in the preference matrix, the more consistent the result. Deviation from consistency may be represented the Consistency Index (C.I.) = (λmax – n)/(n-1) Calculating the average C.I. from a many randomly generated preference matrices gives the Random Index (R.I.), which depends on the number of preference matrix columns (or rows): 1,0.00; 2,0.00; 3,0.58; 4,0.90; 5,1.12; 6,1.24; 7,1.32; 8,1.41; 9,1.45; 10,1.49; 11,1.51; 12,1.48; 13,1.56; 14,1.57; 15,1.59. The ratio of the C.I. to the average R.I. for the same order matrix is called the Consistency Ratio (C.R.). A Consistency Ratio of 0.10 or less is considered acceptable. Saaty, T. L. The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation. New York, McGraw-Hill, 1980. Saaty gives 4 approximation methods: The crudest: Sum the elements in each row and normalize by dividing each sum by the total of all the sums, thus the results now add up to unity. The first entry of the resulting vector is the priority of the first activity (or criterion); the second of the second activity and so on. Better: Take the sum of the elements in each column and form the reciprocals of these sums. To normalize so that these numbers add to unity, divide each reciprocal by the sum of the reciprocals. Good: Divide the elements of each column by the sum of that column (i.e., normalize the column) and then add the elements in each resulting row and divide this sum by the number of elements in the row. This is a process of averaging over the normalized columns. (Dr. Goldberg’s example) Good: Multiply the n elements in each row and take the nth root. Normalize the resulting numbers.
  37. Obviously you really want the inverse of price. All criteria must be phrased as more is better.
  38. Filling in this table is an in-class exercise
  39. All of the students should get this far. If you think that tastiness is moderately less important than price, then you could put in 1/3 or -3 depending on the software you are using.
  40. Some of the students might do this.
  41. Remember the numbers in the right column. They will go into the matrix two slides from here.
  42. Remember the numbers in the right column. They will go into the matrix on the next slide.
  43. *The AHP software (Expert Choice) can also use the product combining function. Of course there is AHP software (e. g. Expert Choice) that will do all of the math for you. **The original data had only one significant figure, so these numbers should be rounded to one digit after the decimal point.
  44. The AHP software computes an inconsistency index. If A is preferred to B, and B is preferred to C, then A should be preferred to C. AHP detects intransitivities and presents it as an inconsistency index.
  45. The result is robust.
  46. For a tradeoff study with many alternatives, where the rankings change often, a better performance index is just the alternative rating of the winning alternative, F1. This function gives more weight to the weights of importance.
  47. We only care about absolute values. If the sensitivity is positive it means when the parameter gets bigger, the function gets bigger. If the sensitivity is negative it means when the parameter gets bigger, the function gets smaller.
  48. Improve the DAR process. Add some other techniques, such as AHP, to the DAR web course, not done yet Fix the utility curves document, done by Harley Henning Spring 2005 Add image theory to the DAR process, proposed for summer 2007 Change linkages in the documentation system, done Fall 2004 Create a course, Decision Making and Tradeoff Studies, done Fall 2004
  49. This example should be familiar to the students. It shows that tradeoff studies really are done. The web site used to have a really good tradeoff study right up front.
  50. You cannot read this slide. It shows the tree structure of the criteria. It is expanded in the next 4 slides.
  51. This section is the heart of this course. It is intended to teach the students how to do a good tradeoff study.
  52. so that the decision maker can trust the results of a tradeoff study
  53. The God Anubis weighing of the heart of the dead against Maat's feather of Truth. If your heart doesn’t balance with the feather of truth, then the crocodile monster eats you up.
  54. Back in the Image Theory section we said there were two types of decisions. Adoption decisions determine whether to add new goals to the trajectory image or new plans to the strategic image. This could include Allocating resources. Progress decisions determine whether a plan is making progress toward achieving a goal. This could include Making plans.
  55. The complete design of a Pinewood Derby is given in chapter 5 of Chapman, W. L., Bahill, A. T., and Wymore, A.W., Engineering Modeling and Design, CRC Press Inc., Boca Raton, FL, 1992, which is located at http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf
  56. This is only a fragment of the Pinewood Derby tradeoff study.
  57. In football and baseball the managers do tradeoff studies to select each play, except at the beginning of some football games where they have a preplanned sequence of plays. In basketball they select plays with tradeoff studies only a few times per game. One of my friends (from India) argued with me about the selecting a husband or wife comment.
  58. You should do tradeoff studies at the very beginning of the design process, but you also do tradeoff studies throughout the whole system life cycle. The 80-20 principle was invented by Juran and attributed to Pareto in the 1st ed of Juran’s Quality Control Handbook. Much later in his article, Mea Culpa, he comments on the development of his idea, and notes that many quality colleagues urged him to correct the attribution. The original data for this slide come from a Toyota auto manufacturing report, from around 1985.
  59. The last bullet provides a segue to the next topic, “Well how do people think?”
  60. Assume you are going to lunch in Little Italy or on Coronado Island and you don’t know any of the restaurants in the area. You drive along until you get “close enough” and then decide to take the next parking space you see. You don’t do a tradeoff study of parking lots and different on-street areas. You park your car. Then you walk along and look at restaurant-1. Let’s say that you decide that it is not satisfactory. You look at restaurant-2. Let’s say that you decide that it is not satisfactory. You look at restaurant-3. Let’s say that you find it to be satisfactory. But you keep on looking. You look at restaurant-4 and you compare it to restaurant-3. Let’s say that you decide that restaurant-3 is better than restaurant-4. You look at restaurant-5 and you compare it to restaurant-3. Let’s say that you decide that restaurant-3 is better than restaurant-5. You look at restaurant-6 and you compare it to restaurant-3. Let’s say that you decide that restaurant-3 is better than restaurant-6. Now let’s assume that your friends say that they are hungry and tired and they don’t want to look any more. You probably go back to restaurant-3. You never considered doing a tradeoff study of all six restaurants. At the most you did pair-wise comparisons.
  61. Driving down a freeway looking for a gas station, I might see a gas station with a price of $2.60 per gallon. I would say that is too expensive. The next gas station might ask $2.65, I would also pass that one by. However, I might start to run out of gas, and then see a station offering $2.70 per gallon. I would take it, because the expense of going back to the first station would be too high. T. D. Seeley, P. K. Visscher and K. M. Passino, Group Decision Making in Honey Bee Swarms, American Scientist, 94(3): 220-229, May-June 2006.
  62. Customers of eBay might use either strategy. At first I asked my wife and niece to look for Tinkertoy kits on eBay and let me know what was available. Then I switched strategies and said, Buy any kit you see that contains a yellow figure or a red lid.
  63. Often we need a burning platform to get people to move.
  64. There is one goal and everyone agrees upon it. DMs have unlimited information and the cognitive ability to use it efficiently. They know all of the opportunities open to them and all of the consequences. The optimal course of action can be described and it will, in the long run, be more profitable than any other. A synonym often used for prescriptive model is normative model. In contrast a descriptive model explains what people actually do. Von Neumann and Morgenstern (1947)
  65. Systems engineers do not seek optimal designs, we seek satisficing designs. Systems engineers are not philosophers. Philosophers spend endless hours trying to phrase a proposition so that it can have only one interpretation. SEs try to be unambiguous, but not at the cost of never getting anything written. H. A. Simon, A behavioral model of rational choice, Quarterly Journal of Economics, 59, 99-118, 1955.
  66. Our first example of irrationality is that often we have wrong information in our heads. What American city is directly north of Santiago Chile? Most Americans would say that New Orleans or Detroit is north of Santiago, instead of Boston Or, if you travel from Los Angeles to Reno Nevada, in what direction would you travel? Most Americans would suggest that Reno is northeast of LA, instead of northwest. Which end of the Panama canal is farther West the Atlantic side or the Pacific side? Most Americans would say the Pacific. These examples were derived from Massimo Piattelli-Palmarini, Inevitable illusions: how mistakes of reason rule our minds, John Wiley & Sons, 1994.
  67. The previous slide gave examples of one type of cognitive illusion. In the next slides we will give examples of a few more types. A couple dozen more types are given in Massimo Piattelli-Palmarini, Inevitable illusions: how mistakes of reason rule our minds, John Wiley & Sons, 1994.
  68. Probably the most famous and most studied optical illusion was created by German psychiatrist Franz Müller-Lyer in 1889. Which of the two horizontal line segments is longer? Although your visual system tells you that the one on the left is longer, a ruler will confirm that they are equal in length. Do you think that the slide’s title is centered? It is.
  69. Stare at the black cross. When do the green dots come from? This illusion is from http://www.patmedia.net/marklevinson/cool/cool_illusion.html The illusion only works in PowerPoint presentation mode. However if you stare at the black " +" in the centre, the moving dot turns to green.Now, concentrate on the black " + " in the centre of the picture. After a short period, all the pink dots will slowly disappear, and you will only see only a single green dot rotating. Another good web site for visual illusions is http://www.socsci.uci.edu/~ddhoff/
  70. The upper-left quadrant is defined as rational behavior. EV means expected value. SEV is subjective expected value. In the next slides we will show how human behavior differs from rational behavior. Edwards, W., "An Attempt to Predict Gambling Decisions," Mathematical Models of Human Behavior, Dunlap, J.W. (Editor), Dunlap and Associates, Stamford, CT, 1955, pp. 12-32.
  71. People overestimate events with low probabilities, like being killed by a terrorist or in an airplane crash, and underestimate high probability events, such as adults dying of cardiovascular disease. The existence of state lotteries depends upon such overestimation of small probabilities. At the right side of this figure, the probability of a brand new car starting every time is very close to 1.0. But a lot of people put jumper cables in the trunk and buy memberships in AAA. M. G. Preston and P. Baratta, An experimental study of the auction-value of an uncertain outcome, American Journal of Psychology, 61, pp. 183-193, 1948. Kahneman, D. and Tversky, A., Prospect Theory: An Analysis of Decision under Risk, Econometrica 46 (2) (1979), 171-185. Tversky and Kahneman, (1992) Drazen Prelec, in D. Kahneman & A. Tversky (Eds.) “Choices, Values and Frames” (2000) Animals exhibit similar behavior. People overestimate low probabilities and do not distinguish much between intermediate probabilities. Rats show this pattern too (Kagel 1995). People are more risk-averse when the set of gamble choices is better. But humans also violate this pattern, and so do rats (Kagel 1995). People also exhibit “context-dependence”: Whether A is chosen more often than B can depend on the presence of an irrelevant third choice C (which is dominated and never chosen). Context dependence means people compare choices within a set rather than assigning separate numerical utilities. Honeybees exhibit the same pattern (Shafir, et al. 2002). Animals are also risk averse, as defined about a dozen slides from here. John Kagel, Economic Choice Theory: An Experimental Analysis of Animal Behavior, Cambridge University Press, 1995. S. Shafir, T. M. Waite and B. H. Smith. “Context-dependent violations of rational choice in honeybees (Apis mellifera) and gray jays (Perisoreus canadensis).” Behavioral Ecology and Sociobiology, 2002, 51, 180-187. Every year 50 Americans die of cardiovascular disease for every one that dies of AIDS.
  72. Humans are not good at computing probabilities, as is illustrated by the Monty Hall Paradox. This paradox was invented by Martin Gardner and published in his Scientific American column in 1959. It is called the Monty Hall paradox because of its resemblance to the TV show Let’s Make a Deal. I have taken this version from Massimo Piattelli-Palmarini, Inevitable illusions: how mistakes of reason rule our minds, John Wiley & Sons, 1994. I am running a game that I can repeat hundreds of times. On a table in front of me are a stack of ten-dollar bills and three identical boxes, each with a lid. You are my subject. Here are the rules for each game. You leave the room and while you are out, I put a ten-dollar bill in one of the three boxes. Then I close the lids on the boxes. I know which box contains the ten-dollar bill, but you don’t. Now I invite you back into the room and you try to guess which box contains the money. If you guess correctly, you get to keep the ten-dollar bill.
  73. Each game is divided into two phases. In the first phase, you point to your choice. (You cannot not open, lift, weigh, shake or manipulate the boxes.) The boxes remain closed.
  74. After you make your choice, I open one of the two remaining boxes. I will always open an empty box (remember that I know where the ten-dollar bill is).
  75. Having seen one empty box (the one that I just opened) you now see two closed boxes, one of which contains the ten-dollar bill.
  76. Leave this slide up for a while and let people discuss what they think.
  77. This explanation is from Massimo Piattelli-Palmarini, Inevitable illusions: how mistakes of reason rule our minds, John Wiley & Sons, 1994.
  78. This table explains three bets: A, B and C. The p’s are the probabilities, the x’s are the outcomes, is the mean and is the variance. This table shows, for example, that half the time bet C would pay $1 and the other half of the time it would pay $19. Thus, this bet has an expected value of $10 and a variance of $9. This is a comparatively big variance, so the risk (or uncertainty) is said to be high. Most people prefer the A bet, the certain bet. To model risk averseness across different situations the coefficient of variability is often better than variance. Coefficient of variability = (Standard Deviation) / (Expected Value). In choosing between alternatives that are identical with respect to quantity (expected value) and quality of reinforcement, but that differ with respect to probability of reinforcement humans, rats (Battalio, Kagel and MacDonald, 1985), bumblebees (Real, 1991), honeybees (Shafir, Watts and Smith, 2002) and gray jays (Shafir, Watts and Smith, 2002) prefer the alternative with the lower variance. To avoid the confusion caused by system engineers and decision theorist using the word risk in two different ways, we can refuse to use the word risk and instead use ambiguity, uncertainty and hazards. J. H. Kagel, R. C. Battalio and L. Greene, Economic Choice Theory: An Experimental Analysis of Animal Behavior, Cambridge University Press, 1995.
  79. A little while ago, a wild fire was heading toward our house. We packed our car with our valuables, but we did not have room to save everything, so I put my wines in the swimming pool. We put the dog in the car and drove off. When we came back, the house was burned to the ground, but the swimming pool survived. However, all of the labels had soaked off of the wine bottles. Tonight I am giving a dinner party to celebrate our survival. I am serving mushrooms that I picked in the forest while we were waiting for the fire to pass. There may be some hazard here, because I am not a mushroom expert. We will drink some of my wine: therefore, there is some uncertainty here. You know that none of my wines are bad, but some are much better than others. Finally I tell you that my sauce for the mushrooms contains saffron and oyster sauce. This produces ambiguity, because you probably do not know what these ingredients taste like. How would you respond to each of these choices? Hazard: Would you prefer my forest picked mushrooms or portabella mushrooms from the grocery store? Uncertainty: Would you prefer one of my wines or a Kendall-Jackson merlot? Ambiguity: Would you prefer my saffron and oyster sauce or marinara sauce? Decisions involving these three concepts are probably made in different parts of the brain. Hsu, Bhatt, Adolphs, Tranel and Camerer [2005] used the Ellsberg paradox to explain the difference between ambiguity and uncertainty. They gave their subjects a deck of cards and told them it contained 10 red cards and 10 blue cards (the uncertain deck). Another deck had 20 red or blue cards but the percentage of each was unknown (the ambiguous deck). The subjects could take their chances drawing a card from the uncertain deck: if the card were the color they predicted they won $10, else they got nothing. Or they could just take $3 and quit. Most people picked a card. Then their subjects were offered the same bets with the ambiguous deck. Most people took the $3 avoiding the ambiguous decision. Hsu et al. recorded functional magnetic resonance images (fMRI) of the brain while their subjects made these decisions. While contemplating decision about the uncertain deck, the dorsal striatum showed the most activity and when contemplating decisions about the ambiguous deck the amygdala and the orbitofrontal cortex showed the most activity. Ambiguity, uncertainty and hazards are three different things. But people prefer to avoid all three.
  80. This slide also shows saturation. This slide also shows the importance of the reference point: $10 to a poor man means a lot more than $10 to a rich man. Kahneman, D. and Tversky, A., Prospect Theory: An Analysis of Decision under Risk, Econometrica 46 (2) (1979), 171-185. Massimo would prefer that we label the ordinate and abscissa as subjective worth and numerical value.
  81. The $2 bet means I put down a $2 bill and flip a coin to see if you get it or not. The $1 bet means I give you one dollar and a state lottery ticket. If the lottery ticket is a winner, you keep the $1 million, else you keep the dollar bill. The $3 bet has consequences that you might have to give me two million dollars. The $1 bet has the highest utility for most engineers. The message of this slide can be dramatically demonstrated with two $2 bills, a coin, two $1 bills, a lottery ticket and the last two slides of this presentation.
  82. The $2 bet means I put down a $2 bill and flip a coin to see if you get it or not. The $1 bet means I give you one dollar and a state lottery ticket. If the lottery ticket is a winner, you keep the $1 million, else you keep the dollar bill. The $3 bet has consequences that you might have to give me two million dollars. The $1 bet has the highest utility for most engineers. The message of this slide can be dramatically demonstrated with two $2 bills, a coin, two $1 bills, a lottery ticket and the last two slides of this presentation.
  83. The $2 bet means I put down a $2 bill and flip a coin to see if you get it or not. The $1 bet means I give you one dollar and a state lottery ticket. If the lottery ticket is a winner, you keep the $1 million, else you keep the dollar bill. The $3 bet has consequences that you might have to give me two million dollars. The $1 bet has the highest utility for most engineers. The message of this slide can be dramatically demonstrated with two $2 bills, a coin, two $1 bills, a lottery ticket and the last two slides of this presentation.
  84. The $2 bet means I put down a $2 bill and flip a coin to see if you get it or not. The $1 bet means I give you one dollar and a state lottery ticket. If the lottery ticket is a winner, you keep the $1 million, else you keep the dollar bill. The $3 bet has consequences that you might have to give me two million dollars. The $1 bet has the highest utility for most engineers. The message of this slide can be dramatically demonstrated with two $2 bills, a coin, two $1 bills, a lottery ticket and the last two slides of this presentation.
  85. The $1 bet has the highest utility for most engineers.
  86. Savage (1954)
  87. Kahneman got the Nobel Prize in 2002 for his part in developing Prospect Theory. Prospect theory is often called a descriptive model for human decision making.
  88. In the last two dozen slides, we showed how human behavior differed from rational behavior. Next we are going to show that tradeoff studies can help move you toward more rational decisions.
  89. Evaluation data for evaluation criteria come from approximations, product literature, analysis, models, simulations, experiments and prototypes.
  90. This is a template that can be used for criteria.
  91. This example comes from the Pinewood Derby study located at http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf A lot of confusion has been caused by failure to differentiate between the name of the criterion and the basic measure for that criterion. As in this case, the words are often very similar. At this point it might also be useful to differentiate between metric and measure. Measure. A measure indicates the degree to which an entity possesses and exhibits a quality or an attribute. A measure has a specified method, which when executed produces values (or metrics) for the measure. Metric. A measured, calculated or derived value (or number) used by a measure to quantify the degree to which an entity possesses and exhibits a quality or an attribute. Measurement. A value obtained by measuring, which makes it a type of metric.
92. Spend some time on this criterion, because we will bring it back later. Monotonic increasing, lower=0, baseline=90, slope=0.1, upper=100, plot limits 70 to 100.
  93. This example comes from the Pinewood Derby study located at http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf This second example was chosen to highlight the difference between the name of the criterion and the basic measure for that criterion. This Pinewood Derby chapter is from W. L. Chapman, A. T. Bahill and A. W. Wymore, Engineering modeling and design, CRC Press, Boca Raton, 1992. The reason we are using such an old reference is to show that we didn’t just jimmy up the example. It has been around for a long time.
94. Of course, it depends on the circumstances. If availability were a probabilistic value, then it could be used. Perhaps like going to the library to get a copy of the latest best-selling book.
95. These are sometimes hierarchical, with attributes, criteria and then objectives. But an SEI paper says criteria contain attributes and objectives.
96. Other MoPs could be overall GPA, GPA in the major, extracurricular activities, summer internships, number of undergraduate credits, number of graduate credits, honorary societies, special awards, semesters in the program, and so on.
97. From left to right, Moe Howard, Jerry (Curly) Howard and Larry Fine.
98. If you are not using a scoring function, then instead of Total Life Cycle Cost, use its negative or its reciprocal.
  99. http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf
  100. When we showed people the top curve and asked, “How would you feel about an alternative that gave 90% happy scouts?” they typically said, “It’s pretty good.” In contrast, when we showed people the bottom curve and asked, “How would you feel about an alternative that gave 10% happy scouts?” they typically said, “It’s not very good.” When we allowed them to change the parameters, they typically pushed the baseline for the Percent Unhappy Scouts scoring function to the left.
  101. The solution to this problem is to group all of the husband’s criteria into one higher level criterion called power.
102. The deprecated words maximize and minimize should not be used in requirements, but they are OK in goals. On the other hand, we could rewrite this as "Selection criteria: The preferred alternative will be the one that produces the largest amount of food."
103. I would like to have a rich, intransitive uncle. Assume that I have an Alfa Romeo and a BMW, and my uncle has a Corvette. I would love to hear him say, "I prefer your BMW to my Corvette, therefore I will give you $2000 and my Corvette for your BMW." Next he might say, "I prefer your Alfa Romeo to my BMW, therefore I will give you $2000 and my BMW for your Alfa Romeo." And finally I would wait with bated breath for him to say, "I prefer your Corvette to my Alfa Romeo, therefore I will give you $2000 and my Alfa Romeo for your Corvette." We would now have our original cars, but I would be $6000 richer. I would call him Uncle Money Pump. This example can start with any car and go in either direction. The only trick is that you must go in a circle.
  104. The NAND operator is not associative.
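If anyone questions the claim, here is a minimal Python check of non-associativity, assuming the usual definition NAND(a, b) = NOT(a AND b):

```python
from itertools import product

def nand(a, b):
    """NAND truth function: true unless both inputs are true."""
    return not (a and b)

# NAND is not associative: (a NAND b) NAND c can differ from a NAND (b NAND c).
for a, b, c in product([False, True], repeat=3):
    left = nand(nand(a, b), c)
    right = nand(a, nand(b, c))
    if left != right:
        print(f"a={a}, b={b}, c={c}: (a NAND b) NAND c = {left}, a NAND (b NAND c) = {right}")
```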
  105. The “A Prioritization Process” paper explains why each of these aspects is important. Read that paper before discussing this slide. Botta, Rick, and A. Terry Bahill, “A Prioritization Process,” Engineering Management Journal, 19:4 (2007), pp. 20-27.
  106. Mnemonic: ordinal is ordering, as in rank ordering.
107. *Those bullets are ORed.
*The systems engineer should derive straw-man priorities for all of the criteria. These priorities shall be numbers (usually integers) in the range of 0 to 10, where 10 is the most important. Then he or she should meet with the customer (however many people that might be). For each criterion, the systems engineer should lead a discussion of the criteria in the above table and then try to get a consensus for the priority. In the first pass, he or she might ask each stakeholder to evaluate each criterion and take the average value. However, if the customer only looks at one or two criteria and says the criterion is a 10, then it's a 10.
*Yes, rank ordering gives ordinal numbers, not cardinal numbers, but the technique often works well.
*The systems engineer can help the customer make pair-wise comparisons of all the criteria and then use the analytic hierarchy process to derive the priorities (see the sketch after this note). This would not be feasible without a commercial tool such as Expert Choice. This tool is discussed in Ref: COTS-Based Engineering Design of a Tradeoff Study Tool.
*One algorithmic technique is on Karl Wiegers' web site.
*If all of the alternatives are very close on a criterion, then you might want to discount (give a low weight to) that criterion.
Many other methods for deriving weights exist, including the ratio method [Edwards, 1977], the tradeoff method [Keeney and Raiffa, 1976], swing weights [Kirkwood, 1992], rank-order centroid techniques [Buede, 2000], and paired-comparison techniques discussed in Buede [2000] such as the Analytic Hierarchy Process [Saaty, 1980], trade offs [Watson and Buede, 1987], the balance beam [Watson and Buede, 1987], and judgments and lottery questions [Keeney and Raiffa, 1976]. These methods are more formal and some have an axiomatic basis. For a comparison of weighting techniques, see Borcherding, Eppel, and von Winterfeldt [1991].
K. Borcherding, T. Eppel and D. von Winterfeldt, Comparison of weighting judgments in multiattribute utility measurement, Management Science, 37: 1603-1619, 1991.
D. Buede, The Engineering Design of Systems, John Wiley, New York, 2000.
W. Edwards, How to use multiattribute utility analysis for social decision making, IEEE Transactions on Systems, Man, and Cybernetics, SMC-7: 326-340, 1977.
R. L. Keeney and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, John Wiley, New York, 1976.
C. W. Kirkwood, Strategic Decision Making: Multiobjective Decision Analysis with Spreadsheets, Duxbury Press, Belmont, 1997.
T. L. Saaty, The Analytic Hierarchy Process, McGraw-Hill, New York, 1980.
S. R. Watson and D. M. Buede, Decision Synthesis: The Principles and Practice of Decision Analysis, Cambridge University Press, Cambridge, UK, 1987.
The method of swing weighting is based on comparing how the swing from 0 to 10 on one preference scale compares with the 0 to 10 swing on another scale. Assessors should take into account both the difference between the least and most preferred options and how much they care about that difference. For example, in purchasing a car, you might consider its cost to be important. However, in a particular tradeoff study for a new car, you might have narrowed your choice to a few cars. If they differ in price by only $400, you might not care very much about price. That criterion would receive a low weight because the difference between the highest- and lowest-priced cars is so small.
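For the pair-wise comparison bullet above, here is a minimal sketch of how the analytic hierarchy process turns a comparison matrix into criterion weights. The 3x3 matrix, its values, and the number of criteria are made up for illustration; a real study would use a dedicated tool such as Expert Choice.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three criteria (Saaty 1-9 scale).
# Entry [i, j] says how much more important criterion i is than criterion j;
# the matrix is reciprocal, so A[j, i] = 1 / A[i, j].
A = np.array([
    [1.0,  3.0, 5.0],
    [1/3., 1.0, 2.0],
    [1/5., 1/2., 1.0],
])

# AHP priorities are the principal right eigenvector of A, normalized to sum to 1.
eigenvalues, eigenvectors = np.linalg.eig(A)
principal = np.argmax(eigenvalues.real)
weights = np.abs(eigenvectors[:, principal].real)
weights = weights / weights.sum()
print("criterion weights:", np.round(weights, 3))

# Consistency ratio flags sloppy judgments; CR < 0.1 is the usual rule of thumb.
n = A.shape[0]
lambda_max = eigenvalues.real[principal]
CI = (lambda_max - n) / (n - 1)
RI = 0.58                      # Saaty's random index for n = 3
print("consistency ratio:", round(CI / RI, 3))
```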
  108. D. Redelmeier, and E. Shafir, Medical decision making in situations that offer multiple alternatives, Journal of the American Medical Association, 273(4) (1995), 302-305.
109. A sacred cow is an idea that is unreasonably held to be immune to criticism. Saving the spotted owl, the gnatcatchers and the Ferruginous Pygmy-Owl, and putting out all forest fires have been sacred cows to environmentalists. Most things that are termed politically correct are sacred cows. In Tucson, all transportation proposals contain the light rail alternative, because the lobby for this technology is very strong.
  110. *G. A. Miller, The magical number seven, plus or minus two: some limits on our capacity for processing information, The Psychological Review, 1956, vol. 63, pp. 81-97, www.well.com/user/smalin/miller.html. ** D.A. Redelmeier and E. Shafir, Medical decision making in situations that offer multiple alternatives, JAMA, Jan. 25, 1995, 273 (4) 302-305.
111. CAIV (Cost As an Independent Variable) is only used in the requirements phase. After the requirements are set, it is too late.
112. Near the end of this process the data will be quantitative and objective. But in the beginning they will be based on the personal opinions of domain experts. There are techniques to help elicit such data from the experts. The literature on this topic is called preference elicitation (see Chen and Pu, 20xx).
  113. Cardinal measures indicate size or quantity. They were introduced about 15 slides ago. Fuzzy numbers will be discussed about 40 slides from here.
  114. Input of 88% produces output of 0.31. Input of 91% produces output of 0.6.
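As a plausibility check on these numbers, a simple logistic curve with the slide-92 parameters (baseline = 90, slope = 0.1 at the baseline) reproduces them approximately. This is only a sketch, not Wymore's SSF1, which additionally respects the lower and upper limits:

```python
import math

def logistic_score(x, baseline=90.0, slope=0.1):
    """Approximate monotonic-increasing scoring function.

    The score is 0.5 at the baseline and its derivative there equals `slope`.
    (A stand-in for Wymore's SSF1, which also clamps the curve between the
    lower and upper limits.)
    """
    k = 4.0 * slope          # logistic steepness that gives d(score)/dx = slope at the baseline
    return 1.0 / (1.0 + math.exp(-k * (x - baseline)))

print(round(logistic_score(88), 2))   # ~0.31, matching the slide
print(round(logistic_score(91), 2))   # ~0.60, matching the slide
```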
115. The Bad example is just a linear transformation. You can do better than that. The output is intended to be cardinal (not ordinal) numbers. That is, an output of 0.8 is intended to be twice as important as an output of 0.4.
116. The purpose of this slide is to show that different combining methods can produce different preferred alternatives.
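A small made-up example of the point: with the same scores and weights, the sum, product, and minimum combining functions can prefer different alternatives. All numbers below are hypothetical.

```python
# Two hypothetical alternatives scored on two criteria (scores already in [0, 1]),
# with equal weights.  Different combining functions can pick different winners.
weights = [0.5, 0.5]
alternatives = {
    "A": [0.90, 0.40],
    "B": [0.60, 0.65],
}

def sum_combine(scores, w):
    return sum(wi * si for wi, si in zip(w, scores))

def product_combine(scores, w):
    result = 1.0
    for wi, si in zip(w, scores):
        result *= si ** wi          # weighted product (Nash-product style)
    return result

def min_combine(scores, w):
    return min(scores)              # the weakest criterion decides

for name, combine in [("sum", sum_combine), ("product", product_combine), ("min", min_combine)]:
    ratings = {alt: round(combine(s, weights), 3) for alt, s in alternatives.items()}
    winner = max(ratings, key=ratings.get)
    print(f"{name:7s} combining: {ratings}  ->  preferred alternative {winner}")
```

With these numbers the sum prefers alternative A, while the product and the minimum prefer alternative B.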
117. When I was living in Pittsburgh, I went to the Carnegie Institute. I saw the fossil skeleton of a Brontosaur (that is what it was called at that time). I asked the guard, "How old are those dinosaur bones?" He replied, "They are 70 million, four years and six months old." "That is an awfully precise number," I said. "How do you know their age so precisely? Is there a new form of radiocarbon dating?" The guard answered, "Well, they told me that those dinosaur bones were 70 million years old when I started working here, and that was four and a half years ago." This story is an example of false precision. Often students list their results with six digits after the decimal point, because that is the default on their calculators. You should not accept the default value. Deliberately choose the number of digits after the decimal point. In my last slide I chose two, because that was necessary and sufficient to show the differences between the alternatives. The number of digits to print can also be determined by the technique of significant figures.
  118. Monotonic decreasing, lower=0, baseline=3, slope=-0.34, upper=10, plot limits 0 to 6.
  119. Monotonic increasing, lower=0, baseline=3, slope=0.34, upper=10, plot limits 0 to 10.
  120. Please do not try to explain this equation. It is only here in case someone asks about it. SSF1 is the first of twelve Standard Scoring Functions.
  121. If you could reduce the probability of loss of life for operators of your system from one in a million to one in ten million, I’m sure your customer would be happy. Using logarithms is a way to show this.
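A quick way to make this point numerically (the probabilities are just the ones quoted above): on a linear scale the improvement is invisible, while on a logarithmic scale it is a full order of magnitude.

```python
import math

p_before = 1e-6      # probability of loss of life, before the improvement
p_after  = 1e-7      # after the improvement

# Linear view: the change looks negligible.
print(p_before - p_after)                             # ~9e-07

# Logarithmic view: one full order of magnitude of improvement.
print(math.log10(p_before) - math.log10(p_after))     # 1.0
```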
122. That slide is spoken, "You can add dollars and pounds, but you can't add dollars and pounds" (one "pounds" is currency, the other is a unit of weight). Therefore you need scoring functions in order to combine apples and oranges.
123. An atomic bomb (actually a thermonuclear weapon) costs a billion dollars and lasts a nanosecond.
  124. Wymore (1993) calls the criteria space the buildability cotyledon.
  125. These criteria are for selecting a printer for a computer. Cost is the inverse of selling price, because I didn’t want to use scoring functions yet. There will be lots of printers in the lower left area, but they are all inferior. There will be no printers in the upper right corner, because this is the infeasible region. The best alternatives will be on the quarter-circle.
  126. We cover these slides real fast. The detail is not important.
127. By coincidence, d(Sum)/dx = −d(Product)/dx.
128. Alternatives on a circle could be cost and pages per minute for a laser printer. Alternatives on a straight line could be sharing a pie: pie for you and pie for me. Alternatives on a hyperbola could be various soda pop packages or human muscle.
129. This sign was unknowingly based on a cartoon by Dana Fradon published in the New Yorker in 1976. Clearview is the font now used by the U.S. Federal Highway Administration. This is an approximation of it.
130. The Sum is simpler if you are going to compute sensitivity functions, because it has fewer interaction terms. The product combining function is often called the Nash Product after Nobel Laureate John Nash, who used this function in 1950. It is also called the Nash Bargaining Solution.
The following three items are analogous: risk is the probability of occurrence times the severity or consequences of the event; in the sum combining function, we use the input value times the weight; subjective expected utility is the probability times the utility.
Transmission of light in an optical system is the product of the individual optical element transmissions. Probability chains are often multiplicative. For example, the probability of a missile kill is the product of the probability of target detection, the probability of successful launch, the probability of successful guidance, the probability of warhead detonation, and the probability of killing a critical area of the target.
  131. Minimax is not XOR, because it doesn’t alternate between criteria. It chooses just one criterion.
132. They change the algorithm every year. See www.bcsfootball.org. In contrast, NASCAR uses the first 26 races to narrow down the field. After the first 26 races, the top ten drivers plus any other drivers within 400 points of the leader are selected to compete in the last ten races, which determine the champion.
  133. The next dozen slides will discuss this combining function.
134. Which athlete has the most championship rings? Yogi Berra, with 10? No, Bill Russell with 11 in the NBA and 2 in the NCAA, all as a player. John Wooden has 12 as a college basketball coach. Joe DiMaggio had 9 as a player. Phil Jackson and Red Auerbach each have 9 NBA rings. Bob Hayes is the only person with an Olympic gold medal and a Super Bowl ring. The Pittsburgh Steelers won 4 in the 1970s.
135. Use minimin to design a bat for Alex Rodriguez, because he always hits the ball right on the sweet spot. Use minimax for Terry Bahill. The ball won't go as far for a perfect hit, but it will not be a disaster for a mishit.
  136. This decision to build on the mountain top is not based on expected values. Assume one violent thunderstorm is expected per decade. The expected loss for the mountain top is $10K/year, whereas the expected loss for the river bank is only $9K/year.
  137. This slide uses the numbers from the previous slide.
  138. Probability density functions are often used to help obtain evaluation data. For instance, for a particular alternative, the average response time may be given by a certain type of a probability density function with a specified mean and variance. In designing system experiments, we could say the system input shall be determined by a certain type of a probability density function with a specified mean and variance.
139. I don't recommend using the product combining function for the whole database. I think it would be appropriate for a criterion of benefit-to-cost ratio.
140. In this tradeoff study the Cost and Performance criteria were summed together with weights that totaled 1.0: (weight_cost × cost score) + (weight_performance × performance score) = alternative rating, where weight_cost + weight_performance = 1.0. These functions were derived from simulations. They show that for resource-poor packs the single elimination race is best, whereas for resource-rich packs the round robins are best.
141. These functions were derived from prototype races. They show that for resource-poor packs the double elimination race is best, whereas for resource-rich packs the round robins are best.
  142. For a tradeoff study with many alternatives, where the rankings change often, a better performance index is just the alternative rating of the winning alternative, F1. This function gives more weight to the weights of importance.
  143. The most important parameter is S11. Therefore, we should gather more experimental data and interview more domain experts for this parameter: we should spend extra resources on this parameter. The minus signs for S12 and S22 merely mean that an increase in either of these parameters will cause a decrease in the performance index. Note that, for example, because the sensitivity of F with respect to Wt1 contains S11 and S12, there will be interaction terms.
144. S^F_S11 = 20 × ((0.735 − 0.29) − (0.71 − 0.29)) = 20 × 0.025 = 0.5
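One way to read this arithmetic is as a finite-difference estimate of a semirelative sensitivity, S = (∂F/∂α) × α₀. The sketch below uses that interpretation with hypothetical parameter names and values; it is not a reconstruction of the spreadsheet behind this slide.

```python
def semirelative_sensitivity(F, alpha0, delta=0.01, **fixed):
    """Estimate the semirelative sensitivity S = (dF/d_alpha) * alpha0 of a
    performance index F with respect to one parameter, by a forward finite
    difference.  `F` takes the parameter as its first argument; `fixed` holds
    the other (hypothetical) parameters.
    """
    dF = F(alpha0 + delta, **fixed) - F(alpha0, **fixed)
    return (dF / delta) * alpha0

# Hypothetical performance index: the rating of the winning alternative,
# a weighted sum of two criterion scores.
def rating(s11, s12=0.6, w1=0.75, w2=0.25):
    return w1 * s11 + w2 * s12

print(round(semirelative_sensitivity(rating, alpha0=0.5), 3))   # 0.375 with these made-up numbers
```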
145. We have k = 2 criteria (cost and quantity) and i = 8 alternatives. The 3-liter bottle may not look like it is closest to the Ideal Point because the horizontal and vertical scales are not the same.
  146. This table used the modified Minkowski metrics.
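For reference, the standard weighted Minkowski distance to the Ideal Point can be computed as below; the "modified" metric used in the table may differ, and the scores and weights here are hypothetical.

```python
def minkowski_distance(alternative, ideal, weights, p):
    """Weighted Minkowski distance from an alternative's criterion scores to the Ideal Point.
    p = 1 gives the city-block metric, p = 2 the Euclidean metric; larger p
    puts more emphasis on the worst criterion.
    """
    total = sum(w * abs(a - i) ** p for w, a, i in zip(weights, alternative, ideal))
    return total ** (1.0 / p)

# Hypothetical normalized scores on k = 2 criteria (cost, quantity) for two package sizes.
ideal = [1.0, 1.0]
weights = [0.5, 0.5]
alternatives = {"3-liter bottle": [0.90, 0.80], "12-ounce can": [0.60, 0.95]}

for p in (1, 2, 4):
    distances = {alt: round(minkowski_distance(s, ideal, weights, p), 3)
                 for alt, s in alternatives.items()}
    best = min(distances, key=distances.get)
    print(f"p = {p}: {distances}  ->  closest to the Ideal Point is the {best}")
```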
  147. You do not have to present all three of these decision tree examples.
  148. The baseball manager must make a decision about his pitcher. He could use a tradeoff study, as illustrated above, or a decision tree as shown in the next slide.
149. In Abbott and Costello's famous routine "Who's on First?", Who was the first baseman and Tomorrow was the pitcher, but I'm getting too silly now.
  150. These data are for Barry Bonds. J. P. Reiter, Should teams walk or pitch to Barry Bonds? Baseball Research Journal, 32, (2004), 63-69. J. F. Jarvis, An analysis of the intentional base-on-ball, Presented at SABR-29, Phoenix, AZ, 1999 ( http://knology.net/~johnfjarvis/IBBanalysis.html ) Maybe we should first ask if we are playing in San Francisco’s AT&T park, where the average wind speed is 10 mph from home plate toward right field.
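A minimal rollback of the pitch-or-walk decision tree, with placeholder probabilities and run values rather than the published Bonds data:

```python
# Decision: pitch to the batter or issue an intentional walk.
# Each chance node lists (probability, expected runs allowed) pairs; these
# numbers are hypothetical placeholders, not the Reiter or Jarvis figures.
pitch_outcomes = [
    (0.10, 1.6),   # home run or extra-base hit
    (0.25, 0.9),   # single or walk anyway
    (0.65, 0.2),   # out
]
walk_outcomes = [
    (1.00, 0.7),   # runner on first, next batter up
]

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

ev = {"pitch": expected_value(pitch_outcomes), "walk": expected_value(walk_outcomes)}
best = min(ev, key=ev.get)       # the manager wants to minimize expected runs allowed
print(ev, "->", best)
```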
  151. Getting into the decision maker’s head is a segue to the next slide.
  152. Reference for the Myers-Briggs model: D. Keirsey, Please Understand Me II, Prometheus Nemesis Book Company, 1998.
153. Faced with a decision between two packages of ground beef, one labeled "95% lean," the other "5% fat," which would you choose? The meat is exactly the same, but most people would pick the package labeled "95% lean." The language used to describe options often influences what people choose, a phenomenon behavioral economists call the framing effect. Some researchers have suggested that this effect results from unconscious emotional reactions.
  154. This is like the Wheel of Fortune. You spin the wheel and see where the arrow points. The black areas on the pie charts are the probabilities of winning: 0.09 and 0.94. The expected values of the two bets are $5.103 and $5.076: this is close enough to be called equal. Lichtenstein and Slovic (1971) reported that, when given a choice, most people preferred the P bet, but wanted more money to sell the $ bet (median=$7.04) than P bet ($4.00). Attractiveness ratings (e.g., 0=very very unattractive to 80=very very attractive) showed an even stronger preference for the P bet. This is stronger than the previous slide on phrasing, because the same subjects are changing their minds depending on the phrasing. Lichtenstein and Slovic (1971). Reversals of preferences between bids and choices in gambling decisions. Journal of Experimental Psychology, 89, 46-55.
  155. You wrote down a lot of criteria, but obviously there were a lot of important ones that you neglected. The stomach test brought them to the surface. You cannot use this test very often. And it only works for really important things. This test comes from Eb Rechtin.
156. Anywhere I put a use case name, I set it in the Verdana font.
157. Re: the title "meta summary." Aristotle wrote his treatise on Physics. After that he wrote his treatise on philosophy, which came to be called Metaphysics, meaning "after Physics." Philosophy is at a higher level of abstraction than physics.