Faculty Working Papers


OBTAINING CONSENSUS PROBABILITY
DISTRIBUTIONS AND THE PARI-MUTUAL METHOD

Dennis H. Patz and Ronald D. Picur

#293


College of Commerce and Business Administration

University of Illinois at Urbana-Champaign


FACULTY WORKING PAPERS
College of Commerce and Business Administration
University of Illinois at Urbana-Champaign
December 4, 1975




The  authors  wish  to  express  their  gratitude  to  Mr.  Paul  Reichel,  doctoral 
candidate  in  mathematics  at  the  Illinois  Institute  of  Technology,  and 
Miss  Joanne  Noe,  doctoral  candidate  in  Accountancy  at  the  University  of 
Illinois,  for  their  mathematical  assistance  which  was  essential  to  the 
preparation  of  this  paper. 




OBTAINING CONSENSUS PROBABILITY
DISTRIBUTIONS AND THE PARI-MUTUAL METHOD

In 1959 Eisenberg and Gale (hereafter E & G) presented the mathematical formulation of a mechanical (as opposed to behavioral) method for aggregating individual subjective probability distributions to achieve a "consensus" distribution — the Pari-Mutual method. However, beyond casual reference to its being an "important device" (Winkler, 1968) and "clever" (Hogarth, 1975), the E & G model has not been operationalized, experimented with, or empirically tested. We believe it is the lack of operationalization of the model which has resulted in the lack of testing or experimentation; it can be demonstrated that neither the problem addressed nor this particular method lacks significance. Hence, it is the primary objective of this paper to provide the requisite implementation vehicle for the Pari-Mutual method and thereby facilitate research on this potentially useful aggregation technique.

Section I discusses the important position the subjective probability aggregation problem occupies within the overall framework of decision making. Section II is devoted to a brief review of the various theoretical and experimental methods which have been identified to date in dealing with the "consensus" element of decision making problems. Section III examines the E & G model, its characteristics, and its advantages and disadvantages. A relatively simple computer simulation model is also presented which can (and does) operationalize the E & G model. Further, this model circumvents the reasonably sophisticated — and correspondingly cumbersome — mathematical techniques which that model involves. In the final section of the paper, Section IV, various categories of research which appear desirable in light of the potential usefulness of the Pari-Mutual aggregation scheme are identified.


I.   CONSENSUS  IN  DECISION  MAKING 

The process of arriving at the likelihood of an event occurring, of a variable taking on one or more values, or of a particular state of nature obtaining is indigenous to most examples of decision making problems. The consensus problem exists in any decision problem where uncertainty is involved and where it is also deemed desirable to consult more than one "expert" or opinion. In practice, groups of experts are routinely consulted or convened to arrive at a decision involving uncertainty. If one accepts that "the most detailed and most interesting representation of an expert's judgement pertaining to an uncertain quantity is the probability function he assigns to it" (Morris, 1974, p. 1234), then group consensus more often than not means an aggregate subjective probability distribution. Thus, since the general conclusion arising from research is that composite distributions show greater predictive ability than most single expert opinions, one finds both a common and important problem (Hogarth, 1975, p. 282; Gustafson, et al., 1973, p. 281; Winkler, 1971).

Obviously there are many different approaches to decision making under uncertainty. However, greater emphasis is being placed on formal mechanistic (versus heuristic) modes of analysis or methods of processing information. This movement appears justified since the general conclusion from research — with respect to the information processing (combining) element of decision making — is that the mechanical mode of combination is superior to the clinical mode of combination (Einhorn, 1972, p. 87). Thus, recent times have found a proliferation of Bayesian, Markovian, and other forms of analysis being offered as applicable to numerous specific business and other types of decisions and problems. Yet the implementation of these stochastic methodologies requires prior probability distributions, transitional probability matrices, and the like. In the absence of actuarial data, these probabilities must either be postulated (subjectively) or elicited from individuals in the form of subjective probability distributions. In situations where the decision maker or analyst has little knowledge of the parameters of interest, he is almost forced to consult a number of experts to construct the requisite probability distribution. Since only a single distribution (or likelihood ratio) can be input to the mechanistic models, the aggregation issue takes on singular significance.

Hence, whether a mechanistic analysis of a decision or other problem is anticipated, or one is simply interested in arriving directly at a decision where a group is involved, a consensus estimate is also likely to be involved. This consensus takes the form of a subjective probability distribution. Several approaches have been used or proposed for reaching consensus, and each type has its advantages and disadvantages. These are considered in the following section.

II.   METHODS OF ASCERTAINING CONSENSUS DISTRIBUTIONS

As Rowse (1974, pp. 274-5) suggests, a final group estimate may be obtained either by "behavioral consensus" — where group members interact with each other either verbally or by way of correspondence or feedback — or by "mathematical consensus" (aggregation of individual members' estimates). Within the "behavioral consensus" group of approaches, one finds the Delphi technique, variations on that theme, and other group methods which involve either interaction or some form of inter-member communication — e.g., the simple committee meeting. Within the mathematical group fall various averaging techniques, mathematical models, and the Pari-Mutual method.

The Delphi technique as a generic form typically involves repeated interrogation or questionnaire inquiry of the experts, making it both a costly and time-consuming process. In its favor — since it avoids confrontation — it does not involve the many restrictive and dysfunctional effects which have been associated with the group dynamics of other behavioral approaches involving member interaction (Dalkey and Helmer, 1963, p. 459; Gustafson, et al., 1973, p. 282; Rowse, 1974, p. 275). Yet with those behavioral consensus approaches involving feedback, there is always the problem of the extent and form of the feedback. Specifically, such feedback may have to take the form of aggregate distributions. Further, there is always the possibility that no consensus (convergence) will occur.

In  general,  the  mathematical  consensus  approaches  are  the  most  appealing  — 
particularly  in  terms  of  cost,  simplicity,  implementation,  time  consumed  and 
the  number  of  experts  which  can  be  handled  effectively.   Since  no  actual  group 
dynamics  are  involved,  the  aforementioned  potential  problems  are  completely 
avoided.   Also,  most  mathematical  aggregation  techniques  permit  a  differential 
weighting  of  the  individual  opinions  or  distributions  being  combined.   This 
feature  is  important  since  the  level  of  expertise  is  likely  to  vary  among 
the members of any group. Such weightings might be derived from self-ratings or inter-judge ratings, be assigned subjectively by the decision maker ultimately responsible, or be derived from applying "scoring rules" to previously assessed distributions and actual outcomes (Winkler, 1968, 1969). In contrast, the Delphi technique does not involve an identification (in the feedback process) of which expert holds what opinion. Finally, in the other behavioral approaches, there is no guarantee that "expertise" is being considered in a systematic fashion.

The most commonly used aggregation techniques are average and weighted-average mathematical models. These models are also the simplest and least costly to implement. Moreover, some experimental evidence exists which suggests they work better than behavioral aggregations — perhaps for the very reason that they avoid group dynamics (Rowse, 1974). Other more complex approaches to aggregation have also been proposed. For example, DeGroot (1974) proposed a mathematical model whose theoretical basis resembles the Delphi technique but which involves both the weighting of individual opinions and the application of Markov techniques. Winkler (1968), on the other hand, has proposed a "natural conjugate" method. This technique involves successive application of Bayes' theorem to arrive at a group consensus — in which expert opinions are basically treated as additional sample evidence. Lastly, it is in this "more complex" category of aggregation schemes that Eisenberg and Gale's model can be classified. However, it is not clear that E & G were anticipating this type of classification when their work was published.
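To make the simplest of these schemes concrete, the following minimal sketch (ours, not drawn from the paper; Python is used purely for illustration) computes a weighted-average consensus from the same subjective probability matrix used in the Appendix A example. The expertise weights are hypothetical.

    import numpy as np

    p = np.array([[0.50, 0.500, 0.000],   # expert 1's subjective probabilities
                  [0.75, 0.125, 0.125],   # expert 2
                  [0.60, 0.200, 0.200]])  # expert 3
    w = np.array([0.5, 0.25, 0.25])       # hypothetical expertise weights, summing to one

    consensus = w @ p                     # weighted average, event by event
    print(consensus)                      # about [0.5875 0.33125 0.08125]; still sums to one

Because each consensus probability is simply the weighted sum of the individual probabilities for that event, the result is itself a proper probability distribution.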

III.   THE  PARI-MUTUAL  MODEL 

The Eisenberg and Gale Pari-Mutual model of consensus is probably most analogous to DeGroot's. In both cases a mathematical model of a real-world referent is involved. In the case of DeGroot, the empirical process envisioned resembles the Delphi process. In the case of the E & G model, the process envisioned is the dynamics of the pari-mutual horse race betting market. Analytically, this market can be viewed as one in which large groups of bettors' subjective probability distributions (across horses) are voted by way of betting decisions and aggregated in the form of odds (prices). As the betting process in horse racing is an interactive process, so also is the E & G model. That is, their model is a mathematical process of step-wise convergence to the equilibrium odds which should obtain given that all bettors are expected monetary return maximizers.

Though a more comprehensive treatment of the E & G model (with examples) may be found in Appendix A, certain general characteristics warrant attention here — particularly with regard to its appeal as a scheme for arriving at consensus subjective probability distributions. Incorporated in the model is the view of the market process as one where: (1) each bettor has a subjective probability distribution across horses, (2) each is then exposed to continually revised pay-off distributions (tote board odds), and (3) each has wealth constraints limiting the size of his bet. Comparatively speaking, these elements correspond to: (1) the initial subjective probability distribution that a member of a decision group possesses, (2) feedback with regard to other group members' beliefs, and (3) the wealth constraint, which may be variously translated as the power or weight the individual might wield in the total group, or the weight he, his fellow members, or an analyst-aggregator might accord his opinion. In the model, or the actual market, two factors determine equilibrium odds (though equilibrium in a normative sense may not actually be reached in the market, since the betting period is arbitrarily cut off), if a homogeneity of decision models is assumed. These include: (1) the subjective probability distributions held by the bettors, and (2) the size of their bets. Again speaking comparatively, these are the primary factors a mathematical consensus should reflect.
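Stated compactly, in the notation developed in Appendix A (b_i is bettor i's wealth or weight, p_ij his subjective probability for horse j, ξ_ij his share of the total amount bet on horse j, and π_j the equilibrium track probability), the consensus the model converges to is characterized by the condition that each bettor ends up backing only the horses for which p_ij/π_j is maximal:

    \xi_{ij} > 0 \;\Longrightarrow\; \frac{p_{ij}}{\pi_j} = \max_{k} \frac{p_{ik}}{\pi_k},

where the ξ_ij are the maximizers of \sum_{i} b_i \log \sum_{j} p_{ij}\,\xi_{ij} subject to ξ_ij ≥ 0 and Σ_i ξ_ij = 1 for each j (see Appendix A).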

Hence, as a method of estimating consensus distributions, the Pari-Mutual method has all the advantages of other mathematical consensus models — and then some. It can accommodate any number of expert opinions in probability form and, as E & G prove, will always work to a unique solution. Weights derived in any of the methods described earlier may easily be incorporated by way of wealth constraints. Naturally, all problems of group dynamics are avoided, but a feedback element is nonetheless incorporated in the model. Finally, there is reason to believe that the Pari-Mutual method can generate reliable and reasonably accurate consensus distributions for the very reason that its referent is a market. That is, we already have considerable evidence that organized markets — in particular securities markets, which are not all that dissimilar from the pari-mutual markets — tend to be efficient in the sense of generating unbiased estimates in the form of prices. Indeed, there is even some existing empirical evidence that pari-mutual markets generate fairly unbiased prices (odds) in terms of their mapping onto actual outcomes (Griffith, 1949). The authors are currently involved in an extensive empirical study on this and related phenomena regarding the pari-mutual market, and preliminary results tend to support this conclusion. At the least, it is important to note that evidence of the accuracy and reliability of this aggregation scheme will not entirely have to come from the artificial environments of laboratory experiments, which is the typical case.

The  major  drawback  to  the  Pari-Mutual  method  is  that  the  mathematics 
involved  in  the  Eisenberg  and  Gale  paper  are  fairly  sophisticated.   For  example, 
even  the  three  bettor  (e.g.,  expert),  three  horse  (e.g.,  outcomes,  events, 
states)  case  is  difficult  to  deal  with  computationally.   Appendix  A  describes 
more  fully  (with  an  example)  the  computations  which  are  necessary.  Simply 
phrased,  what  is  involved  is  the  mathematical  searching  of  the  faces  and  edges 
of an N-dimensional space for global optima. As N becomes large, this becomes
a  quite  tedious  process.   However,  since  there  is  a  real-world  referent,  with 
specifiable  mechanics  and  trading  rules,  a  simulation  approach  is  an  attractive 
alternative. 

Again,  the  flowchart  and  technical  details  of  the  computer  simulation 
which  has  been  developed  have  been  set  forth  in  an  appendix  (Appendix  B) . 
(The  actual  program  can  be  supplied  upon  request.)   However,  like  the  dynamics 
of  the  pari-mutual  market  itself,  the  simulation  is  an  iterative  process. 
Further,  like  the  E  &  G  mathematical  model,  it  assumes  a  homogeneous  market 
of expected monetary return maximizers. Of greater interest, perhaps, is that we have computationally employed the E & G model for several cases — up to a four by four situation (bettors and horses) — and under both equal and unequal wealth assignment. In all cases the output from the simulation was successfully compared with the E & G model solutions. Thus it would appear that the simulation constitutes an acceptable vehicle for implementation of the Pari-Mutual method.

IV.   RESEARCH APPLICATIONS OF THE PARI-MUTUAL METHOD

Research addressing the relative accuracy and reliability of the Pari-Mutual method in generating consensus distributions appears necessary — both in the form of experimentation and direct empirical testing. As noted earlier, the very fact that validation may be attempted with reference to both "real" and "created" settings enhances the method's initial attractiveness. The market referent which underlies the Pari-Mutual approach provides a readily available data base for large sample testing of the accuracy and reliability of these "large group" subjective probability distributions. The distributions themselves take the form of final odds which can easily be converted to probabilities and compared with actual outcomes. This comparison can take the form of comparison across all races, with the accuracy concept related to repetitive-type decision settings. Alternatively, one may look at the association of individual race odds distributions and individual race outcomes, where the accuracy concept relates to the single or unique one-time decision setting.

At  the  same  time,  experimentation  with  "small  group"  consensus  involving 
comparison  of  the  Pari-Mutual  method  with  other  mathematical  consensus  approaches 
and  behavioral  consensus  approaches  is  also  desirable.   For  example,  prior 
studies  could  be  extended  or  replicated  to  include  the  Pari-Mutual  method. 


Winkler (1971) studied football betting by comparing individual estimates of point spreads and various forms of consensus estimates. If that data remains available, it would be an easy task to generate a Pari-Mutual consensus for additional comparison. The Rowse, et al. (1974) study of the accuracy and reliability of various aggregation techniques using firemen could also be broadened to encompass the Pari-Mutual method.

Of  course,  new  experiments  also  are  desirable;  not  only  because  the 
potentially  useful  Pari-Mutual  method  remains  untested,  but  also  because  a 
great  need  exists  overall  for  experimental  work  concerning  probability  assess- 
ments by groups (Hogarth, 1975). For example, as an accounting researcher, one can perceive the assessment of input or output market values by experts (appraisers, real estate agents) as a particularly attractive setting — since the discipline may well be entering the replacement cost or current value era. Both accountants and auditors may well be concerned with aggregating subjective probability judgements of experts on value. However, any setting involving uncertainty and an actuarial data base for accuracy comparisons is a satisfactory setting for the needed experimental work. As such, the simulation model identified in this study would appear to provide a vehicle for extensive future research.



APPENDIX  A 

THE  EISENBERG  AND  GALE  MODEL 

Eisenberg and Gale, in formulating their model, assume m individuals (bettors) are betting on a race with n horses. After careful study, each bettor determines his personal estimation of each horse's probability of winning. These are expressed as an m × n probability matrix (p_ij).

After determining these probabilities, the bettor places his bets in a way which maximizes his subjective expectation. The bettor, of course, does not usually bet all of his fixed wealth, b_i, on the horse for which his subjective probability is largest. Instead, he waits until the track probabilities π_j are announced and then places his bets on the horses for which the ratio p_ij/π_j is a maximum. Therefore, the π_j depend on the bets and the bets depend on the π_j.

To solve for the track probabilities and, hence, the individual bets, Eisenberg and Gale define a function φ and show that the variables which maximize it yield a unique solution. This function has mn arguments ξ_ij and is defined by the rule:

    \phi(\xi_{11}, \ldots, \xi_{mn}) = \sum_{i=1}^{m} b_i \log \sum_{j=1}^{n} p_{ij}\,\xi_{ij}

the variables ξ_ij being subject to the constraints:

    \xi_{ij} \ge 0, \qquad \sum_{i=1}^{m} \xi_{ij} = 1 \quad (j = 1, \ldots, n).

In order to simplify this φ function, consider the function ψ = e^φ. Thus

    \psi(\xi_{11}, \ldots, \xi_{mn}) = \prod_{i=1}^{m} \Big( \sum_{j=1}^{n} p_{ij}\,\xi_{ij} \Big)^{b_i}

Clearly, maximizing ψ is equivalent to maximizing φ.

In particular, consider ψ for a case of three bettors with equal wealth and three horses. The probability matrix of the bettors' subjective probabilities is assumed to be:

    (p_{ij}) = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 3/4 & 1/8 & 1/8 \\ 3/5 & 1/5 & 1/5 \end{pmatrix}

The problem then reduces to:

    \max \psi = \big( \tfrac{1}{2}\xi_{11} + \tfrac{1}{2}\xi_{12} \big)\big( \tfrac{3}{4}\xi_{21} + \tfrac{1}{8}\xi_{22} + \tfrac{1}{8}\xi_{23} \big)\big( \tfrac{3}{5}\xi_{31} + \tfrac{1}{5}\xi_{32} + \tfrac{1}{5}\xi_{33} \big)

subject to:

    \sum_{i=1}^{3} \xi_{ij} = 1, \qquad j = 1, 2, 3

(the common exponent b_i = 1/3 has been dropped, since it does not affect the maximizing ξ). The method of Lagrange multipliers gives contradictory equations and, thus, the maximum must be on the boundary of the constrained region.

Solving the constraints for i = 3 (note that ξ_13 can be assumed to be zero since p_13 = 0) and substituting into the objective function ψ, we have:

    \max \psi = \tfrac{1}{80}\,(\xi_{11} + \xi_{12})(6\xi_{21} + \xi_{22} + \xi_{23})(5 - 3\xi_{11} - 3\xi_{21} - \xi_{12} - \xi_{22} - \xi_{23})

subject to:

    0 \le \xi_{11} + \xi_{21} \le 1, \qquad 0 \le \xi_{12} + \xi_{22} \le 1, \qquad 0 \le \xi_{23} \le 1.

For simplicity, the constant 1/80 may be dropped during the maximization search. Graphically, the constraints are sketched below.

[Figure omitted: the region 0 ≤ ξ_11 + ξ_21 ≤ 1, the region 0 ≤ ξ_12 + ξ_22 ≤ 1, and the interval 0 ≤ ξ_23 ≤ 1.]

The maximum of ψ can be found by considering the maximum of ψ for each possible combination of the edges of these figures. The final maximum will be the maximum of these maxima.

For example, consider the first of the nine possible cases, ξ_11 = ξ_22 = 0:

    \max \psi = \xi_{12}\,(6\xi_{21} + \xi_{23})(5 - 3\xi_{21} - \xi_{12} - \xi_{23})

subject to:

    0 \le \xi_{21} \le 1, \qquad 0 \le \xi_{12} \le 1, \qquad 0 \le \xi_{23} \le 1.

The domain is, thus, the unit cube in (ξ_21, ξ_12, ξ_23) [figure omitted], and the maximum is on an edge of this figure. Again reduce the problem by individual consideration of each of the twelve edges.

The maximum over these twelve subcases occurs at ξ_12 = 1, ξ_21 = 2/3, ξ_23 = 0. Hence, the solution for case 1 is:

    \xi_{11} = 0, \quad \xi_{12} = 1, \quad \xi_{21} = 2/3, \quad \xi_{22} = 0, \quad \xi_{23} = 0.

Upon comparison with the other eight cases, this is shown to be the final maximum. Solving for the other ξ_ij's from the column constraints, we have:

    \xi_{13} = 0, \quad \xi_{31} = 1/3, \quad \xi_{32} = 0, \quad \xi_{33} = 1.
Substituting into the equation for the track probabilities,

    \pi_j = \max_{i} \frac{b_i\, p_{ij}}{S_i}, \qquad S_i = \sum_{k} p_{ik}\,\xi_{ik}, \qquad b_i = 1/3 \ \ \forall\, i,

we find:

    \pi_1 = \max(1/3,\ 1/2,\ 1/2) = 1/2
    \pi_2 = \max(1/3,\ 1/12,\ 1/6) = 1/3
    \pi_3 = \max(0,\ 1/12,\ 1/6) = 1/6
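The same worked example can be checked numerically. The sketch below is ours, not Eisenberg and Gale's procedure or the authors' program; it assumes NumPy and SciPy are available and uses a generic constrained optimizer rather than the boundary search above. It maximizes φ directly and then applies the π_j formula, and should recover approximately (1/2, 1/3, 1/6).

    # A sketch of the phi-maximization for the worked example above, using a
    # generic constrained optimizer (SciPy's SLSQP).
    import numpy as np
    from scipy.optimize import minimize

    p = np.array([[1/2, 1/2, 0.0],      # subjective probabilities p_ij
                  [3/4, 1/8, 1/8],
                  [3/5, 1/5, 1/5]])
    b = np.array([1/3, 1/3, 1/3])       # equal wealths b_i, summing to one
    m, n = p.shape

    def neg_phi(x):
        xi = x.reshape(m, n)
        s = (p * xi).sum(axis=1)        # S_i = sum_j p_ij * xi_ij
        return -np.dot(b, np.log(np.maximum(s, 1e-12)))   # small floor guards log(0)

    # Column sums of xi must equal one; each xi_ij lies in [0, 1].
    cons = [{"type": "eq", "fun": lambda x, j=j: x.reshape(m, n)[:, j].sum() - 1.0}
            for j in range(n)]
    res = minimize(neg_phi, np.full(m * n, 1.0 / m), method="SLSQP",
                   bounds=[(0.0, 1.0)] * (m * n), constraints=cons)

    xi = res.x.reshape(m, n)
    s = (p * xi).sum(axis=1)
    pi = (b[:, None] * p / s[:, None]).max(axis=0)   # pi_j = max_i b_i p_ij / S_i
    print(np.round(pi, 4))              # expected: approximately [0.5, 0.3333, 0.1667]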


APPENDIX  B 

THE  SIMULATION  MODEL 

DECISION  MODEL 

The  simulation  model  employed  within  this  study  replicates  a  pari- 

mutual market. All bettors utilize a decision model which dictates that they bet on all horses where the expected return [ER] of that bet is greater than one. Symbolically, this condition can be stated as:

    ER > 1                                  (1)

The expected return can be decomposed thusly:

    ER_ij = SP_ij · ODDS_j                  (2)

Where:   i = Bettor
         j = Horse
         SP = Subjective probability
         ODDS = Equivalent odds based on all previous wagers
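As a brief numerical illustration of equations (1) and (2) (the values below are hypothetical, not taken from the paper):

    import numpy as np
    sp_i = np.array([0.50, 0.30, 0.20])   # hypothetical subjective probabilities of bettor i
    odds = np.array([1.8, 4.0, 6.0])      # hypothetical current market odds per horse
    er_i = sp_i * odds                    # equation (2): ER_ij = SP_ij * ODDS_j
    print(er_i > 1.0)                     # equation (1): bet wherever ER exceeds one
    # -> [False  True  True]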

INPUTS 

The simulation model — illustrated in Figure One — utilizes a series of inputs which relate to the specific events under study. Symbolically,

    JJ = Number of "bettors" (i.e., judges)
    II = Number of "horses" (i.e., events)
    W_j = Wealth of each bettor (i.e., amount of relative "influence" of a given judge)
    SP_j = Subjective probability vector of a given bettor with respect to the success of each horse.

The only input requiring discussion here is W_j. This variable represents the relative influence of each judge vis-a-vis one another. For example, if judge one is assigned twice as much influence as judge two, then the following values might be input to the simulation model: W_1 = 20,000 and W_2 = 10,000.
FIGURE ONE

FLOWCHART OF SIMULATION MODEL

(1) Start.
(2) READ: the number of "judges" → JJ; the number of "events" → II; the "influence" of each judge → W_j; the subjective probability of each judge for each event → SP_{i,j}.
(3) Initialize parameters, including the "overnight" consensus probabilities → ODDS0_i.
(4) Begin "market" iteration → M, where M = 1, 1000.
(5) Begin "judge" iteration → J, where J = 1, JJ.
(6) Begin "event" iteration → I, where I = 1, II.
(7) Calculate the "expected return" (ER) for each event and judge: ER_{i,j} = SP_{i,j} × ODDS_i, where ODDS_i is the current value of the market-determined consensus probability for event i (stated as odds).
(8) Calculate the "total bet" (TB) for the Mth "market" iteration for judge j: TB_j = W_j · X, where X = M / Σ_{i=1}^{1000} M_i.
(9) Determine the number of events (K_j) where ER_{i,j} > 1.0.
(10) Calculate the amount of each individual "bet": B_j = TB_j / K_j.
(11) "Bet" B_j on every event where ER_{i,j} > 1.
(12) Calculate the total "bet" to date (across all judges) on each event where ER_{i,j} > 1 → POOL_{m,i}.
(13) Calculate the remaining "influence" for each judge: W_{m+1,j} = W_{m,j} − TB_j.
(14) Calculate total bets (across all judges) through iteration m: TP_m = Σ_i POOL_{m,i}.
(15) Calculate the consensus probability (ODDS) for each event (equation 5).
(16) Calculate and print the final (π_i) consensus probabilities for all events.
(17) END.


INITIALIZATION

The next major component of the model requires the initialization of the model parameters. While several are simply intrinsic to the specific computer program written, one which was not relates to the "overnight odds." That is, in a pari-mutual market a series of odds is determined by handicappers and placed on the tote board prior to actual betting. Regardless of whether or not overnight odds impact upon the subjective probabilities of "real world" bettors, the simulation results indicated they had no impact upon the final consensus probabilities. (Random sets of overnight odds were employed for several runs of the program, with the final results all being identical.) However, given the bettors' decision model, an initial set of odds is required to begin the process.
ITERATIONS 

Upon completion of parameter initialization, a series of iterations begins. The first — termed the "market" iteration — encompasses a complete cycle of the entire process. The program utilized one thousand market iterations. While this number was arbitrarily selected, it was chosen with the rationale that a large number was required in order for this market to reach an equilibrium point.

The second iteration — termed the "judge" iteration — simulates the entire decision process (including wagering) for a given bettor. Several phases were included within this iteration. First, each bettor calculates an expected return for each horse based on the formulation in equation 2. Second, the bettor then calculates his total bet within the given market iteration. This calculation is based upon the following formulation:

    TB_{m,j} = W_{m,j} · X                  (3)

Where:   m = Market iteration number
         TB = Total bet
         W = Remaining wealth
         X = Proportion of remaining wealth bet on this round



The variable "X" is expressed as follows:

    X = m \,\Big/ \sum_{i=1}^{1000} m_i            (4)

This betting scheme basically states that in each market iteration the bettor will wager an infinitesimally larger portion of his remaining wealth. In total — over all market iterations — he wagers his entire initial wealth. (It should be emphasized that other wagering schemes were attempted. However, all produced the same final unique set of consensus probabilities.)

The third phase of the "judge" iteration required the bettor to bet an equal share of his total wager (i.e., TB from equation 3) on all horses where his expected return was greater than one (i.e., the condition expressed in equation 1). The fourth phase entailed updating various registers to maintain cumulative totals of all wagers (across all bettors) on each horse. Finally, the bettor's remaining wealth was adjusted to reflect his total wagers in that particular "market" iteration.

Once all bettors have made their wagers for a particular market iteration, a new set of odds is calculated. This process is completely analogous to the method found at a race track (excluding "cuts" to taxing bodies and track commissions) and can be stated as follows:

    ODDS_{m,j} = \frac{TP_m - POOL_{m,j}}{POOL_{m,j}} + 1          (5)

Where:   TP = Total bets across all horses
         POOL = Total bets on an individual horse
These calculated odds are then used in the next "market" iteration. Two points should be made with respect to equation 5. First, the number stated parenthetically is increased by one to represent the return of capital — as in the "real world" pari-mutual market. Second, odds — rather than probabilities — are calculated for purposes of the expected return decision model. These odds are later converted to probabilities by simply taking their reciprocal. Finally, upon completion of the 1000 "market" iterations, the final set of consensus probabilities — π_j, per E & G's notation — are output.
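For readers without access to the original program, the following compact sketch re-implements the process described above in Python. It is our re-implementation of the description, not the authors' code, and a small epsilon seed is added to the pools (in place of explicit overnight odds) to avoid division by zero before any bets are placed. With equal wealth and the subjective probabilities of Appendix A, it should approach the E & G solution (1/2, 1/3, 1/6).

    import numpy as np

    def parimutuel_consensus(sp, wealth, n_iter=1000, eps=1e-9):
        """sp[i, j]: subjective probability judge i assigns to event j;
           wealth[i]: relative influence ("wealth") of judge i."""
        n_judges, n_events = sp.shape
        pool = np.full(n_events, eps)              # cumulative bets on each event
        odds = np.full(n_events, float(n_events))  # uniform starting ("overnight") odds
        remaining = wealth.astype(float)
        x_denom = n_iter * (n_iter + 1) / 2.0      # sum of 1..n_iter, used in X (equation 4)
        for m in range(1, n_iter + 1):
            for i in range(n_judges):
                er = sp[i] * odds                  # expected returns (equation 2)
                attractive = er > 1.0              # bet only where ER > 1 (equation 1)
                if not attractive.any():
                    continue
                tb = remaining[i] * (m / x_denom)          # total bet this round (equations 3, 4)
                pool[attractive] += tb / attractive.sum()  # equal share on each attractive event
                remaining[i] -= tb
            odds = pool.sum() / pool               # equation 5: (TP - POOL)/POOL + 1
        return pool / pool.sum()                   # final consensus probabilities (1/odds)

    sp = np.array([[1/2, 1/2, 0.0],
                   [3/4, 1/8, 1/8],
                   [3/5, 1/5, 1/5]])
    print(np.round(parimutuel_consensus(sp, np.array([1.0, 1.0, 1.0])), 3))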
SENSITIVITY  ANALYSIS 

While a true sensitivity analysis of the model is not included within this presentation, the underlying process represented in this program is relatively stable. That is, different aspects of the model were changed with no variation in the final consensus probabilities. These aspects included: (1) different methods of initializing the overnight odds, (2) different wagering strategies — in terms of calculating the amount bet, and (3) different numbers of market iterations. As such, it would appear the key elements within the model — the expected return decision model and the calculation of market odds — are the factors which drive the model to converge upon the unique consensus probabilities. Moreover, it is these properties which Eisenberg and Gale employed as essential components of their mathematical proof of the unique set of final probabilities. As such, this simulation appears to capture the essence of their analysis.



REFERENCES 


DeGroot, Morris H., "Reaching a Consensus," Journal of the American Statistical Association, 69, 345 (March 1974), 118-121.

Dalkey, Norman and Helmer, Olaf, "An Experimental Application of the Delphi Method to the Use of Experts," Management Science, 9, 3 (April 1963), 458-467.

Einhorn, Hillel J., "Expert Measurement and Mechanical Combination," Organizational Behavior and Human Performance, 7, 1 (February 1972), 86-106.

Eisenberg, Edmund and Gale, David, "Consensus of Subjective Probabilities: The Pari-Mutuel Method," Annals of Mathematical Statistics, 30, 1 (1959), 165-168.

Gustafson, David H., Shukla, Ramesh K., Delbecq, Andre and Walster, G. William, "A Comparative Study of Differences in Subjective Likelihood Estimates Made by Individuals, Interacting Groups, Delphi Groups, and Nominal Groups," Organizational Behavior and Human Performance, 9, 2 (April 1973), 280-291.

Griffith, R. M., "Odds Adjustments by American Horse-Race Bettors," American Journal of Psychology, 62 (1949), 290-294.

Hogarth, Robin M., "Cognitive Processes and the Assessment of Subjective Probability Distributions," Journal of the American Statistical Association, 70, 350 (June 1975), 271-289.

Morris, Peter A., "Decision Analysis Expert Use," Management Science, 20, 9 (May 1974), 1233-1241.

Rowse, Glenwood L., Gustafson, David H. and Ludke, Robert L., "Comparison of Rules for Aggregating Subjective Likelihood Ratios," Organizational Behavior and Human Performance, 12, 2 (October 1974), 274-285.

Raiffa, Howard, Decision Analysis: Introductory Lectures on Choices under Uncertainty, Reading, Massachusetts: Addison-Wesley Publishing Co., 1968.

Winkler, Robert L., "The Consensus of Subjective Probability Distributions," Management Science, B15, 2 (October 1968), 61-71.

Winkler, Robert L., "Scoring Rules and the Evaluation of Probability Assessors," Journal of the American Statistical Association, 64, 327 (September 1969), 1073-1078.

Winkler, Robert L., "Probabilistic Prediction: Some Experimental Results," Journal of the American Statistical Association, 66, 336 (December 1971), 675-685.