BEBR 


FACULTY  WORKING 
PAPER  NO.  89-1552 


Toward  a  Descriptive  Model  of 
Post-Implementation  Evaluation 


Dan N. Stone


College  of  Commerce  and  Business  Administration 
Bureau  of  Economic  and  Business  Research 
University  of  Illinois  Urbana-Champaign 


BEBR 


FACULTY  WORKING  PAPER  NO.  89-1552 

College  of  Commerce  and  Business  Administration 

University of Illinois at Urbana-Champaign

April  1989 


Toward  a  Descriptive  Model  of  Post- Implementation  Evaluation 

Dan  N.  Stone,  Assistant  Professor 
Department  of  Accountancy 


Sincere  thanks  to  Janis  Carter  for  thoughtful  comments  on  an  earlier 
draft  of  this  paper. 

Presented  at  the  International  Conference  on  Organizations  and 
Information Systems, Bled, Yugoslavia, September 13-15, 1989.



Toward  a  Descriptive  Model  of 
Post-Implementation   Evaluation 


Abstract 


Strategies  for  evaluating  computer-based  information  systems  (CBISs)  recommended  in  the 
information  systems  (IS)  literature  are  generally  based  upon  formal,  quantitative  models  of 
evaluation.  However,  evidence  suggests  that  IS  professionals  frequently  omit  formal,  quantitative 
evaluation  of  CBISs  and  rely  instead  on  informal,  qualitative  evaluation.  If  formal,  quantitative 
models  of  CBIS  evaluation  are  of  value,  why  are  they  infrequently  used  by  their  intended 
beneficiaries? 

Distinguishing  between  uncertainty  and  equivocality  provides  insight  into  why  IS 
professionals  might  omit  formal,  quantitative  evaluation.  Uncertainty  is  the  absence  of  information, 
while  equivocality  is  information  that  is  unclear,  conflicting  or  paradoxical.  Evaluation  designed  to 
reduce  uncertainty  uses  formal  processes  and  methods,  defined  organizational  roles  and 
responsibilities, quantifiable criteria, and objective data. Evaluation designed to reduce
equivocality uses informal processes and methods, negotiated roles and responsibilities, qualitative
criteria, and subjective data. One explanation why IS professionals frequently omit formal,
quantitative evaluation of CBISs may be that such procedures are not helpful in reducing
equivocality.



Toward  a  Descriptive  Model  of 
Post-Implementation  Evaluation 

An  organization  implements  a  computer-based  information  system  (CBIS).  Later,  someone 
asks  if  the  CBIS  is  a  success  or  failure.  Does  it  contribute  to  organizational  goals?  Should  it  be 
maintained,  expanded,  replaced,  or  abandoned? 

Evidence  suggests  that  the  inability  to  measure  and  evaluate  productivity  gains  is  a  major 
obstacle  to  investment  in  CBISs  (Blacker  and  Brown,  1988;  Strassman,  1985).  Controversy  over 
measuring  productivity  contributions  from  new  technology  has  resulted  in  increasing  skepticism 
regarding  the  benefits  of  CBISs  (Bowen,  1986;  Business  Week,  1988).  One  approach  to 
understanding  the  controversy  over  productivity  measurement  is  to  reexamine  the  methods  and 
assumptions  of  existing  CBIS  evaluation  models. 

Researchers  have  long  recognized  the  importance  and  complexity  of  evaluating  CBISs.  As 
a  result,  a  number  of  formal,  quantitative  methods  for  evaluating  CBISs  have  been  suggested 
(e.g., King and Schrems, 1978; King and Epstein, 1983; Pieptea and Anderson, 1987;
Schwuchow,  1977).  However,  evidence  suggests  that  formal,  quantitative  methods  (e.g.,  cost- 
benefit  analysis)  for  evaluating  CBISs  are  relatively  infrequently  used  (Greiner,  Leitch,  and 
Barnes,  1979;  Hogue  and  Watson,  1984),  and  are  considered  of  dubious  value  by  many 
information  systems  (IS)  researchers  and  practitioners  (Keen,  1981;  Hirschheim  and  Smithson, 
1988;  Zmud  and  Apple,  1989).  The  schism  between  the  recommendations  for  evaluation  found  in 
the  IS  literature  and  descriptions  of  evaluation  practice  suggests  an  obvious  question.  If  formal, 
quantitative  methods  for  CBIS  evaluation  are  of  value,  why  are  they  infrequently  used  by  their 
intended beneficiaries? The purpose of this paper is to develop a model that provides insight into
this  and  related  questions. 

CBIS  evaluation  is  herein  defined  as  the  process  of  determining  how  a  CBIS  impacts  and  is 
impacted by an organization. Several assumptions are implicit in this definition. First, it is assumed that a CBIS
has been implemented, meaning that evaluation is a post-implementation activity. Evaluation is
therefore  identified  as  distinct  from  feasibility  analysis  (e.g.,  Caddell,  1985)  and  a  priori 
justification  of  CBISs  (e.g.,  Bozcany,  1983).  Second,  it  is  assumed  that  organizations  both  create 
and  are  created  by  CBISs  (Markus,  1984).  Evidence  suggests  that  implementing  a  CBIS  can 
trigger complex, often unanticipated chains of events in organizations (Barley, 1986; Markus and
Robey,  1988).  These  chains  of  events  ultimately  mean  that  organizations  shape  and  are  shaped  by 
CBISs.  Finally,  it  is  assumed  that  evaluation  can  be  either  formal  (e.g.  cost-benefit  analysis)  or 
informal  (e.g.  a  conversation  between  two  IS  managers  over  lunch).  Relaxing  the  typical  definition 
of  evaluation  as  a  formal,  quantitative  process  permits  building  a  more  descriptive  framework  that 
recognizes  both  planned  and  unplanned,  and  formal  and  informal  evaluation. 



This  paper  develops  a  model  that  explains  why  IS  professionals  frequently  omit  formal, 
quantitative  CBIS  evaluation,  relying  instead  on  less  formal,  qualitative  methods.  The  presentation 
of  this  model  is  organized  as  follows.  First,  uncertainty  is  distinguished  from  equivocality,  and 
expected  differences  in  uncertainty-reducing  and  equivocality-reducing  CBIS  evaluation  are 
identified.  Second,  a  descriptive,  contingency  model  of  CBIS  evaluation  is  proposed  that  relates 
the usefulness of uncertainty- and equivocality-reducing evaluation to relationships between
organizational  actors.  The  paper  concludes  with  a  discussion  of  the  implications  of  the  model  for 
CBIS  evaluation  research. 

Uncertainty- and Equivocality-Reducing CBIS Evaluation

Uncertainty  and  Equivocality 

A  useful  dichotomy  in  considering  CBIS  evaluation  is  the  distinction  between  uncertainty 
and  equivocality.  Uncertainty  is  the  absence  of  information  (Miller  and  Frick,  1949;  Daft  and 
Lengel,  1986).  As  information  increases,  uncertainty  decreases.  The  game  of  20  questions 
illustrates  uncertainty  and  uncertainty  reduction.  A  questioner  receives  yes-no  answers  to  questions 
intended  to  identify  an  unknown  object  as  either  animal,  vegetable,  or  mineral  (Taylor  and  Faust, 
1952;  Bendig,  1953;  Daft  and  Lengel  1986).  Uncertainty  is  eliminated  when  the  object  is  correctly 
identified.  In  management  tasks  characterized  by  uncertainty,  managers  are  able  to  ask  questions, 
and  get  answers  that  permit  problem  solving.  Organizational  processes  can  be  structured  to  reduce 
uncertainty  through  the  use  of  rules  and  regulations  and  through  the  creation  of  formal,  structured 
IS  (Daft  and  Lengel,  1986). 
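
To make uncertainty reduction concrete, the following sketch (not part of the original paper) restates the
twenty questions example in the information-theoretic terms the example suggests: with equally likely
candidate objects, uncertainty can be expressed in bits, and each answer that eliminates candidates reduces
it. The objects and questions are invented purely for illustration.

    import math

    def uncertainty_bits(num_candidates: int) -> float:
        """Uncertainty, in bits, over a set of equally likely candidate objects."""
        return math.log2(num_candidates)

    # Eight equally likely objects: 3 bits of uncertainty.
    candidates = ["dog", "cat", "horse", "rose", "fern", "oak", "quartz", "granite"]
    print(uncertainty_bits(len(candidates)))   # 3.0

    # "Is it an animal?" -- a 'no' answer eliminates dog, cat, and horse.
    candidates = [c for c in candidates if c not in {"dog", "cat", "horse"}]
    print(uncertainty_bits(len(candidates)))   # about 2.32 bits remain

    # "Is it a mineral?" -- a 'yes' answer leaves only quartz and granite.
    candidates = [c for c in candidates if c in {"quartz", "granite"}]
    print(uncertainty_bits(len(candidates)))   # 1.0 bit remains

    # One further question identifies the object and eliminates uncertainty entirely.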

In  contrast,  equivocality  involves  interpreting  data  that  is  unclear,  conflicting,  or 
paradoxical  (Daft  and  Macintosh,  1981;  Weick,  1979).  The  sentence,  "I  saw  the  man  on  the  hill 
with  the  telescope,"  (Simon,  1982,  p.  93)  is  equivocal:  multiple  interpretations  are  possible.  Do  I 
have  the  telescope,  or  does  the  man  on  the  hill?  Is  the  telescope  merely  on  the  hill  and  not  in  the 
man's  hand?  Managers  deal  with  'men  on  hills  with  telescopes'  regularly,  and  must  make  sense  of 
such  equivocality.  Daft,  Lengel,  and  Trevino  (1987,  p.  357)  observe  that  in  equivocal 
environments,  "Managers  are  not  certain  what  questions  to  ask,  and  if  questions  are  posed  there  is 
no  store  of  objective  data  to  provide  an  answer." 

Fundamental  to  the  process  of  managing  equivocality  is  "sense-making,"  which  involves 
exchanges  between  managers  intended  to  reduce  equivocality  and  create  a  shared  interpretation  that 
can  direct  future  events  (Weick,  1979).  When  facing  equivocality,  managers  build  a  shared 
interpretation,  and  "enact"  a  solution,  rather  than  relying  on  data  gathering  activities  to  direct  events 
(Daft  and  Weick,  1984).  The  processes  of  sense-making  and  enactment  involve  exchanging 
subjective opinions, managing multiple perspectives, and proactively shaping environments
(Smircich and Stubbart, 1985). Equivocality is reduced by managing and generating both events
and  interpretations  of  events. 

CBIS  Evaluation  as  Uncertainty-reducing  Activity 

Approaches  to  CBIS  evaluation  contain  underlying  assumptions  as  to  whether  evaluation 
processes  should  be  designed  primarily  to  reduce  uncertainty  or  equivocality.  Existing  research  on 
assessing  CBIS  impact  generally  views  evaluation  as  an  uncertainty-reducing  process  based  upon 
formal,  objective  data  collection  and  information  processing  (e.g.  Hamilton  and  Chervany,  1981a, 
1981b).  Viewing  CBIS  evaluation  as  a  data  collection  activity  leads  to  an  evaluation  process 
focused  on  gathering  and  processing  data  to  reduce  and  eliminate  uncertainty  about  CBIS  impact. 
CBIS  evaluation  methods  based  upon  formal,  objective  data  collection  include  cost-benefit  analysis 
(e.g.,  Emery,  1982;  King  and  Schrems,  1978),  user  surveys  (e.g.,  Miller  and  Doyle,  1987; 
Rushinek and Rushinek, 1983), measures of computer usage (e.g., Ferrari, 1978; Hiltz and
Turoff,  1981),  and  the  use  of  "objective"  data  sources  external  to  the  implementing  organization 
(e.g.,  Banker  and  Kauffman,  1988). 
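
Cost-benefit analysis is the prototypical formal, uncertainty-reducing method. As a purely hypothetical
illustration of the kind of quantitative calculation such methods rest on, the sketch below (not part of
the original paper) discounts an assumed stream of CBIS costs and benefits to a net present value; the
cash flows and discount rate are invented for illustration and are not drawn from any study cited here.

    def npv(cash_flows, discount_rate):
        """Net present value of annual net cash flows; cash_flows[0] occurs immediately (year 0)."""
        return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows))

    # Hypothetical CBIS: 100,000 up-front cost, then 30,000 per year in net benefits for five years.
    flows = [-100_000] + [30_000] * 5
    print(round(npv(flows, 0.10), 2))   # about 13,723.60 at a 10% discount rate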

Uncertainty-reducing CBIS evaluation should be recognizable by the existence of organizational
processes  oriented  towards  formal  data  gathering  and  information  processing.  Figure  1  describes 
organizational  processes  that  should  exist  when  CBIS  evaluation  is  perceived  as  an  uncertainty- 
reducing  activity.  In  general,  uncertainty-reducing  CBIS  evaluation  should  rely  on  formal 
evaluation  processes  and  procedures,  should  utilize  the  defined  authority  structure  of  the 
organization,  should  rely  on  quantifiable  measures  of  the  system,  and  should  be  based  upon 
objective,  verifiable  data.  In  addition,  CBIS  evaluation  designed  to  reduce  uncertainty  is  likely  to 
focus  on  measuring  expected,  anticipated  effects,  rather  than  exploring  unplanned,  unexpected 
effects  of  CBISs. 


Insert  Figure  1  about  here 


CBIS  Evaluation  as  Equivocality-reducing  Activity 

An  alternative  evaluative  perspective  is  to  view  assessing  IS  impact  as  a  sense-making 
process  (Weick,  1985).  Viewing  CBIS  evaluation  as  sense-making  suggests  that  determining  the 
impact  of  an  information  system  requires  interpreting  conflicting,  ambiguous  information,  and  may 
involve  building  and  enacting  shared  interpretations  of  events  to  resolve  equivocality. 
Consequently,  CBIS  evaluation  as  sense-making  leads  to  an  evaluation  process  largely  focused  on 
exchanging  subjective  opinions  and  beliefs,  rather  than  gathering  formal,  objective  data. 



Figure  2  describes  organizational  processes  that  should  exist  when  CBIS  evaluation  is 
conducted  as  an  equivocality-reducing  activity.  In  general,  equivocality-reducing  CBIS  evaluation 
should  rely  on  informal  (rather  than  planned)  meetings  and  discussions,  on  negotiated  (rather  than 
assigned)  roles  and  responsibilities,  on  qualitative  (rather  than  quantitative)  dimensions  of  system 
success,  and  on  subjective  (rather  than  objective)  data.  In  addition,  equivocality-reducing  CBIS 
evaluation  is  more  likely  to  consider  unplanned  impacts  (e.g.  changes  in  social  relationships). 


Insert Figure 2 about here


A Model of CBIS Evaluation


When  are  organizations  likely  to  undertake  uncertainty-reducing  versus  equivocality- 
reducing  CBIS  evaluation?  Two  characteristics  that  may  be  useful  in  predicting  the  evaluative 
approach  used  are:  (1)  the  extent  of  agreement  among  organizational  actors  as  to  CBIS-related 
goals,  and  (2)  the  extent  of  agreement  as  to  whether  an  implemented  CBIS  achieves  system-related 
goals  (i.e.,  whether  the  CBIS  provides  the  means  for  achieving  goals).  The  extent  of  agreement 
on  means  and  goals  is  likely  to  influence  the  importance  of  uncertainty-reducing  versus 
equivocality-reducing activities, and to thereby influence CBIS evaluation.

Organizational  actors  may  have  differing,  conflicting  goals  with  respect  to  an  implemented 
CBIS  (Kling,  1987;  Kling,  1980).  Actors  may  value  CBISs  as  a  means  of  achieving  functional 
objectives  (e.g.,  reducing  costs),  as  a  symbol  of  the  importance  of  an  individual  or  group  within 
the  organization,  or  as  a  signal  of  commitment  to  particular  organizational  ideologies  (Feldman  and 
March,  1981;  Robey  and  Markus,  1984).  Agreement  among  actors  on  goals  reduces  equivocality, 
since  objectives  can  be  assumed,  and  need  not  be  constructed  through  sense-making  and  enactment 
processes. 

When  organizational  actors  disagree  as  to  CBIS-related  goals,  equivocality  will  be  high.  As 
organizational  actors  move  towards  disagreement  on  goals,  CBIS  evaluation  will  likely  move 
towards  equivocality-reducing  processes.  Consequently,  evaluation  processes  are  likely  to  assume 
the  characteristics  of  equivocality-reducing  evaluation  stated  in  Figure  2. 

Actors  may  also  disagree  as  to  whether  an  implemented  CBIS  achieves  desired  goals  (i.e., 
does  the  CBIS  provide  the  means  for  achieving  goals?).  For  example,  actors  may  agree  that 
reducing  costs  is  desirable,  but  may  disagree  as  to  whether  an  implemented  CBIS  has  achieved 
cost  savings.  Agreement  among  actors  on  means  for  achieving  goals  (e.g.,  the  system  does  reduce 
costs)  decreases  uncertainty. 

When  organizational  actors  disagree  on  means  for  achieving  CBIS-related  goals, 
uncertainty will be high. As organizational actors move towards disagreement on means, CBIS
evaluation will likely move towards objective data gathering and information processing designed
to  reduce  uncertainty  about  CBIS  impact.  Consequently,  as  organizational  actors  move  toward 
disagreement  on  means,  evaluation  processes  are  likely  to  assume  the  characteristics  of  uncertainty- 
reducing  evaluation  stated  in  Figure  1. 

Figure  3  is  a  descriptive  model  of  CBIS  evaluation  that  summarizes  the  hypothesized 
relationships  between  agreement  on  goals  and  means,  and  CBIS  evaluation  processes.  When 
agreement  on  both  goals  and  means  for  achieving  goals  is  high  (cell  1),  evaluation  is  trivial,  since 
actors  agree  both  as  to  goals,  and  as  to  whether  the  CBIS  achieves  agreed-upon  goals.  When 
agreement  on  CBIS-related  means  is  low,  but  agreement  on  goals  is  high  (cell  2),  uncertainty  will 
be  high,  and  evaluation  will  be  constructed  primarily  to  reduce  uncertainty,  resulting  in  formal, 
quantitative  evaluation.  When  agreement  on  CBIS-related  goals  is  low,  but  agreement  on  means  is 
high  (cell  3),  equivocality  will  be  high,  and  evaluation  will  be  undertaken  to  reduce  equivocality, 
resulting  in  informal,  qualitative  evaluation.  When  agreement  on  both  goals  and  means  is  low  (cell 
4),  uncertainty  and  equivocality  are  high,  and  CBIS  evaluation  is  likely  to  employ  both  uncertainty- 
reducing  and  equivocality-reducing  evaluation  processes.  In  such  cases,  evaluation  is  likely  to 
employ  both  formal  and  informal  processes,  defined  and  negotiated  roles  and  responsibilities,  and 
qualitative  and  quantitative  criteria. 
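
The hypothesized relationships can be restated as a simple decision rule. The sketch below is only an
illustrative rendering of Figure 3; the function and parameter names are hypothetical and are not part of
the original paper.

    def predicted_evaluation(goal_agreement_high: bool, means_agreement_high: bool) -> str:
        """Map agreement on CBIS-related goals and means to the evaluation
        approach hypothesized in Figure 3."""
        if goal_agreement_high and means_agreement_high:
            return "cell 1: evaluation trivial (uncertainty and equivocality both low)"
        if goal_agreement_high:
            return "cell 2: uncertainty-reducing evaluation (formal, quantitative)"
        if means_agreement_high:
            return "cell 3: equivocality-reducing evaluation (informal, qualitative)"
        return "cell 4: both uncertainty- and equivocality-reducing evaluation"

    print(predicted_evaluation(goal_agreement_high=True, means_agreement_high=False))
    # cell 2: uncertainty-reducing evaluation (formal, quantitative)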


Insert  Figure  3  about  here 

Discussion  and  Conclusion 

Most  existing  research  views  CBIS  evaluation  as  an  uncertainty-reducing  activity. 
However, such a perspective does not explain the infrequent use of formal data collection activities
evidenced  in  surveys  of  evaluation  practice.  One  explanation  why  IS  professionals  often  omit 
formal  CBIS  evaluation  and  rely  instead  on  informal,  subjective  interpretations  of  system  impact 
may  be  that  formal  CBIS  evaluation  is  of  little  value  in  reducing  equivocality. 

Ultimately,  the  goal  of  investigating  CBIS  evaluation  is  to  offer  prescriptions  for  improving 
productivity  measurement.  However,  existing  methods  for  evaluating  CBISs  largely  ignore 
evaluation  as  an  equivocality-reducing  process.  The  intent  of  this  paper  is  to  legitimize  informal, 
unplanned,  equivocality-reducing  evaluation,  by  recognizing  that  informal  evaluation  may  be  of 
greater  value  than  formal  evaluation  under  certain  circumstances.  Ideally,  legitimizing  equivocality- 
reducing  evaluation  will  lead  to  normative  and  prescriptive  evaluation  approaches  that  take 
seriously  the  subjective,  informal,  impressionistic  evaluations  considered  largely  irrelevant  and 
uninformative  by  previous  CBIS  evaluation  research. 




References 


Banker,  R.D.  and  R.J.  Kauffman,  1988,  "Strategic  Contributions  of  Information  Technology:  An 
Empirical  Study  of  ATM  Networks,"  Proceedings  of  the  Ninth  International  Conference  on 
Information  Systems.  November  30-December  3,  1988,  Minneapolis,  MN. 

Barley, S.R., 1986, "Technology as an Occasion for Structuring: Evidence from Observations of
CT Scanners and the Social Order of Radiology Departments," Administrative Science
Quarterly, 31, 78-108.

Bendig,  A.W.,  1953,  "Twenty  Questions:  An  Information  Analysis,"  Journal  of  Experimental 
Psychology,  46,  345-348. 

Blacker, F. and C. Brown, 1988, "Theory and Practice in Evaluation: The Case of the New
Information Technologies," in Bjorn-Andersen, N. and G.B. Davis (Eds.), Information
Systems Assessment: Issues and Challenges, North-Holland Publishing (Amsterdam).

Bowen,  W.,  1986,  "The  Puny  Payoff  From  Office  Computers,"  Fortune  (May  26),  20-24. 

Bozcany,  W.J.,  1983,  "Justifying  Office  Automation,"  Journal  of  Systems  Management.  34,  No. 
7,  15-19. 

Business Week, 1988, "The Productivity Paradox - A Special Report," (June 6), 100-114.

Caddell,  D.D.,  1985,  "The  Limited  Computer  Feasibility  Study:  A  Cost-saving  Approach," 
Journal  of  Accountancy  (April).  122-126. 

Daft,  R.L.,  and  R.H.  Lengel,  1986,  "Organizational  Information  Requirements,  Media  Richness 
and  Structural  Design,"  Management  Science.  32,  5  (May),  554-571. 

Daft, R.L., and N.B. Macintosh, 1981, "A Tentative Exploration into the Amount and
Equivocality of Information Processing in Organizational Work Units," Administrative
Science Quarterly, 26, 207-224.

Daft,  R.L.,  and  K.E.  Weick,  1984,  "Toward  a  Model  of  Organizations  as  Interpretation  Systems," 
Academy  of  Management  Review.  9,  2,  284-295. 

Daft, R.L., R.H. Lengel and L.K. Trevino, 1987, "Message Equivocality, Media Selection, and
Manager Performance: Implications for Information Systems," MIS Quarterly (September),
355-366.

Emery,  J.C.,  1982,  "Cost/Benefit  Analysis  of  Information  Systems,"  in  Couger,  J.D.,  Colter, 
M.A.  and  Knapp,  R.W.  (Eds.),  Advanced  System  Development/Feasibility  Techniques. 
John  Wiley  and  Sons,  New  York. 

Feldman,  M.S.  and  J.G.  March,  1981,  "Information  in  Organizations  as  Signal  and  Symbol," 
Administrative  Science  Quarterly.  26,  2  (June),  171-186. 

Ferrari,  D.,  1978,  Computer  Systems  Performance  Evaluation.  Prentice-Hall  (Englewood  Cliffs, 
NJ). 

Greiner,  L.E.,  D.P.  Leitch,  and  L.B.  Barnes,  1979,  "Putting  Judgment  Back  into  Decisions," 
from  Harvard  Business  Review,  reprinted  in  Harvard  Business  Review  on  Human 
Relations:  Classic  Advice  on  Handling  the  Manager's  Job.  Harper  and  Row,  New  York. 

Hamilton,  S.  and  N.L.  Chervany,  1981a,  "Evaluating  Information  System  Effectiveness  -  Part  1: 
Comparing  Evaluation  Approaches,"  MIS  Quarterly  (September),  55-69. 



Hamilton, S. and N.L. Chervany, 1981b, "Evaluating Information System Effectiveness - Part 2:
Comparing  Evaluator  Viewpoints,"  MIS  Quarterly  (December),  79-86. 

Hiltz,  S.R.,  and  M.  Turoff,  1981,  "The  Evolution  of  User  Behavior  in  a  Computerized 
Conferencing  System,"  Communications  of  the  ACM.  24,  739-751. 

Hirschheim, R., and S. Smithson, 1988, "A Critical Analysis of Information Systems
Evaluation," in Bjorn-Andersen, N. and G.B. Davis (Eds.), Information Systems
Assessment: Issues and Challenges, North-Holland Publishing (Amsterdam).

Hogue, J., and Watson, H., 1984, "Current Practices in the Development of Decision Support
Systems,"  Proceedings  of  the  Fifth  International  Conference  on  Information  Systems. 
Tucson,  Arizona,  November  28-30,  1984,  117-127. 

Keen,  P.W.,  1981,  "Value  Analysis:  Justifying  Decision  Support  Systems,"  MIS  Quarterly 
(March),  1-14. 

King,  J.L.  and  B.J.  Epstein,  1983,  "Assessing  Information  System  Value:  An  Experimental 
Study,"  Decision  Sciences.  14,  34-45. 

King,  J.L.  and  E.L.  Schrems,  1978,  "Cost-Benefit  Analysis  in  Information  Systems  Development 
and  Operation,"  ACM  Computing  Surveys.  (March),  19-34. 

Kling,  R.,  1987,  "Defining  the  Boundaries  of  Computing  Across  Complex  Organizations,"  in  R.J. 
Boland,  Jr.  and  R.  A.  Hirschheim  (Eds.),  Critical  Issues  in  Information  Systems 
Research.  John  Wiley  (New  York). 

Kling,  R.,  1980,  "Social  Analyses  of  Computing:  Theoretical  Perspectives  in  Recent  Empirical 
Research,"  Computing  Surveys.  12,  1,  61-110. 

Markus,  M.L.,  1984,  Systems  in  Organizations:  Bugs  and  Features.  Pitman  (Marshfield,  MA). 

Markus,  M.L.,  and  D.  Robey,  1988,  "Information  Technology  and  Organizational  Change:  Causal 
Structure  in  Theory  and  Research,"  Management  Science.  34,  583-598. 

Miller, G.A. and F.C. Frick, 1949, "Statistical Behavioristics and Sequences of Responses,"
Psychological Review, 56, 6 (November), 311-324.

Miller,  J.,  and  B.A.  Doyle,  1987,  "Measuring  the  Effectiveness  of  Computer-Based  Information 
Systems  in  the  Financial  Services  Sector,"  MIS  Quarterly  (March),  107-124. 

Pieptea, D.R., and E. Anderson, 1987, "Price and Value of Decision Support Systems," MIS
Quarterly  (December),  515-528. 

Robey,  D.,  and  M.L.  Markus,  1984,  "Rituals  in  Information  Systems  Design,"  MIS  Quarterly 
(March),  5-15. 

Rushinek,  A.,  and  S.F.  Rushinek,  1983,  "An  Evaluation  of  Mini/Micro  Systems:  An  Empirical 
Multivariant  Analysis,"  Database  (Summer),  37-47. 

Schwuchow, W., 1977, "The Economic Analysis and Evaluation of Information and
Documentation Systems," Information Processing and Management, 13, 267-272.

Simon,  H.A.,  1982,  The  Sciences  of  the  Artificial.  MIT  Press,  Cambridge,  Mass. 

Smircich,  L.,  and  C.  Stubbart,  1985,  "Strategic  Management  in  an  Enacted  World,"  Academy  of 
Management  Review.  10,  4,  724-736. 

Strassman, P.A., 1985, Information Payoff. Free Press (New York).



Taylor,  D.W.,  and  W.L.  Faust,  1952,  "Twenty  Questions:  Efficiency  in  Problem  Solving  as  a 
Function  of  Size  of  Group,"  Journal  of  Experimental  Psychology.  44,  360-368. 

Weick,  K.E.,  1979,  The  Social  Psychology  of  Organizing.  Addison-Wesley,  Reading,  Mass. 

Weick,  K.E.,  1985,  "Cosmos  vs.  Chaos:  Sense  and  Nonsense  in  Electronic  Contexts," 
Organizational  Dynamics  (Fall)  51-64. 

Zmud,  R.W.,  and  L.E.  Apple,  1989,  "Measuring  Information  Technology  Infusion,"  unpublished 
working  paper,  Florida  State  University. 


Figure   1 
Characteristics  of  Uncertainty-reducing  CBIS  Evaluation 

*  Emphasis  on  formal  processes  and  formal  evaluation,  e.g.  formal  reports,  formal 
meetings. 

*  Use  of  formal,  defined  organizational  authority  structure  to  manage  evaluation,  e.g. 
official  titles  and  roles. 

*  Emphasis on quantifiable and directly measurable aspects of the system, e.g. costs, system
usage, etc.

*  Use of objective, verifiable data sources and methods, e.g. user surveys, cost-benefit
analysis, system logs, etc.

*  Emphasis  on  planned  CBIS  impact,  e.g.  cost  savings,  administrative  convenience. 


Figure  2 
Characteristics  of  Equivocality-reducing  CBIS  Evaluation 


*  Emphasis  on  unplanned  processes  and  informal  evaluation,  e.g.  unplanned  meetings, 
informal  discussion. 

*  Use  of  informal  organizational  authority  structure  to  manage  evaluation,  e.g.  negotiation 
of  roles  and  responsibilities. 

*  Use  of  subjective  opinions,  impressionistic  "evidence"  and  experiential,  interpretive 
"methods,"  e.g.  in-depth  interviews,  systematic  reflection. 

*  Consideration  of  unplanned  CBIS  impact,  e.g.  social  relationships,  unexpected 
consequences. 


Figure  3 
A  Descriptive  Model  of  CBIS  Evaluation 


                            Agreement on Means: High        Agreement on Means: Low

Agreement on Goals: High    Cell 1                          Cell 2
                            equivocality: low               equivocality: low
                            uncertainty:  low               uncertainty:  high
                            evaluation:   trivial           evaluation:   uncertainty-reducing

Agreement on Goals: Low     Cell 3                          Cell 4
                            equivocality: high              equivocality: high
                            uncertainty:  low               uncertainty:  high
                            evaluation:   equivocality-     evaluation:   equivocality- and
                                          reducing                        uncertainty-reducing
