EVALUATION MANAGEMENT REPORT 
LESSONS LEARNED 1983-1986 



EVALUATION TECHNOLOGIES INCORPORATED 

2020 N. 14th Street, Sixth Floor • Arlington, Virginia 22201 • (703)525-5818 




NEA DC 85-5 



EVALUATION MANAGEMENT REPORT 
LESSONS LEARNED 1983-1986 



Prepared for 

National Endowment for the Arts 

Harold Horowitz, Director of Research 



Prepared by 

Evaluation Technologies Incorporated 

Barbara J. Waite, Project Manager 



May 1986 



I. INTRODUCTION 



A. PROJECT HISTORY 

Evaluation as a management support function was institutionalized at the
National Endowment for the Arts in October 1971 with the establishment of the
Endowment's Evaluation Office. By 1979 three types of evaluation were in
place. "In the past, evaluation has been used largely to pre-screen applicants
so that panels can make grant recommendations based on the applicant's quality
and potential ability to achieve the objectives specified in the guidelines.
The second most frequent type of evaluation undertaken has been an assessment
of the performance of the grantee after receipt of the grant award. This is
known as 'grantee specific' evaluation. This gives the Endowment an idea of
how effectively the grantee is using the grant. The third type of evaluation
is done to measure program category effectiveness. In other words, it provides
information about whether the funding category is actually meeting its
objectives and contributing to the attainment of the Endowment's overall
goals." 1 Also in 1979, it was anticipated that "As operational planning is
undertaken, the programs will be identifying measurable and/or observable
objectives. Once done, evaluation can be pegged to these. Assessment methods
appropriate to arts support can be improved as a result." 2

The Office of Evaluation operationalized the Endowment's position that "Evalu-
ation, therefore, is important both as an implementation tool and as an aid to
planning." 3 By 1980, however, the evaluation function had been subsumed, with
decreased emphasis, under the overall responsibilities of the Research
Division.



1 General Plan, 1980-1984. National Endowment for the Arts, Office of
Policy and Planning, April 1979, page 146.

2 Ibid., page 147.

3 Ibid., page 147.






With a change in leadership, the Endowment responded to the New Federalism by
placing increased emphasis on improved management procedures and
accountability. Policy changes were implemented to reflect this new emphasis;
for example, the submission of Final Descriptive and Statistical Reports is
now a formal prerequisite for consideration for award of subsequent grants. In
1982, the Research Division issued Program Solicitation 82-1, Technical
Assistance for a Pilot Program of Evaluation Studies, which acknowledged the
Endowment's intention "to resume, by means of this pilot effort, the support
of program evaluation studies... It is expected that these studies will
provide the necessary experience to develop program evaluation studies into a
continuing activity."

Three Endowment offices volunteered to participate in the pilot study and 
identified target subject areas: 

o Office of the Deputy Chairman for Management: Analysis and Use of 
Final Descriptive Reports from Grantees 

o Literature Program: Literary Magazines and Small Presses Category 

o Design Arts Program: Design Demonstration Category 

Evaluation Technologies Incorporated (ETI) was awarded a contract to provide 
evaluation technical assistance in March 1983. 

B. PURPOSE OF THIS REPORT 

The primary purposes of this report are to chronicle the evaluation design and
implementation activities performed by ETI over the course of two contracts
covering the periods March 1983 - August 1984 and June 1985 - May 1986; to
describe the evaluation processes applied; to summarize the effects of the
effort; and to offer ETI's insights on the potential for evaluation
applications at the Endowment.






II. CONTRACT REQUIREMENTS 



A. ENDOWMENT GOALS 

The implicit goals of the contracted assignment were to test the application 
of evaluation, e.g., 

o Is evaluation feasible for the Endowment given the highly subjective
and non-quantifiable nature of art?

o Can program activities be defined within an evaluation framework 
without infringing upon or threatening subjective and expert panel 
judgements? 

o Can evaluation activities be performed by Endowment staff within 
the context of their current grant-cycle responsibilities? 

o Can evaluation results be a useful tool for managers and panels? 

The intent of this report is to support a resounding yes to each of these 
questions. 

B. CONTRACT OBJECTIVES 

The initial contract effort was directed toward defining and planning
evaluation studies: through conferences with program staff, ETI was to assess
specific needs and offer appropriate methods for formulating approaches
tailored to meet those needs. ETI was tasked with the following:

o Set appropriate quantifiable goals 

o Find methods to economically collect data to measure achievements 
toward reaching those goals 






o Develop procedures for analyzing data and integrating the results 
into the decision-making process 

o Present project results for agency implementation. 

ETI understood that, in addition to developing program evaluation study
designs, we were to work closely with program staff members throughout the
process so as to foster an internal staff evaluation design capability; that
is, to show Endowment staff how to design evaluation studies through
demonstration and by encouraging their participation throughout the process.
Furthermore, ETI was to design the studies and provide guidance on their
implementation so as to allow the research and analysis to be performed by
in-house staff.

Following a one-year period of no internal action on the evaluation plans
prepared by ETI, ETI was again contracted to implement the evaluation studies
for each program, with the programs' assistance in the collection of grantee
data. ETI was requested to collect, process, and analyze evaluation data, and
to provide written reports on findings. In the process, ETI also established
automated data bases for the programs and recommended revised program and
grantee data collection schemes which would enhance future evaluation efforts.
The automated systems were prepared so as to allow continued, expanded use by
program staff. ETI, in essence, developed historical data bases which allow
for easy updates as grant awards are made each year, and thus created the
internal ability to conduct on-going program evaluation.

C. PROGRAM PARTICIPANTS 

Initially, three Endowment offices volunteered to participate in the project: 

o Office of the Deputy Chairman for Management 
o Literature Program 
o Design Arts Program. 



Later in the contract period, a fourth office was included: the Inter-Arts
Program was substituted when the Deputy Chairman for Management left the
agency.



III. EVALUATION DESIGN 



A. GOAL-BASED EVALUATION DESIGN PROCESS 

The procedure ETI followed in defining evaluation objectives and developing 
the evaluation plans for each office was fairly standard, and is summarized 
below: 

o Review of program literature regarding current activities, previous
studies, program documentation, and grant application guidelines

o Repeated conferences with program staff to discuss program/category
activities and possible evaluation issues for study, to define
evaluation objectives, and to identify existing and potential data
sources

o Preparation of draft and final goal statement matrices; staff reviews
and input

o Preparation of evaluation strategy papers; staff reviews and input

o Development and staff reviews of draft and final evaluation plans

o Submission of detailed work plans, instructing program personnel on
the implementation of the evaluation plans.

ETI began its work with each program by conducting a series of conferences 
with Endowment staff in each assigned office. Our first task was to discuss 
the differences between evaluation and research. We then, together, explored 
various evaluation issues of importance to the planning and development of the 
office, including how the evaluation findings would be used and by whom. 

With a clear definition of an evaluation question, ETI then prepared a goals
matrix. The purpose of the matrix was to visually portray the relationships
between Endowment, program, and category goals and to further illustrate how
the program category activities can be defined for evaluation purposes. The
matrices contained the following:

o Endowment goals 

o Program goals 

o Category goals 

o Goal appraisal factors 

o Indicators/measures (pre-grant and post-grant) 

o Data sources 

o Data analysis plans 

o Hypotheses and assumptions 
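
A row of such a goals matrix can be thought of as a single structured record
linking goals to indicators, data sources, and analysis plans. The following
sketch is purely illustrative; the field names and example content are
assumptions introduced here, not ETI's actual column headings.

    # Illustrative sketch of one row of a goals statement matrix.
    # Field names and example content are assumed, not ETI's actual headings.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class GoalsMatrixRow:
        endowment_goal: str               # agency-level goal the category supports
        program_goal: str                 # program-level goal
        category_goal: str                # funding-category goal under evaluation
        appraisal_factors: List[str]      # factors used to judge goal achievement
        pre_grant_indicators: List[str]   # measures taken before the award
        post_grant_indicators: List[str]  # measures taken after the award
        data_sources: List[str]           # files, reports, interviews, etc.
        analysis_plan: str                # how the indicators will be compared
        assumptions: List[str] = field(default_factory=list)

    # Hypothetical example row:
    row = GoalsMatrixRow(
        endowment_goal="Broaden the availability of the arts",
        program_goal="Strengthen arts service organizations",
        category_goal="Support innovative services to artists",
        appraisal_factors=["accessibility", "cost to arts users"],
        pre_grant_indicators=["projected users served", "projected fee schedule"],
        post_grant_indicators=["actual users served", "actual fee schedule"],
        data_sources=["grant application", "final descriptive report"],
        analysis_plan="Compare projected and actual indicators for each grantee",
    )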

Upon program staff approval of the matrix, ETI prepared an evaluation strategy 
paper which further defined the proposed evaluation framework by discussing: 

o Evaluation focus, the category goals and evaluation objectives to be
addressed

o Uses of evaluation information, identification of the audiences and
uses of evaluation outcomes

o Plans for the evaluation design, description of the goal-based
approach.

The evaluation strategy paper outlined the type of evaluation (i.e., formative
or summative) to be designed, the types of information to be generated (e.g.,
project achievement at the individual grantee level), the measurement points
(e.g., comparison of pre- and post-grant indicators for each project), and the
anticipated uses of the evaluation findings (e.g., as input for short-range
program management decisions and panel/funding/policy decisions; program
advocacy).

Upon staff review and concurrence with the proposed evaluation strategy, ETI 
proceeded to prepare the evaluation plan. The evaluation plans included the 
following basic information: 




o Background information on the identification and definition of the 
evaluation objectives 

o An overview of the evaluation framework and methodology 

o Detailed data collection and analysis plans for each objective 

o An outline for preparation of a final evaluation report. 

ETI also prepared separate work plans as an accompaniment to the evaluation
plan. The work plan described the nature, scope, and sequence of tasks
involved in the implementation of the evaluation plan, including step-by-step
procedures and potential pitfalls.

B. DESIGN PROCESS OUTCOMES 

ETI's work with each of the participating offices was thoroughly documented,
with each office receiving no fewer than six complete documents as described
above. In addition to the production of those materials, and the
individualized technical assistance provided throughout their development,
certain other benefits were realized. ETI concluded its work with the
following insights:

o It was demonstrated to each of the participating offices that program
activity which is often considered "artistic and non-quantifiable"
can, in fact, be defined within an evaluation framework without
infringing upon or threatening subjective and expert panel judgements.

o Participating programs found the evaluation design process
particularly useful in defining program information needs and
purposes, and in identifying information sources and gaps.

o Participating programs suggested that evaluation data will benefit
their programs in a number of ways, including:




-- Program management: monitoring program activities, developing
funding priorities, providing guidance to applicants and grantees

-- Assistance to panels: providing information on the state of the
field, information on indicators of success, assistance in
determining funding priorities and performance standards, and
assistance in reviews of program goals, policies, etc.

-- Advocacy: identification of trends in the field and of outstanding
projects, and general information on how the category is doing
overall.

o Program and category goals are frequently not expressed in measurable
terms, may not be applied during the application review process, and
may not be related to funding priorities. Related performance
expectations or standards have not been consistently established.

For example, ETI found that, frequently, Endowment program goal
statements incorporate words such as "innovative" and "of highest
quality." For evaluation purposes, terms such as these must either be
reworded or defined by quantitative performance indicators to allow
meaningful measurement of goal achievement. As a case in point, the
Services to the Arts Category of the Inter-Arts Program expressed one
objective in terms of providing innovative business practices to
artists and arts organizations. "Innovative" was defined, for
evaluation planning purposes, in terms of the accessibility of the
service to the arts community, its reduced cost to arts users, and/or
its primary focus on the unique characteristics and needs of the arts
users. (A hypothetical sketch of this kind of restatement follows this
list.)

o Currently, there is little Endowment-provided incentive for programs
to evaluate goal achievement.

That is, there currently exists no Endowment-wide policy regarding
program evaluation. Existing evaluation efforts reflect individual
program and even personal desires, efforts, and needs for evaluative
feedback and information. And, in fact, no formal incentive exists to
examine programs' histories of achievements when planning for future
thrusts and activities, as for example in preparing the
Congressionally-requested five-year plan.
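
To make the restatement of qualitative terms concrete, a goal word such as
"innovative" can be recast as a small set of yes/no or numeric indicators and
checked against grantee data. The sketch below illustrates the idea only; the
indicator names, data fields, and three-indicator tally are assumptions, not
the scheme actually used in the Inter-Arts evaluation.

    # Hypothetical sketch: recasting the qualitative term "innovative" as
    # measurable indicators (accessibility, reduced cost, focus on arts users).
    # Field names and values are illustrative assumptions only.

    def innovativeness_indicators(service):
        """Return simple yes/no indicators derived from grantee data."""
        return {
            "accessible_to_arts_community": service["open_to_all_artists"],
            "reduced_cost_to_arts_users": service["fee"] < service["market_fee"],
            "focused_on_arts_user_needs": service["designed_for_artists"],
        }

    service = {  # hypothetical grantee service
        "open_to_all_artists": True,
        "fee": 40,
        "market_fee": 100,
        "designed_for_artists": True,
    }

    indicators = innovativeness_indicators(service)
    print(indicators, "-> meets", sum(indicators.values()), "of 3 indicators")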






IV. IMPLEMENTATION OF EVALUATION STUDIES 



A. THE IMPLEMENTATION PROCESS 

The procedure used to implement the evaluation design was relatively
straightforward. It entailed the development of data recording sheets; data
collection, including reviews of the grant files, meetings with Endowment
personnel, and telephone follow-ups for missing information; data analysis;
and finally, report preparation.

The development of the data recording sheets was based on the goals statement 
matrix which had been generated during the evaluation design process. The 
data recording sheets were developed in a spreadsheet format to facilitate 
data entry and subsequent computer analysis, and included both pre-grant and 
post-grant information. 
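
In practice, a data recording sheet of this kind is a flat table with one row
per grant and paired pre-grant/post-grant columns, so it can be keyed directly
into a spreadsheet or read by analysis software. The sketch below is a minimal
illustration; the column names and sample row are assumptions, not the sheets
ETI actually used.

    # Minimal sketch of a data recording sheet in spreadsheet (CSV) form.
    # Column names and the sample row are illustrative assumptions.
    import csv

    COLUMNS = [
        "grant_number", "grant_year", "grantee", "award_amount",
        "pre_budget", "post_budget",      # projected vs. actual project budget
        "pre_audience", "post_audience",  # projected vs. actual audience served
        "final_report_received",          # yes/no
    ]

    with open("data_recording_sheet.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(COLUMNS)
        # One row per grant file reviewed (hypothetical entry):
        writer.writerow(["85-123", 1985, "Example Arts Center", 10000,
                         25000, 27500, 1200, 950, "yes"])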

Data collection was a lengthy process. Endowment programs maintain extensive
files on their grantees, which presuppose a well-developed vocabulary of terms
and usages particular to the disciplines and the specific Endowment programs.
Much of the information contained in the files was extraneous to the purposes
of the evaluation. There was a pronounced learning curve with each program's
files: comprehending the arrangement and composition of the files, the
language usage, and the location (or probable location) of information to be
compiled on the data recording sheets.

The grant files for one program represented unique events and were grouped by 
chronological year. The files for another program could also have been 
grouped chronologically, but, because most of these grantees represented 
organizations which had been funded for several years successively, were 
instead arranged by those funded organizations. This latter arrangement 
permitted a somewhat more historical approach to be taken. 

As the files were reviewed, it became apparent that the comparison built into
the evaluation design (i.e., that of pre-grant and post-grant) could not be
made with the information contained in the grant files. Either the
dissimilarities between the Endowment's pre-grant and post-grant information
requirements were too great, or else the evaluation design posed specific
questions/concerns which had not previously been addressed by the programs
evaluated.

To offset these gaps in information found during the review of the files,
telephone interviews with grantees were undertaken to collect missing
information. It was decided that the likelihood of response would be greater
if these were either conducted, or at least initiated, by Endowment personnel
rather than by the contractor.

Many of the grantees contacted used this opportunity to voice concerns about
Endowment procedures, such as the time lag between applying for a grant and
being awarded one. This lag adds to the uncertainty of planning for arts
service organizations, as funding situations may change drastically over the
intervening period.

Perhaps the most revealing aspect of the data collection phase was the scope
and magnitude of the files, and the corresponding magnitude of the learning
curve.

The data collection efforts were significantly enhanced by the extensive input
and assistance from the Endowment personnel assigned to the evaluation and
also from their colleagues. For example, for the Design Arts evaluation,
information collection took place primarily at the Endowment: space was
provided for the evaluator, as were various support services. A program
professional was directly assigned to the evaluation effort and undertook all
of the telephone interviews with the grantees in order to collect missing
information. Close coordination with regard to the types of data needed
forestalled much confusion as to the specifics requested. In addition, this
individual was dedicated to the evaluation effort, having been hired in
support of this project. This luxury facilitated a close collaboration,
whereby questions on the references used in data collection could be clarified
by the contractor, and questions on outcomes and procedures used in
competitions could be readily explained by that individual.






The analysis plans were refined during the course of the implementation. Not
only were significant numbers of responses missing, but program priorities
(with regard to some of the proposed analyses) also shifted. Realignments were
therefore necessary and consisted, for the most part, of excising most of the
comparative studies between pre-grant and post-grant data, and of shifting the
focus of the study toward context and process rather than conclusions.

B. IMPLEMENTATION PROCESS OUTCOMES 

There were three principal outcomes of the implementation phase. The first
concerns the statistical utility of the analyses, the second concerns
automation of the data, and the third concerns the focus of the evaluation
reports themselves.

Statistical utility depends, for these reports, very much on where one sits.
In terms of strict research, none of the analyses performed is truly
analytical beyond the basic descriptive mode. The statistics used were of an
exceptionally basic nature, and because of missing data and shifted
priorities, most of the second-level analyses planned in the evaluation design
could not be carried out. In terms of application, however, the basic
descriptive statistics used present, probably for the first time, overviews of
some of the grant programs, broken out by component parts. To put it into
artistic terms, what these reports provide is the preliminary sketch for a
painting: the composition and intent are evident, but the fullness and
richness of the entire painting are not yet visible. Given different
questions, and/or different time parameters with regard to collecting missing
data, it is possible that such a painting might in time be developed. But for
the immediate purposes of the Endowment, it is more useful to have the sketch,
as it is at that phase of development that changes can be made most easily.
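
For readers unfamiliar with this descriptive level of analysis, the sketch
below shows the kind of counts and averages, broken out by component parts of
a program, that such overviews consist of. The categories, field names, and
figures are invented for illustration and do not come from the Endowment's
data.

    # Illustrative sketch of descriptive statistics broken out by component
    # parts (categories) of a program.  All data shown here are invented.
    from collections import defaultdict
    from statistics import mean

    grants = [
        {"category": "Design Demonstration", "award": 15000, "year": 1984},
        {"category": "Design Demonstration", "award": 20000, "year": 1985},
        {"category": "Services to the Arts", "award": 8000, "year": 1985},
    ]

    by_category = defaultdict(list)
    for g in grants:
        by_category[g["category"]].append(g["award"])

    for category, awards in sorted(by_category.items()):
        print(f"{category}: {len(awards)} grants, "
              f"average award ${mean(awards):,.0f}, total ${sum(awards):,}")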

There are two additional statistical constraints: the size of the populations
analyzed, and the audience for the evaluation report. Most of the data files
created for evaluation purposes were of a size sufficient for most statistical
analyses. Most of these data files were, however, subsequently divided into
smaller units, which rendered much statistical analysis inappropriate due to
constraints of size. In addition, the people at the Endowment who read the
reports are not analysts. They are not statisticians. They are, for the most
part, individuals with particular talents in the field of the arts or of arts
service. Numbers and statistical analyses have much less meaning for them than
narrative descriptions, and qualitative analyses are preferred almost to the
exclusion of quantitative ones.

A second outcome stems from the mechanics of automation. None of the
information was on computers of any sort. Records are kept, and kept well, in
vertical files. This evaluation effort represented one of the first attempts
to format and analyze the data. It is clear that some type of data base
management package would be of particular utility to the individual programs
in terms of tracking the flow of information, monitoring project performance,
and maintaining an institutional memory that is not dependent on any one
individual. It creates an accessibility and immediacy not present with
vertical files alone, and creates the ability to retrieve critical information
and issues by and for Endowment staff, for the benefit of students of the
arts, and for accountability to other government entities.
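
As a rough illustration of what even a modest data base management approach
could add over vertical files, the sketch below shows grant records being
stored and retrieved programmatically. SQLite and the table layout are
assumptions chosen for the example; the report does not name a particular
package.

    # Rough sketch of an automated grant-record store, for illustration only.
    # SQLite and the column layout are assumptions; no package is named here.
    import sqlite3

    con = sqlite3.connect("grants.db")
    con.execute("""
        CREATE TABLE IF NOT EXISTS grants (
            grant_number TEXT PRIMARY KEY,
            year         INTEGER,
            program      TEXT,
            category     TEXT,
            grantee      TEXT,
            award        INTEGER,
            final_report_received TEXT
        )
    """)
    con.execute(
        "INSERT OR REPLACE INTO grants VALUES (?, ?, ?, ?, ?, ?, ?)",
        ("85-123", 1985, "Design Arts", "Design Demonstration",
         "Example Arts Center", 15000, "yes"),
    )
    con.commit()

    # Retrieval of critical information without pulling the paper file:
    for grantee, award in con.execute(
        "SELECT grantee, award FROM grants WHERE category = ? AND year = ?",
        ("Design Demonstration", 1985),
    ):
        print(grantee, award)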

The third and final outcome concerned the focus of the reports generated for
the Endowment. For example, for Design Arts personnel, the evaluation report
has enhanced their ability to respond to information requests by clarifying
the nature and scope of tasks required to operate and manage a design
competition. The presentation of a holistic overview of the program for the
last several years has also greatly facilitated ongoing training workshops in
competition design and management. Feedback from Design Arts personnel
concerning the usefulness of the report includes its application as a
cross-reference tool for a recent publication on design competitions. The
evaluation is credited with causing the program staff to establish more
complete files for those competitions already held, and to establish
particular criteria for monitoring ongoing and/or future ones. In addition,
summaries of the report will be provided in the panelists' packages for this
year's examination of grant applications. Design Arts personnel have not only
their own, inside perspective on the processes underlying a successful or
unsuccessful competition, but now also have an outside, relatively unbiased
point of view of the organization and controls which create that process. As
part of the project monitoring function, the report has provided the staff
with more insight on what types of questions need to be asked at the various
phases of the competition, and what particular efforts could best be
encouraged.

A parallel concern also emerged with the focus of the reports, and refers back
to the overall statistical utility of these documents. As most Endowment
personnel have little or no familiarity with statistics, reports that are
strongly based in statistics must be written to minimize the use of jargon, to
present the findings in standard English, and to relate the analytical
findings as closely as possible to the actual cases examined in the grant
files. It is not so much a matter of literary style as that the presentation
of the data must be as informal as possible, with maximum use of description
of what the various results mean, and far less emphasis on the results
themselves. In essence, the higher levels of analysis of the data are
generated in the writing of these reports: synthesizing the data into a useful
format for the non-statistical reader forces the development of hypotheses and
trends into a coherent picture.






V. SUMMARY 



Is evaluation feasible for the Endowment given the highly subjective and
non-quantifiable nature of art?

Over a decade ago, the Endowment itself recognized the value of
self-evaluation as a management and planning tool. It was also recognized that
evaluation was routinely, yet informally, applied throughout the granting
process. Through the two referenced contracts with Evaluation Technologies
Incorporated (ETI), Endowment programs participating in this pilot effort were
shown how to formalize that effort for greater utility.

Discussions with Endowment staff early in the evaluation design phase
highlighted the fact that programs have identified many questions about the
effects of their work, their impact on the field and on their constituents,
and other more specific issues of importance to program planning, development,
and management. How are we doing? What have we learned? Are we being
responsive to or influencing changes in the arts field? Evaluation research
can contribute to internal learning and to fostering the Endowment's valuable
public relationships.

This pilot evaluation effort has demonstrated that, by altering and refining
the information management systems employed by each program (through, for
example, the use of more targeted information collection instruments and
automation of the data files), these types of questions can readily be
answered. The evaluation effort identified the existing and potential sources
of information and demonstrated their usefulness in evaluation and information
research.

Can program activities be defined within an evaluation framework without 
infringing upon or threatening subjective and expert panel judgements? 

The types of evaluation studies requested by the programs we worked with did
not put the Endowment in the position of judging the quality of the artistic
endeavors pursued by grantees, but rather measured quality in terms of the
arts community's response to the grant project. The evaluation designs also
examined the grantees' abilities to effectively manage the project and the
grant funds. As mentioned earlier, Endowment goals regarding such issues as
"innovativeness" and work of "highest quality" were easily interpreted within
the context of program category activities.

In fact, program staff members found the task of clearly defining and matching
program/category goals and the information required to assess goal achievement
a very useful exercise. We suggest that all programs undertake this type of
activity whether or not a full evaluation is to be performed. It enhances the
program's understanding of what information needs to be collected from
applicants and grantees, and for what purpose(s). Grant application forms,
supplemental information sheets, site visit records, and interim and final
descriptive and financial report requirements can then be revised with the
knowledge that only necessary information is being collected, while ensuring
that critical information is not left uncaptured. These activities will serve
to reduce the reporting burden on grantees and the information management
activities of program staff. We further recommend that the same forms be used
before and after the grant project is conducted to allow direct comparisons of
planned and actual activities and expenses.

Can evaluation activities be performed by Endowment staff within the context 
of their current grant-cycle responsibilities? 

The initial evaluation design and implementation tasks performed by ETI were
unusually time-consuming due to a number of anticipated factors which relate
to the start-up of any new project, including the initial participant learning
curve and the establishment of working relationships. More specifically, in
this case, efforts to reconcile years-old grant records and frequent changes
in grantee reporting requirements with the information requirements of the
evaluation design also created some delays in the evaluation implementation
phase.

Program personnel generally possess backgrounds and expertise in the arts, not
in management sciences or research methodologies. However, ETI consistently
found Endowment staff members receptive to and interested in expanding their
capabilities in these disciplines, although they at times appeared somewhat
intimidated by their new skills. It is clearly evident that all Endowment
staff could, with minimal coaching, perform evaluation design and
implementation functions at the same level at which ETI has performed.

Furthermore, once the historical data bases have been compiled, as they now
have been in Design Arts and Inter-Arts, the task of maintaining them can
become routine, and will even decrease the amount of time it currently takes
for program specialists to file, maintain, and retrieve specific records.
Their ability to respond to panelist and grant applicant inquiries will also
be greatly enhanced.

What is missing is the incentive for programs to change their approach to
information management -- the personal interest among program specialists is
there and the capability to automate grant records exists, but there is no
mandate or management initiative to do so. It is therefore recommended that an
in-house evaluation technical assistance capability be established, or, at the
very least, a how-to manual be prepared for Endowment-wide distribution.
Additionally, training on evaluation methods should be offered for program
personnel.

Can evaluation results be a useful tool for managers and panelists? 

As program budgets are curtailed, the importance of truly evaluating program 
performance is heightened. Emphasis should be placed on identifying areas 
where cuts can be made while maintaining optimum program effectiveness. 

In a more narrow focus, evaluation outcomes can be used for program-specific 
purposes, including: 

o Definition of information requirements 

o Identification of grantee-specific accomplishments and problems 
for consideration by panels tasked with making funding 
recommendations 






o Program planning, policy making, and advocacy 

o Preparation of a lessons learned compendium 

o Preparation of best practices handbooks and seminars for grantees 
and other constituents. 

As outlined in Chapters III and IV, it is recommended that the Endowment
expand its use of evaluation techniques and, perhaps more importantly,
consider the implementation of more targeted and less time-consuming
information collection and management systems within each program office.


