Set the Stage for Your Evaluation

Preparing for an evaluation is a lot like preparing for other important activities. You have to determine what information you need, lay out a plan, and make some decisions about how you will proceed.

Gather Background Information

Doing interviews, reviewing reports, searching in dusty archives—whatever it takes—read everything you can to learn about all the aspects and nuances of your project. (Escape route: This step is only necessary if you’re starting from the beginning as an evaluator. If you already have a project, initiative, or program in place, then you’ve got the information you need.)

The background information you need is often in the hands of other people—especially if you are an external evaluator, someone who is conducting an evaluation of a project that he or she did not design.

If you have not been integrally involved with the design of the project or program, you should read everything, from reports to memoranda and meeting notes. In fact, review any documentation that helps you better understand the project. And don’t forget to talk to the program designers and other key stakeholders. They’re an invaluable source of information, and they can help shape your questions, identify credible sources, and provide encouragement and critical feedback. Finally, they will render the incalculable service of helping you review and interpret your findings.

The information you pull together from these sources will help you develop a conceptual framework or logic model. Many program designers find this very useful because it lets you view at a glance—and discuss with ease—the project’s motivation and intentions, components, strategies, and desired outcomes.

Develop a Logic Model

A Logic Model is used to conceptualize a single intervention, project, initiative, or program. While it is helpful to build one to understand the relationships among the implementation steps of a project, program, or initiative and the intended outcomes and impact of those activities, it works best if all the activities fall within one initiative. For example, if the goal is to implement a new literacy program in a school, then a logic model can depict the activities that will be undertaken to get that literacy program in place.
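
To see these pieces laid out concretely, here is a minimal sketch in Python, purely for illustration, of how the components of the literacy-program example might be organized. The component names follow a common inputs–activities–outputs–outcomes–impact layout, and every entry is invented for the example rather than drawn from any particular program.

```python
# Illustrative only: a logic model for a hypothetical school literacy
# program, laid out as a simple dictionary. All entries are invented.
logic_model = {
    "inputs": ["reading coaches", "curriculum materials", "funding"],
    "activities": ["train teachers in the literacy curriculum",
                   "schedule daily reading blocks"],
    "outputs": ["teachers trained", "daily reading blocks in place"],
    "outcomes": ["improved student reading fluency"],
    "impact": ["higher schoolwide literacy rates"],
}

# Print the chain from inputs to impact so it can be read at a glance.
for component, items in logic_model.items():
    print(f"{component.title():<11} {'; '.join(items)}")
```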

For a more detailed explanation of logic models, and for guidance on developing one, go to Lab 1: Logic Model.

Design Plan

No doubt you have a lot invested in gaining approval or funding or whatever was needed to plan and implement a project or program, and you initiated it based on a belief that it would work and would benefit the target audience—children, teachers, coaches, instructional aides, parents—whoever it might be. But just getting the pieces in place, “pushing the boat off the shore,” and then waiting to see whether the wind comes up or what people say about the experience is not enough to determine its value.

How will you know whether the project or program you have initiated is making a difference?

Remember the 3 BIG questions?
These are the high-level questions you need to keep in mind, but before you can decide what data or information to collect, you have to develop an evaluation plan that will make capturing it possible. This is arguably the most critical phase of the process. Clear thinking and careful planning will save headaches later as the data begin rolling in.
Select Methods
 

Data collection methods fall into two broad approaches: quantitative and qualitative. One way to grasp the distinction is to think about the difference between numbers (quantitative) and words (qualitative). Surveys and assessments, for instance, are typically used to collect information that’s easily quantifiable, such as the number of teachers who implement strategies they’ve learned or the number of teachers who improve their score on a test of content knowledge. Interviews and focus groups, on the other hand, are ideal for gathering qualitative data, such as rich, descriptive information about how or why teachers incorporate what they’ve learned into their classrooms.

Some examples of quantitative and qualitative data collection methods:
Quantitative
  • Surveys or questionnaires
  • Statewide assessment data
  • Teacher content assessments
  • District assessments
  • School or classroom assessments, student work, or grades
  • Non-cognitive data such as attendance and discipline referrals

Qualitative
  • In-depth interviews
  • Focus groups
  • Concept maps
  • Student work
  • Teacher observations
  • Open-ended questionnaires or reflection forms
  • Teacher journals
  • Lesson plans

Quantitative methods use instruments that can be administered to a large number of respondents at once, which makes collecting data from many respondents practical.

Qualitative methods, such as interviews and observations, are often time-intensive, making them difficult to use with a very large number of respondents.

As a result of this difference, these two approaches typically offer different levels of generalizability, and this can be an important distinction depending on what you want to know.

If, for example, you want to know how teachers in a district are reacting to a series of professional development sessions they’re participating in, then conducting one or two focus groups with 4-5 teachers in each does not really give you generalizable data. In this case, you might need to administer a survey to all (or a representative sample of) the teachers.
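
As a rough sketch of what drawing that representative sample might look like, assuming the district can supply a roster of teacher IDs (the roster, seed, and sample size below are invented for illustration):

```python
import random

# Hypothetical roster of 250 teacher IDs; in practice this would come
# from district records.
roster = [f"teacher_{i:03d}" for i in range(1, 251)]

random.seed(42)   # fixed seed so the example is reproducible
sample_size = 60  # illustrative; base the real size on the precision you need
survey_sample = random.sample(roster, sample_size)  # simple random sample

print(f"Surveying {sample_size} of {len(roster)} teachers")
print(survey_sample[:5])  # first few selected IDs
```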

Examples: A survey of teachers who participated in a professional development event can yield representative and broadly generalizable information. This is extremely valuable in assessing the quality of your events, as well as the impact on teachers. On the other hand, interviews with a small group of teachers can help you to understand in detail why the professional development program was effective and how it could be made more effective. Qualitative methods can provide in-depth information to help you interpret your quantitative findings.
Analyze Data
 
You’ve developed an evaluation design. You’ve determined the best means to collect your data, and then you’ve done it—collected all of it. Now you’re at the point of making sense of all the data available to you. But how do you do it?

Analyses of both qualitative and quantitative data can yield a rich pool of information, but pulling it out of the raw data requires that you follow a few basic steps—carefully. Presenting your findings in a clear and convincing way is the final step in this phase of your evaluation.
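
To make those first steps concrete, here is a minimal sketch, with fabricated scores and codes used only for illustration, of two common starting points: summarizing quantitative pre/post assessment results and tallying the codes assigned to qualitative interview notes.

```python
from collections import Counter
from statistics import mean, stdev

# Quantitative: fabricated pre/post content-assessment scores for eight
# teachers who attended a professional development session.
pre_scores = [62, 70, 58, 75, 66, 71, 64, 69]
post_scores = [68, 74, 63, 80, 70, 77, 66, 75]
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]

print(f"mean pre:  {mean(pre_scores):.1f} (sd {stdev(pre_scores):.1f})")
print(f"mean post: {mean(post_scores):.1f} (sd {stdev(post_scores):.1f})")
print(f"mean gain: {mean(gains):.1f}")

# Qualitative: fabricated codes applied to interview transcripts. A simple
# tally shows which themes come up most often and deserve a closer read.
codes = ["collaboration", "planning time", "collaboration",
         "student engagement", "planning time", "collaboration"]
print(Counter(codes).most_common())
```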

What will you do with the information?

This is where the whole evaluation process was leading from the beginning. What decisions can you make based on the data collected? What actions can you—should you—take? This is where you decide what changes you and others want to make to improve the program or what steps to take to initiate a new one. Recall the questions you identified in the beginning.
  • How did you answer them?
  • What did you learn?
  • How will you use this information?

What did you find?

In some ways, this is the most satisfying stage of your evaluation. After preparing for and designing your evaluation, after collecting and analyzing your data, you can now tell the world (or at least the people important to your project) what you found.

Report

A good evaluation report is clear and concise, provides adequate evidence for claims, and gives enough explanation to make sure the reader understands your interpretation of the data. It is sometimes tempting to include too much information. When you have collected stacks and stacks of surveys and reams of field notes, it is difficult to know “when to say when.” You’ve become invested in each of your data tables, but if the data don’t show anything, leave them out. Be clear about how the evaluation was conducted as well. Everyone involved in developing the school improvement plan, or affected by it, will be interested in both the methods and the outcomes of the evaluation.

Taking action means implementing specific strategies to accomplish your goals—“where the rubber meets the road,” as they say. Develop an action plan based on the data collected. Changes or improvements may focus on the program’s content, format, delivery, staffing, follow-up strategies, activities, setting, resources, and so on. It all depends on what your data tell you. And not all decisions need to be made at one point in time: collecting data to make course corrections should occur in an iterative way, one change leading to another after the results of the first change are assessed.

Tips on Making Decisions and Taking Actions
  • Consider whether the findings are valid for your program. Validate the data by looking for support for one data set in another. Do the findings clearly apply to your situation?
  • Determine what actions or decisions are suggested by the findings. Focus on areas to address, but don’t try to address everything at once.
  • Determine whether possible actions are feasible. The data may suggest changes that are not really possible for you to make—given resources, time, or other constraints.
  • You may need to do additional research or information-gathering on particular strategies or program adjustments. Don’t jump into something without knowing enough to judge whether it is likely to work for your circumstances.
  • Determine how you will know whether the changes/improvements are working. Put a monitoring plan in place that will allow you to watch implementation carefully. Don’t forge ahead without examining how well things are going as you proceed.