Set the Stage for Your Evaluation


Preparing for an evaluation is a lot like preparing for other important activities. You have to determine what information you need, lay out a plan, and make some decisions about how you will proceed.

Gather Background Information

Doing interviews, reviewing reports, searching in dusty archives—whatever it takes—read everything you can to learn about all the aspects and nuances of your project. (Escape route: this step is only necessary if you’re starting from the beginning as an evaluator. If you already have a project, initiative, or program in place, then you’ve got the information you need.)

The background information you need is often in the hands of other people—especially if you are an external evaluator—someone who is conducting an evaluation of a project that he or she did not design.

If you have not been integrally involved with the design of the project or program, you should read everything, from reports to memoranda and meeting notes. In fact, review any documentation that helps you better understand the project. And don’t forget to talk to the program designers and other key stakeholders. They’re an invaluable source of information that can help you better understand the project. They will also help shape questions, identify credible sources, and provide encouragement and critical feedback. Finally, they will render the incalculable service of helping you review and interpret your findings.

The information you pull together from these sources will help you develop a conceptual framework or logic model. Many program designers find this very useful, because you can then view at a glance—and discuss with ease—the project’s motivation and intentions, components, strategies, and desired outcomes.

Develop a Logic Model

A Logic Model is used to conceptualize a single intervention, project, initiative, or program. While it is helpful to build one to understand the relationships among the implementation steps of a project, program, or initiative and the intended outcomes and impact of those activities, it works best if all the activities fall within one initiative. For example, if the goal is to implement a new literacy program in a school, then a logic model can depict the activities that will be undertaken to get that literacy program in place.
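The four-stage structure of a logic model can be made concrete in code. The sketch below lays out the literacy-program example as a simple mapping from inputs through activities to outcomes; every entry is an illustrative assumption, not part of any real plan, and a real model would come from your documents and stakeholders.

```python
# A minimal logic-model sketch for a hypothetical school literacy program.
# All entries are illustrative; a real model comes from your stakeholders.
logic_model = {
    "inputs": ["literacy coach", "professional-development budget", "staff time"],
    "activities": ["select a provider", "run PD sessions", "form study groups"],
    "outputs": ["teachers trained", "study groups meeting monthly"],
    "outcomes": ["improved classroom literacy practice", "higher reading scores"],
}

def describe(model):
    """Print the model one stage per line, in causal order."""
    for stage in ("inputs", "activities", "outputs", "outcomes"):
        print(f"{stage}: {', '.join(model[stage])}")

describe(logic_model)
```

Laying the stages out this way makes it easy to check, at a glance, that every activity traces back to an input and forward to an intended outcome.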

Design the Evaluation Plan

Identify Evaluation Questions for each Activity or Project Component

A successful evaluation is built on solid evaluation questions. They must address all or most aspects of your goals and implementation steps, and they must be clear and measurable. If you have developed a logic model, use it to frame your questions—think about what you want to ask about the inputs, the activities, and the outcomes.

The first step in the design of an effective evaluation is identifying key questions that will guide the evaluation.

Ask yourself: What do you want to know?

This sounds obvious, but it is critical to ask yourself this question throughout the entire evaluation process. It is easy to get off course when developing your data collection instruments or analyzing the large amount of data you will have acquired by the end. Asking yourself “What do you want to know?” is the best way to stay focused on the critical issues. A set of clearly specified evaluation questions will guide you through this process as well.

So, what is it that you want to know? For most situations, the THREE BIG QUESTIONS we’ve been talking about can help you determine the sub-questions that are directly related to your plan.

Are you doing what you said you would do?

These are examples of evaluation questions—you will have to write ones that are appropriate for the goals and intended outcomes of your project or program.

  1. Did the committee complete their research on potential providers of the professional development in literacy?
  2. Which grade level teachers participated in the professional development? In which and how many sessions did they participate?
  3. Have the suggested changes in the schedule been approved and implemented?
  4. Are the professional study groups meeting as planned?
  5. Has the finance committee reviewed, revised and/or approved recommended reallocations?
  6. Has the curriculum resource specialist investigated the costs of the new program?
  7. Has a curriculum committee been formed and convened to work on revising the math curriculum?
  8. Have the new assessments to be used for ongoing monitoring been reviewed and distributed?
  9. Are students participating in the council meetings?
  10. Have students provided input to the “Plan for Better Learning”?

How well is it (the project, program, or initiative) going?

Again, the following are examples of evaluation questions—you will design your own.

  1. What did teachers learn from the professional development?
  2. Are teachers changing/improving their practice as a result of participating in the professional development?
  3. Are people able to accommodate the schedule changes? What specific reactions have staff or students had to the change in schedule?
  4. What impact do teachers report from participating in study groups?
  5. How far has the curriculum committee gotten in revising the math curriculum?
  6. Is the program being implemented effectively? As intended?
  7. Are the changes that the committee suggested being implemented?
  8. Are students more engaged in learning?
  9. Is the quality of instruction improving?
  10. How have the meetings with parents that you planned been going?

Is what you’re doing making a difference or having an impact?

The sub-questions under this major question would be considered summative evaluation questions. Examples might be:

  1. Have teachers changed their classroom practices based on the professional development for the new literacy program? Which teachers? To what degree have they changed their practices?
  2. Has student learning improved as a result of participating in the new math program?
  3. Has the school climate changed to be more conducive to and supportive of academic accomplishment?
  4. Have the individuals responsible changed what they are doing in response to the recommendations of the parent committee?
  5. Do classrooms reflect an emphasis on literacy?
  6. What evidence do we have that student study habits have improved?

In selecting and prioritizing evaluation questions, several criteria should be considered:

  • Who wants to know this information?
  • Will the information confirm or challenge your hunches?
  • Who will care about the information that is collected?
  • Are there sufficient resources to collect the information needed to answer the questions?
  • Can they be addressed in the time you have? Making the evaluation doable is the first step toward getting it done.

There are many, many potential evaluation questions to consider. You will no doubt want to get more specific than the questions above. It’s a good idea to hold a brainstorming session with project stakeholders and develop a solid list of evaluation questions. It’s better to start with a complete list and pare it down than to realize you missed something halfway through your data collection!
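One lightweight way to pare down a brainstormed list is to score each candidate question against the criteria above—audience interest, resources, and time—and rank by total. The questions, criteria names, and scores below are all invented for illustration; the 0–2 scale is an assumption, not a prescription.

```python
# Hypothetical scoring of candidate evaluation questions against three of
# the criteria above: audience interest, available resources, and time.
# Each criterion is scored 0-2; all values are illustrative.
candidates = [
    ("Did teachers attend the PD sessions?",
     {"interest": 2, "resources": 2, "time": 2}),
    ("Has student learning improved?",
     {"interest": 2, "resources": 1, "time": 1}),
    ("Do classrooms reflect an emphasis on literacy?",
     {"interest": 1, "resources": 2, "time": 2}),
]

# Rank questions by total score, highest first.
ranked = sorted(candidates, key=lambda qc: sum(qc[1].values()), reverse=True)
for question, scores in ranked:
    print(sum(scores.values()), question)
```

A scoring pass like this won’t make the decision for you, but it turns a stakeholder discussion into a concrete, comparable list.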
Select Methods

The next step in designing the evaluation is determining how you will go about measuring your constructs and answering your evaluation questions. You need to select the right set of methods to answer the questions you have specified.

After you’ve determined the questions you hope to answer (and perhaps the constructs you will need to measure), you can begin to consider how best to find the data you need to answer those questions. You can collect data in a variety of ways: design surveys or questionnaires, conduct interviews, hold focus groups, or combine approaches to capture more information. These data collection methods fall into two general categories: quantitative and qualitative. Which is right for monitoring your implementation, or should you use a combination of methods? These days it is very common to use the latter, referred to as a “mixed methods” approach, because it provides the richest information about how things are going.
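To make the quantitative/qualitative distinction concrete, here is a small sketch pairing a survey item (quantitative ratings) with coded open-ended comments (qualitative themes) for one implementation question. The ratings, the survey wording, and the comment codes are all fabricated for illustration.

```python
from statistics import mean
from collections import Counter

# Fabricated mixed-methods data for one implementation question.
# Quantitative: 1-5 agreement ratings for "The new schedule is workable."
ratings = [4, 5, 3, 4, 2, 5, 4]

# Qualitative: thematic codes assigned to open-ended comments during analysis.
comment_codes = ["more planning time", "transition confusion",
                 "more planning time", "more planning time"]

print(f"mean rating: {mean(ratings):.2f}")     # quantitative summary
print(Counter(comment_codes).most_common(1))   # dominant qualitative theme
```

The number tells you how people rated the change; the dominant theme tells you why—which is exactly the complementarity a mixed-methods design is after.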

Analyze Data

You’ve developed an evaluation design. You’ve determined the best means to collect your data, and then you’ve done it—collected all of it. Now you’re at the point of making sense of all the data available to you. But how do you do it?

Analyses of both qualitative and quantitative data can yield a rich pool of information, but pulling it out of the raw data requires that you follow a few basic steps—carefully. Presenting your findings in a clear and convincing way is the final step in this phase of your evaluation.
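As a minimal illustration of the quantitative side, the sketch below compares fabricated pre- and post-program reading scores for the same students. All scores are invented, and a real analysis would involve checking assumptions and far more care before making claims about impact.

```python
from statistics import mean, stdev

# Fabricated pre/post reading scores for the same ten students.
pre  = [62, 70, 68, 55, 74, 60, 66, 71, 58, 69]
post = [68, 75, 70, 61, 79, 63, 72, 74, 62, 73]

# Per-student gain, then a simple descriptive summary.
gains = [b - a for a, b in zip(pre, post)]
print(f"mean gain: {mean(gains):.1f} points (sd {stdev(gains):.1f})")
```

Even this simple summary follows the basic steps: pair the raw data correctly, compute a meaningful quantity per case, then describe the distribution before interpreting it.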


What will you do with the information?

This is where the whole evaluation process was leading from the beginning. What decisions can you make based on the data collected? What actions can you—should you—take? This is where you decide what changes you and others want to make to improve the program, or what steps to take to initiate a new one. Recall the questions you identified in the beginning:
  • How did you answer them?
  • What did you learn?
  • How will you use this information?
  • What did you find?

In some ways, this is the most satisfying stage of your evaluation. After preparing for and designing your evaluation, after collecting and analyzing your data, you can now tell the world (or at least the people important to your project) what you found.

A good evaluation report is clear and concise, provides adequate evidence for claims, and offers enough explanation to make sure the reader understands your interpretation of the data. It is sometimes tempting to include too much information. When you have collected stacks and stacks of surveys and reams of field notes, it is difficult to know “when to say when.” You’ve become invested in each of your data tables, but if the data don’t show anything, leave them out. Be clear about how the evaluation was conducted as well. Everyone involved in developing the school improvement plan, or affected by it, will be interested in both the methods and the outcomes of the evaluation.

Taking action means implementing specific strategies to accomplish your goals—where “the rubber meets the road,” as they say. Develop an action plan based on the data collected. Changes or improvements may focus on the program’s content, format, delivery, staffing, follow-up strategies, activities, setting, resources, and so on. It all depends on what your data tell you. And all the decisions do not need to be made at one point in time. Collecting data to make course corrections should occur in an iterative way—one change leading to another after the results of the first change are assessed.

Tips on Making Decisions and Taking Actions
  • Consider whether you think the findings are valid for your program. Validate the data by looking for support for one set of data in another set. Do the findings clearly apply to your situation?
  • Determine what actions or decisions the findings suggest. Focus on areas to address, but don’t try to address everything at once.
  • Determine whether possible actions are feasible. The data may suggest changes that are not really possible for you to make—given resources, time, or other constraints.
  • You may need to do additional research or information-gathering on particular strategies or program adjustments. Don’t jump into something without knowing enough about it to know whether it is likely to work for your set of circumstances.
  • Determine how you will know whether the changes/improvements are working. Put a monitoring plan in place that will allow you to watch implementation carefully. Don’t forge ahead without examining how well things are going as you proceed.