Evaluation 101

Set the Stage for Your Evaluation


Preparing for an evaluation is a lot like preparing for other important activities. You have to determine what information you need, lay out a plan, and make some decisions about how you will proceed.

Gather Background Information

Conduct interviews, review reports, search dusty archives—whatever it takes—and read everything you can to learn about all the aspects and nuances of your project. (Escape route: This step is only necessary if you’re starting from the beginning as an evaluator. If you already have a project, initiative, or program in place, then you’ve got the information you need.)

The background information you need is often in the hands of other people—especially if you are an external evaluator—someone who is conducting an evaluation of a project that he or she did not design.

If you have not been integrally involved with the design of the project or program, you should read everything, from reports to memoranda and meeting notes. In fact, review any documentation that helps you better understand the project. And don’t forget to talk to the program designers and other key stakeholders. They are an invaluable source of information. They will also help shape questions, identify credible sources, and provide encouragement and critical feedback. Finally, they will render the incalculable service of helping you review and interpret your findings.

The information you pull together from these sources will help you develop a conceptual framework or logic model. Many program designers find this very useful, because it lets you view at a glance—and discuss with ease—the project’s motivation and intentions, components, strategies, and desired outcomes.

Develop a Logic Model

A Logic Model is used to conceptualize a single intervention, project, initiative, or program. While it is helpful to build one to understand the relationships among the implementation steps of a project, program, or initiative and the intended outcomes and impact of those activities, it works best if all the activities fall within one initiative. For example, if the goal is to implement a new literacy program in a school, then a logic model can depict the activities that will be undertaken to get that literacy program in place.
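The literacy-program example above can be sketched as a simple data structure. This is a hypothetical illustration only: it assumes the common inputs → activities → outputs → outcomes → impact chain used in many logic models, and every entry is invented for the sake of the example.

```python
# A hypothetical logic model for a new school literacy program,
# expressed as a plain dictionary. The category names follow the
# common inputs -> activities -> outputs -> outcomes -> impact chain;
# the entries themselves are invented for illustration.
literacy_logic_model = {
    "inputs": ["reading specialists", "classroom libraries", "funding"],
    "activities": ["train teachers", "schedule daily reading blocks"],
    "outputs": ["teachers trained", "reading blocks held each week"],
    "outcomes": ["improved fluency and comprehension scores"],
    "impact": ["higher schoolwide literacy rates"],
}

# Reading the model left to right shows, at a glance, how each
# implementation step is expected to lead to the desired results.
for category, items in literacy_logic_model.items():
    print(f"{category}: {', '.join(items)}")
```

Laying the model out this way makes it easy to check that every activity connects to at least one intended outcome before the evaluation begins.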

Design an Evaluation Plan


No doubt you have a lot invested in gaining approval or funding or whatever was needed to plan and implement a project or program, and you initiated it based on a belief that it would work and would benefit the target audience—children, teachers, coaches, instructional aides, parents—whoever it might be. Just getting the pieces in place, “pushing the boat off the shore,” and then waiting to see whether the wind comes up or what people say about the experience is not enough to determine its value.

How will you know whether the project or program you have initiated is making a difference?

Remember the 3 BIG questions?
These are the high-level questions you need to keep in mind, but before you can decide what data or information to collect, you have to develop an evaluation plan that will make capturing it possible. This is arguably the most critical phase of the process. Clear thinking and careful planning will save headaches later as the data begin rolling in.
Select Methods

Assessments, or tests, have only one general purpose: to systematically gather information.
When gathering information about students, the bottom line in selecting and using any assessment should be whether it helps them. So it’s important to consider who needs the information, what kind of information is needed, and when it is needed.

Assessments can be objective or subjective. Objective assessment is a form of questioning that has a single correct answer. Subjective assessment is a form of questioning that may have more than one correct answer (or more than one way of expressing the correct answer).
What are important considerations when selecting assessments?

Assessments should be selected or developed based on a consideration of: 

  • Audience: General public or press; administrators; parents; teachers; students
  • Purpose:
    1. Judge effectiveness of school
    2. Judge effectiveness of curriculum
    3. Monitor progress of child
    4. Plan instruction, activities, strategies
    5. Identify strengths, areas to address
  • Frequency: Annually, or by term/semester; periodically or 5-6 times a year; daily, or as often as possible


What are different types of assessments?

Formal assessments are often referred to as standardized measures because they have been tried out on appropriate groups and have statistics to support the conclusions. The data are mathematically computed, and scores such as percentiles, stanines, or standard scores are commonly reported. They are used to assess overall achievement, to compare a student’s performance with that of others at the same age or grade, or to identify strengths and weaknesses relative to peers.
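The three score types named above are all transformations of the same raw score. As a minimal sketch—assuming a normally distributed norm group, which is how these scores are conventionally defined—here is how a raw score, the norm group’s mean, and its standard deviation yield a standard score, a percentile, and a stanine:

```python
# Minimal sketch of the relationships among standard scores,
# percentiles, and stanines, assuming a normal norm-group
# distribution. Uses only the Python standard library.
from math import erf, sqrt

def standard_score(raw, mean, sd):
    """z-score: how many standard deviations the raw score lies from the mean."""
    return (raw - mean) / sd

def percentile_rank(raw, mean, sd):
    """Percent of the norm group scoring at or below this raw score,
    via the normal cumulative distribution function."""
    z = standard_score(raw, mean, sd)
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))

def stanine(raw, mean, sd):
    """Stanine ('standard nine'): z rescaled to mean 5 and SD 2,
    rounded and clamped to the 1-9 range."""
    s = round(5 + 2 * standard_score(raw, mean, sd))
    return max(1, min(9, s))
```

For example, a raw score of 115 against a norm group with mean 100 and standard deviation 15 gives a z-score of 1.0, roughly the 84th percentile, and a stanine of 7.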

Informal assessments are sometimes referred to as criterion-referenced measures or performance-based measures and should be used to inform instruction. Students are rated against specific standards. Although these have been considered more “informal” than standardized measures, they are common formats for statewide assessments that are standards-based. The purposes of criterion-referenced assessments may be to determine a level of functioning, mastery of a curriculum, or use of specific strategies. Performance-based assessments are designed to examine the student’s actual performance on a structured task—one that presents a situation that will elicit the appropriate skill level of a student.

Work samples can be examined to show actual functioning level in authentic classroom-based tasks. Such samples may be compiled in a portfolio or container that holds evidence of an individual’s skills, ideas, interests, and accomplishments. They provide evidence of learning over time and enable teachers and parents to assess student growth and progress.

Observations and interviews can also provide important information related to the accomplishment of specific learning objectives. Observations provide opportunities to record a student’s learning style, patterns in behavior, approaches to learning situations, persistence in efforts, problem-solving skills, and so on. Interviews can be used for self-reporting of interest, motivation, learning style, approach to problems, preferences in instructional approaches and the like.

In selecting a test or an assessment, the most important thing to do is find one that matches as closely as possible the learning objectives of your project or program. If, for example, the goal of the project is to increase reading speed while maintaining comprehension, it will be important to measure both speed and comprehension. Testing word recognition skills would be less important in this case. A combination of assessments—multiple measures—is most often recommended as the best strategy to use to bring pieces of the puzzle together to give a more precise picture of the student’s learning. A single score provides only a snapshot at one point in time—a point that is affected by many internal and external factors.

Analyze Data

You’ve developed an evaluation design. You’ve determined the best means to collect your data, and then you’ve done it—collected all of it. Now you’re at the point of making sense of all the data available to you. But how do you do it?

Analyses of both qualitative and quantitative data can yield a rich pool of information, but pulling it out of the raw data requires that you follow a few basic steps—carefully. Presenting your findings in a clear and convincing way is the final step in this phase of your evaluation.
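A first pass at both kinds of data can be sketched in a few lines. This example is hypothetical—the ratings, theme codes, and field names are invented for illustration—but it shows the two basic moves: descriptive statistics for quantitative responses, and frequency tallies of coded themes for qualitative notes.

```python
# Hypothetical first-pass analysis: descriptive statistics for
# quantitative survey ratings, and code-frequency tallies for
# qualitative field notes. All data here are invented examples.
from statistics import mean, median, stdev
from collections import Counter

ratings = [4, 5, 3, 4, 2, 5, 4]   # e.g., 1-5 survey responses
codes = ["engagement", "pacing", "engagement", "materials",
         "pacing", "engagement"]  # themes tagged in field notes

# Quantitative: summarize the distribution of responses.
summary = {
    "n": len(ratings),
    "mean": round(mean(ratings), 2),
    "median": median(ratings),
    "sd": round(stdev(ratings), 2),
}

# Qualitative: count how often each coded theme appears.
theme_counts = Counter(codes)

print(summary)                       # e.g., {'n': 7, 'mean': 3.86, ...}
print(theme_counts.most_common())    # themes ordered by frequency
```

Even a sketch this small supports the step that follows: the summary statistics and the most frequent themes are what you carry forward into your presentation of findings.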

What will you do with the information?

This is where the whole evaluation process was leading from the beginning. What decisions can you make based on the data collected? What actions can you—should you—take? This is where you decide what changes you and others want to make to improve the program or what steps to take to initiate a new one. Recall the questions you identified in the beginning.
  • How did you answer them?
  • What did you learn?
  • How will you use this information?

What did you find?

In some ways, this is the most satisfying stage of your evaluation. After preparing for and designing your evaluation, after collecting and analyzing your data, you can now tell the world (or at least the people important to your project) what you found.

Report

A good evaluation report is clear and concise, provides adequate evidence for claims, and offers enough explanation to make sure the reader understands your interpretation of the data. It is sometimes tempting to include too much information. When you have collected stacks and stacks of surveys and reams of field notes, it is difficult to know “when to say when.” You’ve become invested in each of your data tables, but if the data don’t show anything, leave them out. Be clear about how the evaluation was conducted as well. Everyone involved in developing the school improvement plan, or affected by it, will be interested in both the methods and the outcomes of the evaluation.

Taking action means implementing specific strategies to accomplish your goals—where “the rubber meets the road,” as they say. Develop an action plan based on the data collected. Changes or improvements may focus on the content of the program, its format, delivery, staffing, follow-up strategies, activities, setting, resources, and so on. It all depends on what your data tell you. And not all decisions need to be made at one point in time. Collecting data to make course corrections should occur in an iterative way—one change leading to another after the results of the first change are assessed.

Tips on Making Decisions and Taking Actions
  • Consider whether you think the findings are valid for your program. Validate the data by looking for support for one set of data in another set. Do the findings clearly apply to your situation?
  • Determine what actions/decisions are suggested by the findings. Focus on areas to address, but don’t try to address everything at once.
  • Determine whether possible actions are feasible. The data may suggest changes that are not really possible for you to make—given resources, time, or other constraints.
  • You may need to do additional research or information-gathering on particular strategies or program adjustments. Don’t jump into something without knowing enough about it to know whether it is likely to work for your set of circumstances.
  • Determine how you will know whether the changes/improvements are working. Put a monitoring plan in place that will allow you to watch implementation carefully. Don’t forge ahead without examining how well things are going as you proceed.