7. Data & Evaluation Toolkit: Overview of Evaluations


How to Use This Section: Learn about common use cases and examples of evaluation in legal aid.

How do evaluations fit into data analysis work?

In the legal aid sector, the term “evaluation” is often used as a stand-in for “assessment”, covering a broad range of use cases and activities such as: 

  • Monitoring case types and volume over a given period
  • Collecting feedback from end users on a new tool 
  • Conducting staff performance reviews 
  • Reporting on case outcomes and financial benefits secured 
  • Tracking demographic trends of client populations 
  • Identifying the service needs of a given community 

Each of the above activities can be informed and bolstered by collecting and analyzing relevant data, including through methods described in earlier sections of this toolkit. In the research community, however, the term “evaluation” has a narrower meaning: a specific set of activities designed to determine the effectiveness and efficiency of a program, as described in the Centers for Disease Control and Prevention’s framework for program evaluation. The phrase “program evaluations” will be used throughout this section to denote this particular meaning.

Program evaluations are used most often to demonstrate a successful implementation, confirm the positive impact of a program, or learn from negative or unexpected results. These evaluations aim to generate information that can ultimately be used to improve a program. Program evaluators use a specific set of social science research and analysis methods, including advanced statistical analysis, to produce rigorous results. Program evaluations rely on data analysis, following the steps outlined in this toolkit: data collection, preparation, analysis, presentation, and learning.

What is program evaluation?

Some program evaluations take place before or during program implementation, to inform its design and delivery (i.e., formative evaluations). Others take place toward or at the end of the program to assess outcomes and impact (i.e., summative evaluations). Two of the main types of evaluation are:

Process Evaluations: This formative evaluation type is used to determine whether the program is being implemented according to plan. It involves a detailed examination of how the program is being carried out, whether the intended program activities are taking place as expected, and what obstacles are affecting program operations. Process evaluations also assess whether designated resources are being used, target communities are being reached, and outputs are being produced as expected. These evaluations help to identify issues in program service delivery so that appropriate adjustments can be made while the program is ongoing.

Impact Evaluations: This summative evaluation type is used to determine whether the program successfully achieved its intended outcomes. It assesses the causal relationship between the program and any measured outcomes. This is addressed through two questions:

  1. What were those measured outcomes for program participants?
  2. Did the program cause these outcomes or were they a result of other factors?

To answer these questions, evaluators seek a counterfactual: what outcomes would have occurred if the program did not happen? Crafting an estimated counterfactual (or control) and determining the program’s causal effects typically involves experimental or quasi-experimental research methods. It is important for this work to be done by those trained in these methods to avoid producing inaccurate, biased, or incomplete results. 
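To make the counterfactual idea concrete, here is a minimal sketch in Python using entirely hypothetical outcome data. It shows the basic comparison an impact evaluation rests on: outcomes for program participants versus a randomly assigned control group standing in for the counterfactual. A real evaluation would involve much larger samples, statistical testing, and careful study design, so treat this only as an illustration of the logic.

    # The core comparison behind an impact evaluation: outcomes for clients who
    # received the program (treatment) versus a randomly assigned comparison
    # group (control) that stands in for the counterfactual.
    # All outcome data below are hypothetical, for illustration only.
    from statistics import mean

    # 1 = client avoided eviction, 0 = client did not (hypothetical outcomes)
    treatment_outcomes = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # received the program
    control_outcomes = [0, 1, 0, 0, 1, 0, 1, 0, 0, 0]    # received usual services

    treatment_rate = mean(treatment_outcomes)
    control_rate = mean(control_outcomes)

    # The estimated program effect is the difference between the observed outcomes
    # and the counterfactual estimate provided by the control group.
    estimated_effect = treatment_rate - control_rate

    print(f"Treatment group success rate: {treatment_rate:.0%}")
    print(f"Control group success rate:   {control_rate:.0%}")
    print(f"Estimated program effect:     {estimated_effect:+.0%}")

In practice, evaluators would also test whether a difference of this size could plausibly have arisen by chance, which is where the experimental and quasi-experimental methods mentioned above come in.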

These two types of evaluation go hand in hand. For example, it is important to determine whether a program was implemented as intended (i.e., through a process evaluation) so that the results of an impact evaluation can be attributed to the program activities that actually occurred. If the program was implemented differently than intended, then the measured impact reflects the program as it was actually delivered, not as it was originally planned.

Why would an organization do a program evaluation, and how does it get started?

As with any other type of data analysis project, it is important to start with the why: why is a program evaluation needed? Taking this a step further, it is also necessary to consider how results will be used. Common reasons for program evaluations include:

  • To improve a program by understanding what is and is not working well 
  • To secure additional funding or justify program expansion
  • To comply with reporting requirements from funders
  • To generate knowledge around a program’s impact 

For an organization interested in program evaluation, a good starting point is to articulate a theory of change (“TOC”) and create a logic model for a particular program. A TOC reflects why a program is expected to achieve its intended outcomes. A logic model captures the how of a program, documenting the inputs and activities that comprise it, the anticipated outputs, and the intended short- and long-term outcomes. Both process and impact evaluations are built around a program’s underlying TOC. More on TOCs and logic models can be found in LSNTAP’s Project Management toolkit.

Starting with a TOC and logic model will help an organization lay the groundwork for what should be evaluated and what measures are most appropriate, either for a process or impact evaluation. The TOC and logic model may also spur questions and ideas for what, if anything, needs to be evaluated. Beyond this, organizations will need to take other factors into consideration including: 

  • Timing: Both process and impact evaluations can be planned before a program starts, but in general they occur at specific times in the program lifecycle. Process evaluations typically happen at the start of or during program implementation, so results can be used to adjust the program before it concludes. Impact evaluations typically occur after the program has ended to assess its short- and/or long-term impact.
  • Cost: Drawing accurate conclusions from a program evaluation can be a time- and resource-intensive effort. Cost will vary significantly depending on the size and scope of the evaluation.
  • Staffing: Because of the complexity involved in this type of research, program evaluations are often administered by individuals trained in the field. Organizations should consider reaching out to academics, researchers, or evaluation consultants to see if they might partner on a program evaluation; one such resource is Harvard Law School’s Access to Justice Lab. Even if an outside entity is leading the effort, it is also important to consider what internal staffing resources will be needed to inform and support the evaluation.
  • Feasibility and Relevance: Whether a program evaluation is feasible or relevant is important to consider at the outset, particularly for impact evaluations. For example, the number of program participants might be too small to draw accurate statistical conclusions (see the sketch after this list), there may be ethical concerns with a potential study, or it might not be possible to generate results in time for them to be of use.
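To illustrate the sample-size concern above, here is a minimal sketch assuming a simple two-group comparison, using the statsmodels library’s power calculation. The effect size, significance level, and power values are illustrative assumptions rather than recommendations; a trained evaluator would tailor these choices to the specific study.

    # A rough feasibility check, assuming a randomized impact evaluation that
    # compares two equal-sized groups. statsmodels' power calculation estimates
    # how many participants per group would be needed to reliably detect an effect.
    # The effect size, alpha, and power below are illustrative assumptions.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    needed_per_group = analysis.solve_power(
        effect_size=0.3,  # a small-to-medium standardized effect (assumption)
        alpha=0.05,       # conventional 5% significance level
        power=0.8,        # 80% chance of detecting a true effect of that size
    )
    print(f"Participants needed per group: {needed_per_group:.0f}")
    # If the program serves far fewer people than this, an impact evaluation may
    # not be able to draw accurate statistical conclusions.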

This toolkit section has introduced important concepts and considerations for program evaluation. To learn more, check out these evaluation resources from Innovations for Poverty Action, the International Organization for Migration, and the U.S. Administration for Children and Families.

What are Technology Initiative Grant (TIG) evaluations?

As part of its central role in funding dozens of legal aid organizations across the country, the Legal Services Corporation (LSC) operates the Technology Initiative Grant (TIG) program to support a wide range of initiatives that leverage technology to improve, expand, and facilitate the delivery of legal services by existing LSC grantees. In addition to the traditional, larger Technology Initiative Grants, the TIG program also offers smaller grants (up to $35,000) for Technology Improvement Projects (TIPs) to enable planning for technology needs. Like other grant opportunities, TIGs and TIPs come with reporting requirements, some of which guide the recipient in evaluating the project. These reports also enable LSC to understand how the funds were spent and to learn about the methods used and the resulting outcomes.

For the purposes of the TIG program, LSC categorizes data by whether it is obtained through administrative means, collected in surveys, or gathered through direct engagement with participants in the target system. In addition, LSC distinguishes between qualitative and quantitative data. These distinctions can inform the selection of sources according to the needs of the intended evaluation:

  • Administrative data is the easiest to obtain, particularly in high volumes, but most limited in scope and depth;
  • Direct participant engagement is more costly but is more flexible and can offer a much richer understanding. 

Organizations should use a mix of data sources to balance complexity and cost while ensuring they will adequately assess their intended evaluation targets. 

Organizations preparing for a TIG or TIP evaluation should consider LSC’s resources, which provide support and ideas for the reporting process:

TIG and TIP Final Reports follow a standardized format to encourage reflective analysis by the grantee. These reports include components from the Evaluation Plan, which is required as part of the application process. Successful reports often include appendices with additional data, analysis, and/or sub-reports. Many organizations contract with outside consultants to evaluate a TIG or TIP project and include those findings in the final evaluation report.

Given its role in funding a broad range of legal aid organizations, LSC requests several components of analysis that can inform future projects, including impacts on the broader population served, if applicable, and suggested standards for others undertaking similar efforts.

Grantees describe challenges that arose during the project, strategies for addressing them, and overall lessons and recommendations for replicating or expanding the effort. The diversity of projects undertaken through TIGs makes the specialized knowledge developed in this way all the more valuable.
