Program Impact Attribution

There are practical options for assessing what portion of any measured behavior change resulted from your program and what portion resulted from other influences. These options can also be used to attribute changes in a wide range of related variables, such as resources used, pollutants released, accident rates, and health status, to your program.

Experimental Designs, also called Randomized Control Designs (RCDs) and Randomized Controlled Trials (RCTs)

Are you able to randomly assign some groups of people to receive your program now, and others to serve as a control group (who may get your program later)? For example, could you introduce your program first in certain cities, neighborhoods, buildings, floors, departments, or tenants? If so, you may be able to use what is called an "experimental design" without adding much effort or cost to your work.

Let's say, for example, that you have identified up to 12 different groups that can be randomly assigned in this way. You would divide them randomly into two sets: half receive your program (the intervention group) and half do not (the control group). Because the selection process is random, the two groups are considered statistically equivalent, so any differences that you measure between them over time are attributed to your program.
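
To make this concrete, here is a minimal sketch of random assignment and a between-group comparison in Python. The site names and outcome values are hypothetical placeholders; in practice the outcomes would come from your own measurements.

    import random

    # Twelve hypothetical eligible groups (sites, buildings, departments, etc.)
    groups = [f"site_{i}" for i in range(1, 13)]
    random.shuffle(groups)          # random assignment

    intervention = groups[:6]       # receive the program now
    control = groups[6:]            # serve as controls (may get it later)

    # After the campaign, record the measured outcome for each group,
    # e.g. average weekly kWh per household (placeholder values here).
    outcome = {g: (92.0 if g in intervention else 100.0) for g in groups}

    mean_i = sum(outcome[g] for g in intervention) / len(intervention)
    mean_c = sum(outcome[g] for g in control) / len(control)

    # Because assignment was random, the difference between the two means
    # is attributed to the program.
    print(f"Estimated program effect: {mean_i - mean_c:+.1f} kWh/week")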


Randomized Encouragement Designs (REDs)

Randomized Encouragement Designs (REDs) are becoming increasingly popular for situations where those who are assigned or offered an intervention may not comply with or accept their assignment, and in other situations where it is not possible to randomly assign people into control and intervention groups. REDs involve selecting a subset of eligible people or households, dividing them randomly into treatment and control groups, and then actively encouraging (hence the name of the design) households in the treated group to undertake the intervention. This approach helps account for free riders (people who would have changed their behavior even without your intervention). Note that, compared with RCTs in which all households comply with their treatment assignment, the number of households required to obtain a given level of statistical power in a RED increases by a factor of 1/c² (one divided by c squared), where c denotes the share of households that will participate in the program when encouraged.

Adapted from US DOE and Berkeley Labs (2010)
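
To see what that 1/c² factor implies, here is a small worked sketch. The baseline RCT sample size and the uptake rates below are illustrative assumptions, not figures from the DOE guidance.

    def red_sample_size(n_rct: int, c: float) -> int:
        """Households needed in a RED, where n_rct is the sample size an RCT
        with full compliance would need and c is the share of encouraged
        households expected to take up the intervention (0 < c <= 1)."""
        return round(n_rct / c ** 2)

    n_rct = 400  # hypothetical requirement for the desired statistical power
    for c in (1.0, 0.5, 0.25):
        print(f"uptake {c:.0%}: need {red_sample_size(n_rct, c)} households")
    # uptake 100%: need 400 households
    # uptake 50%:  need 1600 households
    # uptake 25%:  need 6400 households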

Quasi-Experimental Designs

Are you able to get comparison data from a carefully matched group? As with experimental designs, any differences noted between the two groups over time can be assumed to be due to your program. The greater the differences between your target audience and the comparison group, however, the less reliable this attribution becomes.
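
One common way to apply this logic is a difference-in-differences calculation: compare each group's change from baseline, then subtract the comparison group's change to net out trends that would have occurred anyway. A minimal sketch, using hypothetical percentages:

    # Hypothetical baseline and follow-up measurements, e.g. the percentage
    # of households recycling correctly in each group.
    baseline = {"program": 62.0, "comparison": 60.5}
    followup = {"program": 74.0, "comparison": 63.0}

    change_program = followup["program"] - baseline["program"]
    change_comparison = followup["comparison"] - baseline["comparison"]

    # Subtracting the comparison group's change removes background trends,
    # leaving an estimate of the change attributable to the program.
    attributable = change_program - change_comparison
    print(f"Program group change:    {change_program:+.1f} points")
    print(f"Comparison group change: {change_comparison:+.1f} points")
    print(f"Attributable change:     {attributable:+.1f} points")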

Staggered Baseline Designs

Staggered Baseline designs are another approach to attribution. One of the key benefits of these time-series approaches is that they are simple to apply and easy to explain to supervisors and other stakeholders. They can also be helpful when you can't randomly assign a control group or find a matched comparison group.

With a Staggered Baseline design, you must be able to divide your target audience into two or more groups that receive your campaign at different times, and your time frame must allow for ongoing measurement of all of the groups. You should see changes occurring in only one group at a time, corresponding to when you are running your campaign with each group.


If there were three locations or groups, the pattern might look like the sketch below: data are collected from all three groups throughout, and because only one group starts the program at a given time, each group's results should shift only after its own campaign begins.
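
A small Python sketch of that pattern, using hypothetical groups, periods, and outcome levels:

    # Three hypothetical groups whose campaigns start in successive periods.
    periods = ["P1", "P2", "P3", "P4"]
    campaign_start = {"group_A": 1, "group_B": 2, "group_C": 3}  # period index

    baseline_level = 40.0   # illustrative pre-campaign outcome level
    campaign_lift = 12.0    # illustrative change once the campaign runs

    for group, start in campaign_start.items():
        series = [
            baseline_level + (campaign_lift if t >= start else 0.0)
            for t in range(len(periods))
        ]
        print(group, [round(v) for v in series])
    # group_A [40, 52, 52, 52]
    # group_B [40, 40, 52, 52]
    # group_C [40, 40, 40, 52]
    # Each group's level shifts only after its own campaign begins.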


Dose Response

Will you be tracking awareness levels or other measures of exposure to your campaigns? If so, you can test for a correlation between your exposure data and your impact data. While this won't demonstrate cause and effect, a strong correlation between exposure and response suggests a meaningful relationship between your work and the outcomes you are measuring.
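
For example, here is a minimal dose-response check in Python, using hypothetical site-level exposure and impact figures (statistics.correlation requires Python 3.10 or later):

    from statistics import correlation  # Pearson's r; Python 3.10+

    # Hypothetical data for six sites:
    # exposure = % of staff who recalled the campaign
    # impact   = percentage-point change in the target behavior
    exposure = [10, 25, 40, 55, 70, 85]
    impact = [1.0, 2.5, 3.8, 5.5, 7.2, 8.0]

    r = correlation(exposure, impact)
    print(f"Pearson r = {r:.2f}")
    # A strong r suggests a dose-response relationship, but on its own
    # it does not prove cause and effect.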


The following are some additional measures of both reach and depth of campaign exposure.

  1. Seeing campaign messages in emails
  2. Seeing campaign messages on posters
  3. Interpersonal discussion about the campaign
  4. Seeing or hearing about particular elements of the campaign, and
  5. Participation in particular elements of the campaign

Reference

Randomized Encouragement Design: US DOE and Berkeley Labs (2010). U.S. Department of Energy Smart Grid Investment Grant - Technical Advisory Group Guidance Document #7.