Evidence-based programs are those for which a significant foundation of research has been conducted that provides scientific evidence of effectiveness for defined audiences and within stated environments. In all cases, these programs can only be presumed to be effective for others who adopt them if they are implemented in precisely the manner in which they were tested, also called implementing with fidelity.
Fidelity can be assessed with tools designed for specific programs. Several fidelity assessment forms were developed for the California S3 program evaluation and are available here for anyone wishing to conduct a self-assessment of the named programs. These assessments are based on interviews with one or more program staff who are fully aware of how the program was implemented, or they can be completed as a self-assessment by an individual running one of these programs.
Please note: Fidelity assessments are in no way an endorsement of an individual program by CDE or WestEd.
The Fidelity Rating form asks the user to assign a rating of Low, Moderate, or High for one or more “fidelity elements” across the six fidelity dimensions listed below. Each rating has a detailed rubric to help determine the correct rating for each fidelity element. There are no scores to calculate; the intention is that users will identify elements that fall short of the highest fidelity rating and make changes so that future implementations achieve higher fidelity.
- Audience Category and Characteristics: A Category is a general population identifier such as parents, students, teachers, adult volunteers, bus drivers, or others. Specific characteristics of each Category might include:
  - Students: at-risk; suspended; general population; abused substances; student leader; etc.
  - Parents: parents of students who have been suspended; parent volunteers; etc.
As a rating example, if the evidence-based program proved its success with at-risk students selected through a well-identified screening procedure, it should not be used with general population students, or with students screened by some other approach.
- Setting Size: Individual service or certain group sizes. If the program were designed for groups of 5-7 students, it would have Low adherence if delivered in a classroom setting with 30 students. If a program is specifically designed for an afterschool program, it would be a departure from the model to use it during the regular school day.
- Provider Characteristics: This refers to the characteristics the individuals who will deliver the program already possess before receiving any program training. Examples include: certificated teacher, certificated counselor, 12th grade student mentor, community adult, Registered Nurse, etc. An example of a mismatch would be a program model tested with community volunteers but implemented by the school’s classroom teachers.
- Provider Training: This refers to the specific training the provider was to receive: content, hours, trainer, etc.
- Topic Content: Content refers to the recommended curriculum, guides, or other written material provided by the program. This component includes lesson or session topic content (stories, vignettes, readings, assessments, etc.). Topic Content can generally be assumed to have been covered if a Provider uses the full number of lesson plans or guides as designed, taking care to cover all topics.
- Dosage: Dosage refers to time on task for the Topic Content pieces described above. If a curriculum has 20 sessions designed to last one class period each (about 45 minutes), but a Provider presented only 10 sessions at 45 minutes, or tried to cover the content of all 20 sessions in five hours rather than the intended 15, this would be Low fidelity to the intended design.
Fidelity Rating Rubrics
High: The element as implemented was a precise match to the program element described, or varied in a small way that could be reasonably interpreted to match the general intent of the program designers. An example is a program designed for drug user intervention directed to drug users; or, for numerical elements (number of lessons, sessions, time on task, etc.) the program was within 10% of the recommendation.
Moderate: The element as implemented was somewhat different from the program element described. An example is a program designed for drug user intervention directed to groups with both drug users and nonusers; or, for numerical elements (number of lessons, sessions, time on task, etc.) the program was between 50% and 90% of the recommendation.
Low: The element as implemented was very different from the program element described. An example would be a program designed for drug user intervention directed instead to general population students; or, for numerical elements (number of lessons, sessions, time on task, etc.) the program was below 50% of the recommendation.
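For numerical fidelity elements, the three rubrics reduce to a simple threshold rule on the ratio of what was delivered to what the program recommends. The sketch below illustrates that rule; the function name and the treatment of over-delivery (anything at or above 90% of the recommendation rated High) are assumptions for illustration, not part of any official scoring tool, since the rubrics describe no scores to calculate.

```python
def rate_numeric_element(implemented, recommended):
    """Illustrative rating for a numerical fidelity element
    (number of lessons, sessions, time on task, etc.) using the
    rubric thresholds: within 10% of the recommendation = High,
    between 50% and 90% = Moderate, below 50% = Low.
    Assumes both arguments are positive numbers."""
    ratio = implemented / recommended
    if ratio >= 0.9:      # within 10% of the recommendation
        return "High"
    if ratio >= 0.5:      # between 50% and 90%
        return "Moderate"
    return "Low"          # below 50% of the recommendation

# Examples: 19 of 20 sessions, 12 of 20 sessions, 5 of 20 sessions
print(rate_numeric_element(19, 20))  # High
print(rate_numeric_element(12, 20))  # Moderate
print(rate_numeric_element(5, 20))   # Low
```

Note that the rubrics describe qualitative elements (such as audience match) in words only; the threshold rule applies solely to countable elements.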
Fidelity Rating Forms
- Botvin’s Life Skills Program (pdf)
- Breaking Down the Walls Program (pdf)
- Challenge Day Program (pdf)
- Check & Connect Program (pdf)
- Every 15 Minutes Program (pdf)
- IMPACT Program (pdf)
- Link Crew Program (pdf)
- Love & Logic Program (pdf)
- Olweus Bullying Prevention Program (pdf)
- Positive Behavioral Intervention & Supports (PBIS) Program (pdf)
- Peer Leaders Uniting Students (PLUS) Program (pdf)
- Project Toward No Drugs (Project TND) (pdf)
- Ripple Effects Program (pdf)
- Response to Intervention (RTI) Program (pdf)
- Signs of Suicide (SOS) Program (pdf)
- Safe School Ambassadors Program (pdf)
- START on Time (Tardy Sweeps) Program (pdf)