Agency Program Evaluations - Revision 1
FSIS Directive 1045.1
Series Type: 1000 Series - FSIS Infrastructure
Issue Date: May 23, 2024
Full Directive
- PURPOSE
This directive outlines the policy, evaluation process, and guiding evaluation principles that FSIS is to follow when conducting program evaluation activities. This directive also defines and delineates the roles and responsibilities of the Office of Planning, Analysis, and Risk Management (OPARM), which is responsible for managing evaluation activities. FSIS is revising this directive in its entirety to reflect changes in the evaluation process.
- CANCELLATION
FSIS Directive 1045.1, Agency Program Evaluations, 10/5/17
- BACKGROUND
- FSIS implements many activities to meet its public health mission, as outlined in its strategic and annual plans and required functions. Evaluations are important to assess whether these activities are operating or are being implemented effectively and efficiently. An evaluation is an assessment using systematic data collection and analysis of one or more programs, policies, processes, and organizations intended to assess their effectiveness and efficiency.
- Evaluations are to be practical, independent, and feasible; reflect FSIS priorities (such as those stated in the FSIS strategic and annual plans); and use time and resources appropriately. Evaluations are to be conducted in an objective and ethical manner and produce accurate findings, conclusions, and recommendations that aim to modify or improve activities and operational performance.
- Evaluations directly support the FSIS enterprise governance process and Agency decision-making by providing information about how programs, policies, processes, or changes are or are not achieving desired results, and about how programs, policies, and processes may be optimized to achieve desired results. Any time a new program, policy, process, or organization is implemented, an evaluation should be considered.
- FSIS is committed to using performance measurement, data analysis, and evaluation to achieve greater accountability and the most effective and equitable program outcomes. FSIS evaluation principles outlined in this directive align with the Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act), Departmental Regulation (DR) 1230-001 (Evaluation Policy), and DR 1074-001 (Scientific Integrity).
- PURPOSE OF EVALUATIONS
- The Evidence Act defines evaluation as "an assessment using systematic data collection and analysis of one or more programs, policies, and organizations intended to assess their effectiveness and efficiency."
- Evaluations can be done before, during, or after a program, policy, process, or organizational change is implemented and may address the following:
- Before implementation: assesses whether a program, policy, process, or organizational change - or some aspect of these - is feasible, appropriate, and acceptable before it is fully implemented;
- During implementation: assesses the effectiveness or impact of specific strategies for, or used by, a program, policy, process, or organizational change; or
- After implementation: assesses the short- and long-term effects of a program, policy, process, or organizational change and determines whether there is a positive change, a negative change, or no change.
- Evaluations can provide important information to inform decisions about current and future programming, policies, and organizational operations. They are crucial for learning and improvement purposes, as well as accountability.
- EVALUATION PROCESS
- OPARM is to develop an evaluation schedule, for the Office of the Administrator's (OA) approval, to determine which evaluations are to be conducted each fiscal year. The evaluation schedule is to be developed from evaluation proposals that are submitted to OPARM. Evaluations may be proposed before, during, or after a new program, policy, or process is implemented to assess feasibility or effectiveness. Proposals for evaluations are to be submitted as follows:
- Any individual within FSIS can propose an evaluation. This can be done by completing FSIS Form 1360-17, Evaluation Intake Form, and submitting it to OPARM per the form's instructions. While any individual may complete the form, an applicable Assistant Administrator (AA) signature is required; or
- OPARM may issue an Agencywide data call, in which program areas have an opportunity to submit evaluation proposals.
- Once an evaluation is approved, OPARM is to work with the program areas being evaluated to identify an evaluation sponsor. This sponsor is to be in an executive-level or supervisory position and be a subject-matter expert on the topic being evaluated. The sponsor is responsible for working with OPARM to define the evaluation scope; provide direction and input throughout the evaluation; and coordinate with applicable program areas as needed.
- OPARM is to conduct initial research on the evaluation topic to identify available data sources, resource needs, limitations, and risks. Through this research, OPARM is to work with the sponsor and program areas being evaluated to identify the scope, objectives, and questions the evaluation aims to answer. OPARM is to reach out to the AAs, or equivalent, who are to identify Agency subject-matter experts who may be asked to contribute to the evaluation. The scope, objectives, and evaluation questions are to be shared with applicable program areas as needed and with the Enterprise Steering Board (ESB) for informational purposes and to promote transparency. Once the evaluation has commenced, if at any point the sponsor or stakeholders would like to modify the agreed-upon scope of the evaluation, OPARM is to obtain final approval for the scope change from OA.
- OPARM is to develop an evaluation plan and data analysis plan. These documents are to identify the evaluation scope, objectives, questions, stakeholders, subject-matter experts, methodology (including data collection and analysis methods), and timeline.
- OPARM is to hold a kick-off meeting with the sponsor, impacted program areas, and subject-matter experts to align the project team and provide an opportunity to address questions prior to initiating the evaluation. OPARM is to then conduct the evaluation and provide updates to the sponsor and program areas being evaluated. OPARM and the sponsor are to determine the frequency of these updates. During these updates, the sponsor and program areas are to have an opportunity to ask questions and provide feedback.
- Once the evaluation is complete, OPARM is to present preliminary findings and recommendations to the sponsor and program areas being evaluated. OPARM is to work with the sponsor and program areas being evaluated to determine which recommendations are feasible to implement. OPARM is to then draft the final report, share it with applicable program areas so they have an opportunity to provide feedback, clear the final report, and share the final report with all stakeholders.
- OPARM and the sponsor are to coordinate to provide informational briefings to applicable stakeholders, ESB, OA, and Management Council, as needed, on the findings and recommendations. OPARM is to then document all findings and recommendations through a tracking tool, which can be shared upon request. Program area AAs, or appropriate designees, are responsible for implementing evaluation recommendations pertaining to their program area, including tracking and reporting progress to OA.
- FEDERAL PROGRAM EVALUATION STANDARDS
- Office of Management and Budget (OMB) memorandum M-20-12, Program Evaluation Standards and Practices, reflects OMB's core values for Federal evaluation, and each standard requires the integration of all the others. Evaluators need to practice these standards in their work for Federal evaluations to have the credibility needed for full acceptance and use.
- The evaluation standards are:
- Relevance and Utility: Evaluations are to address questions of importance and serve the information needs of stakeholders to be useful resources. Evaluations are to present findings that are actionable and available in time for use;
- Rigor: Evaluations are to produce findings that stakeholders find reliable. Evaluations are to use the most appropriate design and methods to answer key questions, while balancing the evaluation's goals, scale, timeline, feasibility, and available resources;
- Independence and Objectivity: Evaluators are to be objective in planning and conducting evaluations, and the interpretation and dissemination of findings are to be free of conflicts of interest, bias, and other partiality;
- Transparency: Evaluators are to be transparent in the planning, implementation, and reporting phases to enable accountability and help ensure that aspects of an evaluation are not tailored to generate specific findings; and
- Ethics: Evaluations are to be conducted to the highest ethical standards to protect the public and maintain public trust. Evaluations should be planned and implemented to safeguard the dignity, rights, safety, and privacy of participants and other stakeholders.
- QUESTIONS
Refer questions regarding this directive to OPARMSPEB@usda.gov.