Background

6.1 According to the 2004/2005 Main Estimates, the expenditure budget for the Province is in the range of $5.7 billion, much of which is disbursed through program spending. The Prescription Drug Program, psychiatric services, regional development, Crown land management, and hundreds of other diverse programs vie for the limited public resources available. Legislators, senior government officials, program managers and staff are called upon regularly to make decisions about these programs.

6.2 The programs delivered by provincial departments and agencies do not remain static. Government policies and priorities change. New programs are created. Old programs are restructured or discontinued. Funding levels for individual programs are changed. Pilot programs are undertaken and evaluated.

6.3 In this era of tight budgets and limited resources, departments are being called upon to do "more with less." And it appears these pressures will continue to increase. For example, an aging population will almost certainly require more emphasis on health care in the future.

6.4 Citizens rightly expect that the programs funded by their tax contributions produce publicly desirable outcomes (e.g. improving the health, lifestyle, and economic well-being of New Brunswick citizens). Further, government has a responsibility to be a good steward of the resources entrusted to it. Given the current reality, it is vitally important that funded programs be relevant, successful in achieving their objectives and cost-effective. Decision-makers must make wise choices to ensure that funded programs are really "worth the money." A failure to do so may mean that other, more publicly valuable, programming opportunities are never pursued.

6.5 Among the program-related decisions that legislators, government, and departmental management and staff have to make are the following:
• Should a new program be created?
• Is an existing program still relevant to its target clients, or should it be discontinued or have its focus changed?
• Should a pilot program be extended, expanded or discontinued?
• What level of resources should be committed to a particular program in the coming year?

6.6 Underlying all these questions is the primary objective of providing the best possible programs for New Brunswick citizens. Therefore, an additional important question might be added:
• How do we "fix" a program that is not providing the most relevant, successful, and cost-effective services for New Brunswick citizens?

6.7 In answering these questions, decision-makers must attempt to draw together information that will help them make informed judgments. While anecdotal evidence and operating information can provide important insights, they do not offer sufficient information to serve as the basis for sound decision-making. Objective, verifiable evaluative information about program relevance, cost-effectiveness and success in achieving objectives is also needed. The major function of program evaluation is to provide such information.

6.8 Program evaluation is also necessary because government, in most situations, is the sole provider of a particular service or program. Very seldom do consumers of government services have choices. In the private sector, the value of a service or product is made clear by consumer decisions. Government must have an evaluation process that compensates for the absence of a competitive market.
Scope

6.9 Our objective for this project was: to determine the approach to program evaluation employed by provincial departments.

6.10 In completing this work, we sent a program evaluation survey to eighteen government departments. Completion of the survey and submission of responses to our Office was coordinated by the Executive Council Office.

6.11 Responses to our survey were tabulated and summarized and are presented in this chapter. We provide no recommendations, only information. It should also be noted that we did not attempt to audit or otherwise verify the departmental responses received.

6.12 In performing this work, we completed a significant amount of research, including a review of program evaluation literature and of best practices followed in other jurisdictions. Some of that information and our analysis is presented in this chapter to set the context for our survey.

Results in brief

6.13 Program evaluation is not a panacea. However, regular evaluations of programs can provide program decision-makers with credible evidence on program relevance, cost-effectiveness, and success in achieving established objectives. This is information to which decision-makers may not otherwise have access. And access to this information will increase the probability that optimal program-related decisions will be made.

6.14 Based on survey responses, we can make the following observations about program evaluation as practiced by departments in the Province of New Brunswick.
• The two key factors in program decision-making appear to be financial information and the degree of linkage between the program and departmental/government-wide strategic plans.
• Effectiveness information (i.e. actual versus targeted results and the results of formal program evaluations) is not as readily available to decision-makers as more traditional forms of program-related information (i.e. numerical reports, narrative reports, and financial reports). Perhaps as a result, information relating to program effectiveness was selected less often as a key factor in program decision-making.
• There is a lack of formal program evaluation guidelines that specify standard departmental approaches to program evaluation.
• There appears to be an imbalance in program evaluation capabilities between departments.
• Resource limitations appear to be restricting the ability of departments to improve their program evaluation processes.
• A number of program evaluation "best practices" are evident from survey responses, particularly among those departments with internal program evaluation units.

6.15 In light of the valuable decision-making information that program evaluation can provide, and the potential for improvements indicated by responses to our survey, our Office plans to do additional work in this area. The next step in our work in relation to program evaluation will be to look at how specific programs administered by the Department of Health and Wellness are evaluated.

Programs

6.16 A program is an organized and directed accumulation of resources used to conduct an activity or series of activities in order to achieve one or more preset objectives. Implicit in the creation of a program is the assumption that a significant need of a segment of the population can be cost-effectively satisfied by that program.

6.17 There is a logical flow that must be achieved for any program to be successful. An example of such a flow is shown in Exhibit 6.1 below.
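To make this flow concrete before walking through the exhibit, the following is a minimal illustrative sketch, in Python, of how the stages of a program logic model chain together. The class and all identifiers are hypothetical choices for illustration only; they are populated with the value-for-money audit example described in the paragraphs that follow and do not represent any departmental or Office system.

```python
from dataclasses import dataclass


@dataclass
class LogicModel:
    """Hypothetical representation of the stages in a program logic model."""
    inputs: list[str]                 # resources assigned to the program
    activities: list[str]             # what the program does with those resources
    outputs: list[str]                # direct products of the activities
    short_term_outcomes: list[str]    # first results the outputs should trigger
    intermediate_outcomes: list[str]  # changes that follow from short-term outcomes
    long_term_outcomes: list[str]     # ultimate objectives the program contributes to

    def stages(self):
        """Return the stages in causal order, for review or reporting."""
        return [
            ("Inputs", self.inputs),
            ("Activities", self.activities),
            ("Outputs", self.outputs),
            ("Short-term outcomes", self.short_term_outcomes),
            ("Intermediate outcomes", self.intermediate_outcomes),
            ("Long-term outcomes", self.long_term_outcomes),
        ]


# The value-for-money audit example from Exhibit 6.1.
vfm_audits = LogicModel(
    inputs=["human", "financial", "physical", "information resources"],
    activities=["conduct value-for-money audits"],
    outputs=["reports", "recommendations"],
    short_term_outcomes=["departments and agencies accept and implement recommendations"],
    intermediate_outcomes=["improved systems and practices in departments and agencies"],
    long_term_outcomes=["increased public awareness", "more effective and accountable government"],
)

for name, items in vfm_audits.stages():
    print(f"{name}: {'; '.join(items)}")
```

The point of such a structure is simply that each stage must flow logically from the one before it; an evaluation can then test each link in the chain.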
We have used this flow for a significant program in our own Office: the conduct of value-for-money audits. This is an extract from our full logic model, available on our website.

6.18 First, the resources assigned to a program must be arranged so that they can carry out the activities that program designers feel will lead to the achievement of the ultimate program objective. In the exhibit, human, financial, physical and information resources are used to conduct value-for-money audits.

Exhibit 6.1 Program logic model

6.19 Second, this activity of conducting value-for-money audits produces certain outputs. In our example, the outputs are reports and recommendations.

6.20 Third, the production of these outputs is expected to lead to the achievement of some short-term outcomes. In the exhibit, these short-term outcomes include departments and agencies accepting and implementing our recommendations. This in turn creates the intermediate outcomes of improved systems and practices for those departments and agencies.

6.21 Finally, these intermediate outcomes are expected to contribute to the achievement of the long-term outcomes of the program. So, as Exhibit 6.1 shows, achievement of the intermediate outcomes is expected to result in increased public awareness and in government being made more effective and accountable.

6.22 This is how all programs work in theory. In practice, however, much can go wrong with a program, preventing it from being as relevant, successful in achieving its objective(s), and cost-effective as possible. Potential problems include:
• The needs of the target client population are not well understood, and therefore the program does not address priority needs (e.g. an existing program is no longer needed, but continues to be funded and delivered).
• The ultimate objective of a program is unclear, making it difficult to evaluate results achieved.
• Planned program outcomes and objectives do not flow logically from program activities and outputs (i.e. there are flaws in the design of the program).
• The social costs of providing for needs through the program exceed the social benefits.
• Inadequate resources are provided to carry out the prescribed program activities.
• Program activities are not carried out in a cost-effective manner (i.e. resources are being wasted).
• Alternative activities exist that would result in more effective or efficient achievement of program objectives.
• Observed changes in outcomes would have occurred with or without the program being in place (i.e. the program had no effect on the achievement of the objective). In such cases program expenditures are being wasted.
• Delivery of the program has no impact on its ultimate objective.
• The measurement of results for performance reporting purposes is not accurate, thereby providing faulty information to decision-makers.

6.23 Many of these problems may not be apparent from financial and operating reports alone. Evaluative information is needed.

The value of program evaluation

6.24 Program evaluations can address:
• the needs of the target clients of a program (i.e. program relevance);
• the logic of the program's design;
• the efficiency and effectiveness with which program activities are being carried out and services delivered; and
• the extent to which the program has achieved its objectives (i.e. by focusing both on measurement of results and on the degree to which those results can be attributed to the program).
6.25 Program evaluations can identify deficiencies in a program that may reduce the program's relevance, cost-effectiveness, and/or success in achieving its objectives. Such information is very important for decision-makers, and often not readily available. Information provided by program evaluations can also be used by senior management, legislators, and the public in holding decision-makers to account for the achievement of positive, equitable results with the resources provided to them. For example, the provincial annual report policy requires that certain evaluative information be included in departmental annual reports:

… departments and agencies should give a clear account of goals, objectives and performance indicators. The report should show the extent to which a program continues to be relevant, how well the organization performed in achieving its plans and how well a program was accepted by its client groups.

6.26 The federal document Family Violence Project Evaluation: A Guide contains an excellent definition and description of the roles of program evaluation:

Program evaluation is the independent, systematic gathering and analysis of verifiable information to determine the continued need for a program, its success in meeting its objectives, its results both intended and unintended, and its cost-effectiveness compared with alternative means of program delivery. Specifically program evaluation should provide essential information on three issues of interest:
• Relevance: Does the program continue to be consistent with department and government-wide priorities and to realistically address an actual need?
• Success: Is the program effective in meeting its objectives, within budget and without resulting in significant unwanted outcomes?
• Cost-effectiveness: Is the program the most appropriate and efficient means for achieving the objectives, relative to alternative design and delivery approaches?

Specifically, the roles of program evaluation are to:
• foster and support policy development;
• provide guidance as to how to modify programs to increase productivity or services and more effectively employ resources, and to market needed improvements to the quality of services;
• define, measure, demonstrate and document program performance, and help managers develop a viable set of indicators to monitor and improve performance; and
• determine client satisfaction with program delivery.

6.27 The Treasury Board of Canada Secretariat, in the document Program Evaluation Methods: Measurement and Attribution of Program Results, has also acknowledged the importance of program evaluation:

Evaluating program performance is a key part of the federal government's strategy to manage for results.

6.28 Program evaluation is not a panacea. However, regular evaluations of programs can provide program decision-makers with credible evidence on program relevance, cost-effectiveness, and success in achieving objectives. This is information to which decision-makers may not otherwise have access. And access to this information will increase the probability that optimal program-related decisions will be made.

Formal program evaluations

6.29 Program evaluations can be performed on an informal basis by program managers and other staff members, using information produced by established data systems and anecdotal evidence. They can also be done more formally by departmental program evaluation staff who are independent of program delivery, or by externally-contracted consultants.
Formal program evaluations generally require research to be completed and additional data to be gathered. They usually result in written reports identifying problems and suggesting improvements.

6.30 There is a significant risk involved in relying solely on informal program evaluations.
• Informal evaluators may lack the time to perform comprehensive evaluations. In particular, program managers have other duties (e.g. ensuring service is delivered and day-to-day problems are resolved) that may preclude them from concentrating their efforts on program evaluation.
• Informal evaluators may lack evaluation expertise and experience. For example, informal evaluators may focus on outputs and not consider the extent to which a program is producing tangible outcomes. This is in part because they may lack the technical skills needed to clearly establish the link between program inputs, activities, outputs, and outcomes.
• Informal evaluators may lack independence and objectivity if they are directly involved in the day-to-day operations of the program being evaluated (e.g. program managers may have a vested interest in the status quo).

6.31 In general, the performance of formal program evaluations by dedicated evaluation staff or external consultants can address all of these limitations because it:
• involves staff or consultants with the time to perform comprehensive evaluations;
• involves staff or consultants with adequate training and experience in program evaluation;
• involves staff or consultants who are independent of the programs being evaluated and who can therefore provide objective evaluative information;
• can clearly focus the evaluation on the outcomes produced by the program, rather than on its outputs. Trained, experienced program evaluators can provide better measurement of outcomes achieved and better analysis of the real contribution a program is making to those outcomes; and
• results in the collection of additional, verifiable data about the program that evaluators need in order to make objective judgements about the program.

6.32 We would caution, however, that program management and staff must be consulted regularly throughout the completion of a formal program evaluation. Otherwise, findings and recommendations may not reflect the realities of the program.

How is the Office of the Auditor General involved in program evaluation?

6.33 The Auditor General Act states:

13(2) Each report of the Auditor General under subsection (1) shall indicate anything he considers to be of significance and of a nature that should be brought to the attention of the Legislative Assembly including any cases in which he has observed that ...
(f) money has been expended without due regard to economy or efficiency;
(g) procedures have not been established to measure and report on the effectiveness of programs, where, in the opinion of the Auditor General, the procedures could appropriately and reasonably be used; or
(h) procedures established to measure and report on the effectiveness of programs were not, in the opinion of the Auditor General, satisfactory.

6.34 In other words, our Office has a clear legislative mandate to indicate whether appropriate effectiveness reporting systems are in place. In recent Reports, we have identified deficiencies in the procedures in place to measure and report on the effectiveness of a number of programs.
These have included:
• child day care facilities;
• salmon aquaculture;
• absenteeism management;
• environmental inspections; and
• employment development programs.

6.35 We continue to believe that improvements are needed in departmental effectiveness reporting. We also feel that enhanced departmental program evaluation has the potential to contribute to such improvements.

6.36 Program evaluation is complementary to the activities of our Office. It does not duplicate our work. Therefore, we feel it is important to ensure that this function is being adequately performed by provincial departments.

Responses to our departmental program evaluation survey

6.37 The following sections summarize the responses we received to our departmental program evaluation survey. Of the eighteen responses received, twelve departments responded on the basis of overall departmental operations. The other six departments responded from the perspective of one or more specific programs administered by the department. As the survey responses did not vary with the basis of completion selected by the departments, we have chosen to aggregate the feedback of all eighteen departments.

6.38 In reading the survey summary, it should be kept in mind that departmental responses are a general approximation of the way evaluative information is produced and used. Practices within departments for specific programs may vary. It should also be noted that survey respondents for a few of the departments indicated that improvement initiatives are ongoing in the area of program evaluation. Responses provided reflect the situation as of October 2004.

Evaluation of ongoing programs

6.39 Most existing programs delivered by provincial departments fall under the category of ongoing programs. In many cases these programs have been in place for many years. However, the length of time a program has been in place is not an indicator of how effective it is. It is very important that ongoing programs be evaluated periodically to ensure they continue to be relevant, cost-effective, and successful in achieving their objectives.

6.40 We asked departments to comment on the evaluation of ongoing programs. Departments were first asked what information is normally produced in relation to those programs. A summary of their responses is presented in Exhibit 6.2.

Exhibit 6.2 Information produced for ongoing programs

6.41 From these responses, it appears that departments produce a wide variety of information in relation to ongoing programs that decision-makers can refer to.

6.42 We next asked departments if they evaluate the effectiveness of ongoing departmental programs on a regular basis. Thirteen of the eighteen departments indicated that they do. Some responses noted that departments delivering federal-provincial cost-shared programs are usually required to evaluate the effectiveness of those programs periodically and report their findings to the Government of Canada.

6.43 Four of the five departments that do not regularly evaluate programs indicated that they do some evaluation of programs. Those departments indicated that a lack of resources available for the function, and/or departmental staff having higher priorities that take the available time, precludes the regular evaluation of programs. One department indicated that there was no funding available for the function "until recently".
Comments from these five departments included:

Although program evaluation is a valuable tool to measure performance, it requires additional resources that the Department does not currently have.

Staff shortages … are such that programs can only be evaluated on a periodic basis. …

6.44 Departments were then asked to identify which three types of information produced for ongoing programs are used most often in evaluating the effectiveness of ongoing departmental programs. They were also asked to provide some rationale for those choices. Exhibit 6.3 presents a summary of their responses. All eighteen departments responded to this question.

Exhibit 6.3 Information relied upon most heavily in evaluating program effectiveness - ongoing programs

6.45 The following is a sample of the rationale provided by departments for their choices:

Numerical activity reporting is important because it provides quantitative measures and validates narrative reporting. Information can also be used for trending, benchmarking, and comparing programs in different regions.

Budgetary efficiency is viewed as a good indicator of overall efficiency. The ability to stay within budget is also considered to be a good indicator of successful planning.

Narrative or qualitative reporting is valuable; it promotes on-going communication and allows us to be kept abreast of the current state of the program. Most importantly it is able to flag potential pressures and challenges so that they may be addressed prior to becoming major challenges and/or a provincial issue.

Reports comparing actual and targeted results: Assuming appropriate targets have been set this is an accurate reflection of "success" …

Client surveys are useful since they provide a relatively continuous indication of effectiveness and they are relatively cost-effective.

Formal evaluations are the most useful since they tend to provide the most objective, comprehensive information to management.

6.46 Departments were also asked to identify which three of the types of information produced for ongoing programs are used most often in determining whether or not to continue an existing program. They were again asked to provide some rationale for those choices. Exhibit 6.4 presents a summary of their responses. Not all departments responded to this question, and a few of those that did provided fewer than three choices.

Exhibit 6.4 Information relied upon most heavily in determining whether to continue a program

6.47 Cost-budget comparisons proved to be the most popular choice for departments. The following is a sample of the rationale provided by departments for their selections:

Cost/budget comparisons. Every dollar spent on programs is a scarce resource; therefore, in delivering programs it is essential that the cost of the program not exceed the budget allotment.

Numerical activity reports provide objective quantitative information that, in conjunction with other qualitative information, may help assess the on-going need for a program.

Client acceptance/satisfaction information. This provides information about the popularity of a given program with the general public or client group and is a good measure of overall performance. …

Comparing actual to targeted results allows senior management to address the efficiency of a program from a policy perspective.

Narrative activity reports. Narrative style reports are best able to capture the real-world consequences of a program cut. They are concise and easy to interpret.
A decision to discontinue a program would normally be made on the basis of the results of a formal evaluation. ... The report not only has conclusions, it contains recommendations which have to be addressed by a management body and then approved by the departmental senior management committee.

6.48 One department also provided the following comment in relation to this question:

Although not applicable, it should be noted that the most important considerations when determining whether to keep a program or cut it during a budget crunch have little to do with performance, but rather whether or not there is a legislated requirement to deliver the program. …

6.49 We asked departments to indicate, in their opinion, the two most important factors that are considered in deciding upon the level of resources to be provided for individual programs during the budget process. Exhibit 6.5 summarizes departmental responses to that question. All eighteen departments responded to this question, although one department provided only one choice.

Exhibit 6.5 Key factors in allocating funding to programs

6.50 From the responses, it is clear that departments in general focus heavily on the departmental budget and on linkage with the strategic plan. Fewer departments chose performance-related factors such as actual expenditures, recommendations from formal program evaluations or other performance-related factors. Comments from departments relating to this question included:

It should be noted that the Senior Management Committee of the Department plays a significant role in interpreting and prioritizing various programs and initiatives. The priorities in the strategic plan provide valuable context for such decisions.

The departmental strategic plan is tied to the government's prosperity plan and government's stated policy objectives. Therefore, it is a critical element in evaluating all departmental programs. …

Evaluation of potential new programs

6.51 Perhaps one of the most difficult tasks associated with providing good programming to New Brunswick residents is creating effective new programs. Many factors must be taken into account, and many steps must be taken, to ensure that new programs achieve what was envisaged for them.

6.52 We asked departments to indicate the tasks that are completed when developing, designing and implementing a new program. A summary of their responses is shown in Exhibit 6.6.

Exhibit 6.6 Tasks completed in developing, designing and implementing new programs

6.53 Other items described by the departments included:
• determination of the wishes of government and government approval; and
• preparation of communication materials such as brochures and bulletins to advise the general public about new programs.

6.54 The survey then asked which departmental staff are assigned responsibility for new program development, design and implementation. Departments provided the following feedback. Note that in some departments, responsibility is shared between more than one group.
• Program directors/management are assigned responsibility in fifteen departments.
• The departmental planning branch is assigned responsibility in five departments.
• Senior management is assigned responsibility in four departments.
• A departmental team is assigned responsibility in one department.
The department described its unique approach as follows:

Program Monitoring and Development Division is responsible for leading new program design but the department operates on a team based approach. A program design team would include members from the regions, and the finance, information technology, policy, and planning branches.

They also say:

The department has added a number of new programs in the past few years as a result of federal/provincial agreements. Program evaluation is a component of program design and the program evaluators participate in all program designs.

Evaluation of pilot programs

6.55 An alternative means of testing program ideas, without going to the expense of full implementation, is to carry out a pilot program. Intrinsic to the use of pilot programs for program decision-making, though, is the need to set clear objectives, capture data that will allow the effectiveness of the pilot to be assessed, and carry out that assessment once the pilot program has been completed. In many cases, a decision whether to continue or expand the pilot program, or even to fully implement the program on a global basis, will be based almost entirely on the results of the pilot.

6.56 We asked departments if they had undertaken pilot programs in the last three years. Nine of the departments indicated that they had. Those nine departments reported that approximately forty pilot programs have been undertaken in the past three years, some of which are still ongoing. The Departments of Education, Tourism and Parks, and Family and Community Services have used pilot programs most often during the past three years.

6.57 We also asked departments what information is normally generated in relation to pilot programs. Exhibit 6.7 summarizes their responses.

Exhibit 6.7 Information produced for pilot programs

6.58 Departments were then asked which three of these types of information are considered most useful when determining whether to continue, expand, defer, or discontinue a pilot program. A summary of their responses can be seen in Exhibit 6.8. One department provided four choices, all of which were included in our tabulation.

Exhibit 6.8 Information relied upon most heavily in evaluating a pilot program

6.59 The following is a sample of the rationale provided by departments for their choices:

Cost budget comparisons – At the pilot stage, the ability of a program to proceed within budget is a major consideration.

Numerical activity reports – These provide quantitative, statistical information about the performance of the program.

Formal evaluations form the basis for decisions to roll-out pilot projects.

Formal program evaluations

6.60 As we indicated earlier in this chapter, we feel that formal program evaluations provide a depth of information relating to program effectiveness that may not be available elsewhere. Consequently, the survey asked departments some questions about the use of formal program evaluations as a means of obtaining program decision-making information. For the purposes of this section, formal program evaluations are defined as program evaluations resulting in reports with comments and recommendations relating to program effectiveness and accountability.

6.61 Of the eighteen departments surveyed, fourteen indicated that they complete formal program evaluations and four indicated that they do not.
6.62 Departments that do not carry out formal evaluations indicated either that they are not resourced for this function or that alternate sources of evaluative information are considered sufficient. One department also made a valid point that has been a criticism of formal program evaluations in the past:

It is critical for the public service to provide timely information for the decision-making process. Formal evaluations often do not allow for this timely production of key data. … In addition, formal evaluations tend to be time and resource intensive, require "technical" expertise that is not readily available, often take a long time to complete, and have not proven to be highly useful as timely decision-making tools. Evaluators are generally not particularly familiar with programs and often do not have the necessary credibility with program managers and staff re objectivity, environmental awareness/sensitivity, etc.

6.63 This comment points to the need to involve program management throughout the process when formal program evaluations are undertaken, and to clearly establish up front the expected outputs of formal program evaluations and the deadline by which those outputs are needed.

6.64 Of the fourteen departments that indicated they carry out formal program evaluations, three currently have program evaluation units within their departments. Units exist within the Department of Family and Community Services, the Department of Education, and the Department of Training and Employment Development. Among those units, one has recently added staff while the other two have had staff cuts. One other department had a program evaluation branch in the past, but it was eliminated during a recent restructuring exercise. That department's program evaluation responsibilities have been reassigned to its internal audit unit.

6.65 The following comments are from some of the eleven departments that do formal program evaluations but do not have a program evaluation unit:

Staff have evaluation responsibilities along with other responsibilities. Evaluations are managed as projects and staff are assigned to the project based on subject knowledge.

While we do not have staff that is dedicated solely to the task of "formal program evaluation", three or four of our staff members perform this function as driven by business requirements.

The role of the … Division has been fundamentally changed as a result of budget decisions thus reducing significantly the [department's] capacity to do formal program evaluations either for internal use or for departmental clients.

6.66 All fourteen of the departments that said they carry out formal program evaluations indicated they have hired third party consultants to perform formal evaluations. On average, each of the fourteen departments has hired four to five third party consultants over the last three years.

Evaluation guidelines

6.67 We asked departments whether there is a departmental policy or framework for the evaluation of programs managed by their departments. Four indicated that such a document exists for their department; the other fourteen indicated that it does not. Some, but not all, of these frameworks are driven by the desire of the Government of Canada to have cost-shared programs evaluated. Completion of periodic evaluations is often a requirement for receiving federal funding under these programs.
For example, one department stated:

… The evaluation framework is included in the Implementation Agreement between the Government of Canada and the Government of New Brunswick …

6.68 Various authorities recognize that having a comprehensive and effective program evaluation framework is very important in ensuring that programs are relevant, successful, and cost-effective. For example, the Treasury Board of Canada Secretariat developed the document Program Evaluation Methods: Measurement and Attribution of Program Results, referred to previously in this chapter, to provide guidance to federal departments and agencies.

General departmental comments

6.69 We asked departments to rate the overall effectiveness of program evaluation as currently practiced in their departments. Four departments rated departmental effectiveness in evaluating programs as excellent; eleven rated it as being at an acceptable level; and three rated it as needing improvement.

6.70 Some examples of the comments made by departments in rating their program evaluation effectiveness follow:

The Department as a whole values program pilots, evaluation and ongoing monitoring. …

Program evaluation at [the Department] is a formalized activity. An Audit and Evaluation Committee meets bi-monthly to develop/review annual evaluation work plans, to consider evaluation reports and management responses. This ensures that evaluation results are taken seriously in program design and delivery. Program evaluation is a mandatory component of program or pilot design. …

For those programs that are evaluated by the Department, the process used is considered effective. With additional resources, the Department would likely have a greater opportunity to broaden its evaluation practices. …

Resource limitations and the lack of formal guidance on how to conduct program evaluation have been identified as the limiting factors for such programs.

Currently, program evaluation is irregular, inconsistent and not part of regular management activities. Our formal internal process is in draft form and untested.

6.71 Departments were also asked what improvements they would like to see in the way programs are evaluated within their department. Here are some of the comments they provided:

Resources (human or financial) to carry out more regular evaluations would enhance capability to assess effectiveness and adjust programming on a regular basis. … It would be helpful for a departmental evaluation model to be developed.

Increased publication of the Department's programs and their outcomes, including the release of evaluation information.

A departmental guidance document for managers on how to properly conduct program evaluation.

A greater focus on client needs. …

Ideally, a dedicated internal audit/evaluation group within the department would be able to provide a high quality evaluation service. However, with current budget pressures, this option is not feasible. …

In principle, an evaluation capacity should be built into every program. This means having clearly stated goals/objectives, and specified program performance measures and indicators that are collected, compiled, analyzed, monitored and reported on a regular, timely basis in a standard format to program and senior department managers. …

6.72 Departments also provided other comments related to program evaluation.
For example, one department summed up the trade-off implicit in any decision to provide more resources for the program evaluation function in government as follows:

A more formal evaluation process would require the reassignment of existing resources from program delivery to program evaluation. Due to the impact on clients, it would be difficult to justify such a reallocation of resources. …

6.73 Several departments also indicated that while departments are involved in delivering programs, it is ultimately government that decides which programs are to be delivered. Comments included:

... many program decisions are based on changes to [program] goals and objectives by central government. …

Determining if and when to implement changes identified/recommended as being warranted is generally the prerogative of the elected government, not the public service …

… elected governments determine what programs are started and continued, while the public service determines how they are designed and operated …

6.74 The implication was that departments generally do not decide which programs they will deliver or what the objectives of those programs will be. However, we believe that departments are in a position to influence those decisions by providing accurate and timely evaluative information that will be used by governments in making programming decisions. As one department stated:

Making the best use of public sector resources is certainly an important and "timeless" concern, and identifying opportunities to improve effectiveness, efficiency and quality is an essential function of public sector managers and employees. … It is the responsibility of the public service to provide elected officials with the timely, accurate and comprehensive information they need to make sound decisions on public policy and programs.