In Brief | Program Evaluation in the Public Sector

Program evaluation - the systematic assessment of the appropriateness, effectiveness, efficiency and/or sustainability of a program or its parts - is one of the critical tools available to assess program performance. As such, it is valuable to governments, external decision-makers and other stakeholders. This In Brief introduces some resources on program evaluation in the public sector.

If you have any feedback, please contact us.


The purpose of government is to provide the parameters for everyday behaviour for citizens, protect them from outside interference, and often provide for their well-being and happiness. However, policy makers can get it wrong, be ineffective or fail to foresee unintended consequences. There is often considerable debate about whether government action has actually led to an improvement and, if so, the extent of the gains.

In their March 2019 research paper for the Independent Review of the Australian Public Service, Matthew Gray and J. Rob Bray quote the UK Treasury's Magenta Book Guidance for Evaluation: "Evaluation examines the actual implementation and impacts of a policy to assess whether the anticipated effects, costs and benefits were in fact realised. Evaluation findings can identify ‘what works’, where problems arise, highlight good practice, identify unintended consequences or unanticipated results and demonstrate value for money, and hence can be fed back into the appraisal process to improve future decision-making".

In a 2010 presentation to the Canberra Evaluation Forum, then Department of Finance (DoF) Secretary David Tune emphasised that program evaluation "is built on the premise that better quality decisions around program relevance and appropriateness will be made if the process is informed by robust evidence."


Program evaluation in government aims to:

  • assess the continued relevance and priority of program objectives in the light of current circumstances, including government policy changes (that is, appropriateness of the program);
  • test whether the program outcomes achieve stated objectives (that is, its effectiveness); and
  • ascertain whether there are better ways of achieving these objectives (that is, its efficiency).

Doing evaluation well includes defining desired outcomes at the outset, along with how progress will be monitored (including KPIs); cultivating capabilities and commitment to evaluation; and communicating results and impact.

Effective program evaluation contributes to improved accountability to the Parliament and the public; provides a better information base to help managers improve program performance; and assists government decision-making and priority-setting, particularly in the Budget process. It is therefore of considerable value to agency managers, external decision-makers and other stakeholders, and contributes to sound management practice. See the Australian Public Service Commission's Program evaluation in the public sector page for some ideas about the reasons for, and mechanics of, program evaluation.


Until the 1980s, processes for the assessment and improvement of public policies were relatively ad hoc and focused more on desktop policy review. They then moved to a centralised, comprehensive evaluation approach driven by the Department of Finance during implementation of the Program Management and Budgeting (PMB) reforms. The key changes included the introduction of program budgeting and subsequent financial and personnel management reforms, giving line agencies and program managers greater control of, and responsibility for, program performance. As part of this, there was a focus on the collection of performance information and on program evaluations.

These reforms were introduced with the specific aim of making the APS more responsive to client needs and more efficient, effective and accountable. A subsequent DoF survey, the results of which were analysed in the Australian Journal of Public Administration, showed that evaluation findings had provided better information to inform Cabinet deliberations, thus indicating one of the key benefits of timely and effective program evaluation.

Further budget reform under the auspices of the Public Governance, Performance and Accountability Act 2013 (PGPA Act) placed renewed emphasis on demonstrating the value created when public resources are used. This reform created opportunities for evaluators to use their tools and thinking to frame questions about what counts as meaningful performance information in particular circumstances, and what this information says about the extent to which an entity or company is achieving its purposes. The enhanced Commonwealth Performance Framework under the PGPA Act aims to improve the line of sight between what was intended and what was delivered. Commonwealth entities and companies are required to prepare corporate plans at the beginning of the reporting cycle and to produce annual performance statements at the end of the reporting cycle.


Various reforms to the budget process contributed to the demise of DoF's central role in program evaluation, with responsibility essentially devolving to individual agencies. Agency-led reviews (as evidenced by lapsing program reviews) were of variable quality and usually not visible. This lack of useful evaluation information across the APS was mirrored by uneven evaluation capability within individual agencies. Research commissioned by the Independent Review of the APS found that the government's current approach to evaluation is piecemeal in both scope and quality.

This reiterates sentiment in a 2017 submission to the Review of the PGPA Act by the Australasian Evaluation Society, which identified the incorporation of evaluation findings into performance measurement and reporting as an area requiring improvement. Subsequently, Recommendation 4 of the 2018 report on the Review of the PGPA Act recommended that "The Secretaries Board should take initiatives to improve the quality of performance reporting, including through more effective and informed use of evaluation, focusing on strategies to improve the way Commonwealth entities measure the impact of government programs."

In their 2018 article Evaluators and the Enhanced Commonwealth Performance Framework, Brad Cook and Dave Morton conclude that there is a clear role for evaluators in helping entities adjust to the requirements of the PGPA Act and in building the capability of ‘performance professionals’ across the public sector. In particular, the authors identify that effective understanding and use of the evaluators’ toolbox – for example, program theory and qualitative analysis – will improve the quality of published performance information available to the Commonwealth’s stakeholders.


Program evaluation and performance auditing are sometimes seen as interchangeable, but there are differences, as this article in Public Administration Review discusses. The Australian National Audit Office's (ANAO) performance audit activities involve the independent and objective assessment of all or part of an entity’s operations and administrative support systems. See the ANAO website for more on performance auditing.

The Parliament's Joint Committee of Public Accounts and Audit and ANAO reports are often critical of the availability of useful evaluation information, which in turn makes performance management, policy and budget decision-making, and prioritisation more difficult. Importantly, a de-emphasis on evaluation is a risk to skills and expertise. See the ANAO's 2015 Performance Audit into Defence Testing and Evaluation as an example of a report that identifies risks associated with skills and expertise in evaluation. Former Auditor-General Ian McPhee explores the differences and similarities between audit and evaluation in the Australian context in this 2006 address to the Canberra Evaluation Forum.

Going back to 1997, the ANAO published a general report Program Evaluation in the Australian Public Service which looked at approaches to evaluation planning, the conduct of individual evaluations, the quality of evaluation reports and the impact of evaluations. The report recommended that "Agencies should examine the report, particularly the issues covered and suggestions for good practice, to determine what mix of approaches best suit their own agency’s situation and program management." This was re-emphasised more than a decade later in the Ahead of the Game: Blueprint for Australian Government Administration report which identified that agencies still had a clear need to build and embed a stronger evaluation and review culture to underpin reform agendas.


Program evaluations can be conducted internally or externally, ranging from evaluation by those directly responsible for implementation, to centralised evaluation within agencies, to evaluation by a different agency within government, through to the appointment of external evaluators, usually from the private sector or academia. Policy review and public inquiries are part of the spectrum of evaluation activities. See the Department of Prime Minister and Cabinet's Implementation Toolkit for some tools, techniques and checklists for effective performance evaluation.

This evaluation guide from Western Australia outlines the role of evaluation as a key component in the policy cycle, the principles of good evaluation practice, a strategic approach to evaluation, different types of evaluation and when they might be used, how to conduct an evaluation, and the use of findings from an evaluation for better decision-making. A starting principle for the guide is the creation of an evaluation culture to ensure the best possible economic and social returns.

For those who want to explore evaluation tools more deeply, this page lists free resources for program evaluation and social research methods. ANZSOG's Evidence and Evaluation Hub is a centre of expertise developed to strengthen the capacity of the public and not-for-profit sectors to use evaluation and other evidence to support decision-making and practice. See also this beginner's guide from the US for a general introduction to program evaluation, and this DIY Program Evaluation Toolkit from Grosvenor Advisory, which has a public sector focus, including key legislation, policies and guidance in Australian jurisdictions.


Randomised controlled trials (RCTs) – common in medical research – are sometimes considered the gold standard of evaluation tools. MP and former economics professor Andrew Leigh would like to see more randomised trials in policy development because of their ability to compare the effectiveness of new interventions against what would have happened if nothing had changed. Read his 2003 paper Randomised Policy Trials for more, including examples where policy knowledge has been advanced through RCTs.
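The logic of an RCT can be illustrated with a minimal simulation (the program, the outcome scores and the assumed +3 effect below are entirely hypothetical): because participants are assigned at random, the control group estimates the counterfactual, and a simple difference in means estimates the program's effect.

```python
import random
import statistics

random.seed(42)

# Hypothetical outcome scores for 2,000 eligible participants.
population = [random.gauss(50, 10) for _ in range(2000)]

# Random assignment: shuffle, then split into control and treatment groups.
random.shuffle(population)
control = population[:1000]                           # no intervention
treated = [score + 3 for score in population[1000:]]  # assumed +3 program effect

# The control group stands in for "what would happen if nothing had changed",
# so the difference in group means estimates the program's impact.
effect = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated program effect: {effect:.2f}")
```

With random assignment the estimate lands close to the assumed effect, differing only by sampling noise; a real evaluation would also report a confidence interval or significance test rather than a point estimate alone.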

One of the policy areas where RCTs are routinely used is behavioural economics. Behavioural economics helps us understand how people behave in practice – that humans have cognitive limits and don’t always make rational, economically sound choices. It is increasingly being applied by governments to design more effective public policies based on actual, rather than assumed, behaviour. Read our In Brief on Behavioural Economics for more.


Evidence-based policy refers to policy decisions that are informed by rigorously established objective evidence. The term was popularised in the United Kingdom, where it reflected a desire to move away from ideologically based decision-making and included evaluation techniques to track progress, make necessary adjustments and assess the effectiveness of government action while learning lessons for the future.

In his 2009 report for the Productivity Commission, Gary Banks notes that most policies are experiments and that "without evidence, policy makers must fall back on intuition, ideology, or conventional wisdom — or, at best, theory alone. And many policy decisions have indeed been made in those ways." He also argues that all government programs should be designed and funded with future evaluation and review in mind: targeted evaluation and outcome monitoring can determine whether programs are achieving desired results and can test programs that lack strong evidence of effectiveness.


On 4 April 2019, Dr Nicholas Gruen presented at the Canberra Evaluation Forum on evidence-based policy, exploring the intellectual and organisational requirements of an evidence-based culture and the approach to establishing an Evaluator-General, which he has proposed in his submission to the Review of the Australian Public Service. Read his earlier articles for The Mandarin on evidence-based policy and evaluation here and here.

There are some great resources on the Canberra Evaluation Forum page on the IPAA ACT website, including presentation materials from past events. The Australasian Evaluation Society (AES), which exists to improve the theory, practice and use of evaluation in Australasia, also has useful resources on its site, including the Evaluation Journal of Australasia (subscription required).

