Evidence-based vocational education and training (VET) policy development – instead of experimentation at public expense


Guided reading and extracts from:  Challenges of evidence-based policy-making, Gary Banks AO, http://bit.ly/1rxjmvw

The fundamental point of the desirability of market forces in VET has almost always been resolved simply by assertion, often with reference back to a report which had previously made the same act of faith. ‘Skills for Australia,’ a discussion paper issued by then education minister John Dawkins in 1987, quoted in his next document, ‘A Changing Workforce’ in 1988, set off this self-referential chain.  (Robin Ryan, Evidence free policy, Campus Review, 17 November 2008, 11)

A contemporary example: The Australian Vocational Education and Training System.

There are undoubted political benefits that come from avoiding policy failures or unintended ‘collateral damage’ that can rebound on a Government, and from enhancing the credibility of reformist initiatives.

Much policy analysis, as anyone in the public service will know, actually occurs behind closed doors. A political need for speed, or defence against opportunistic adversaries, is often behind that. But no evidence is immutable. If it hasn’t been tested, or contested, we can’t really call it ‘evidence’. And keeping analysis behind closed doors misses the opportunity to educate the community about what is at stake in a policy issue, and thereby to make it more accepting of the policy initiative itself.

The head of Infrastructure Australia’s secretariat … commented in the following terms about many of the infrastructure proposals submitted to that body: ‘the linkage to goals and problems is weak, the evidence is weak, the quantification of costs and benefits is generally weak.’

In situations where government action seems warranted, a single option, no matter how carefully analysed, rarely provides sufficient evidence for a well-informed policy decision.

The reality, however, is that much public policy and regulation are made in just that way, with evidence confined to supporting one, already preferred way forward. Hence the subversive expression, ‘policy-based evidence.’

Without evidence, policy-makers must fall back on intuition, ideology, or conventional wisdom—or, at best, theory alone. And many policy decisions have indeed been made in those ways. But the resulting policies can go seriously astray, given the complexities and interdependencies in our society and economy, and the unpredictability of people’s reactions to change.

Consultants often cut corners. Their reports can be superficial, and more fundamentally, they are typically less accountable than public service advisers for the policy outcomes.

It is as important that we have a rigorous, evidence-based approach to public policy in Australia today as at any time in our history. This country faces major long-term challenges; challenges that have only been exacerbated by the economic turbulence that we are struggling to deal with right now. When the present crisis is over, we will still have the ongoing challenges of greenhouse, the ageing of our population and continuing international competitive pressures. We should not underestimate the significance of those challenges, which place a premium on enhancing the efficiency and productivity of our economy.

The steps in forming evidence-based policy

1. WHAT constitutes real evidence?

Methodology – The analytical approach allows for proper consideration of the problem

Capacity – Research skills are sufficient to undertake the analysis

2. WHEN is adequate evidence available to inform decisions?

Time – To harvest existing data, gather new data and test the analysis

Good data – High-quality databases support timely analysis

3. HOW can credible evidence be ensured?

Transparency – Open debate and discussion to test the evidence and educate the public

Independence – Incentives to deliver advice in the public interest

4. A receptive policy environment

Willingness to test policy options, and the structures and resources to do so.

 Evidence-Based Policy

The essential ingredients

For evidence to discharge these various functions, however, it needs to be the right evidence; it needs to be available at the right time and be seen by the right people. That may sound obvious, but it is actually very demanding. I want to talk briefly now about some essential ingredients in achieving it.

Methodology matters

First: methodology. It’s important that, whatever analytical approach is chosen, it allows for a proper consideration of the nature of the issue or problem, and of different options for policy action.

Half the battle is understanding the problem. Failure to do this properly is one of the most common causes of policy failure and poor regulation. Sometimes this is an understandable consequence of complex forces, but sometimes it seems to have more to do with a wish for government to take action regardless.

Even when the broad policy approach is clear, the particular instruments adopted can make a significant difference. Thus, for example, economists overwhelmingly accept the superiority of a market-based approach to reducing carbon emissions, but they differ as to whether a cap-and-trade mechanism or an explicit tax (or some combination of the two) would yield the best outcomes. Australia’s apparent haste to embrace the trading option remains contentious among some prominent economists, illustrated by recent public advocacy by Geoff Carmody (in support of a consumption-based tax) and Warwick McKibbin (in support of a ‘hybrid’ scheme, with trading and taxation components).

How one measures the impacts of different policies depends on the topic and the task—and whether it’s an ex-ante or ex-post assessment. There is a range of methodologies available. There is also active debate about their relative merits. Nevertheless, all good methodologies have a number of features in common (two of which are sketched in the example after this list):

  • they test a theory or proposition as to why policy action will be effective—ultimately promoting community wellbeing—with the theory also revealing what impacts of the policy should be observed if it is to succeed
  • they have a serious treatment of the ‘counterfactual’; namely, what would happen in the absence of any action?
  • they involve, wherever possible, quantification of impacts (including estimates of how effects vary for different policy ‘doses’ and for different groups)
  • they look at both direct and indirect effects (often it’s the indirect effects that can be most important)
  • they set out the uncertainties and control for other influences that may impact on observed outcomes
  • they are designed to avoid errors that could occur through self-selection or other sources of bias
  • they provide for sensitivity tests, and
  • importantly, they have the ability to be tested and, ideally, replicated by third parties.
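To make two of these features concrete (the counterfactual and sensitivity testing), here is a minimal numerical sketch in Python. The figures are entirely hypothetical and are not drawn from any study; the point is only to show how an estimate changes once a comparison group stands in for what would have happened anyway, and how a simple sensitivity test bounds the result.

```python
# Minimal sketch (hypothetical numbers) of a counterfactual comparison:
# a naive before/after estimate versus a difference-in-differences estimate
# that uses an untreated comparison group as the counterfactual.

# Average outcome (say, an employment rate in %) before and after a programme
treated_before, treated_after = 62.0, 68.0        # group exposed to the programme
comparison_before, comparison_after = 61.0, 65.0  # similar group, not exposed

# A naive before/after estimate ignores what would have happened anyway
naive_effect = treated_after - treated_before                 # 6.0 points

# Difference-in-differences nets out the counterfactual trend
counterfactual_trend = comparison_after - comparison_before   # 4.0 points
did_effect = naive_effect - counterfactual_trend              # 2.0 points

print(f"Naive before/after estimate:        {naive_effect:.1f} points")
print(f"Difference-in-differences estimate: {did_effect:.1f} points")

# A crude sensitivity test: how does the estimate move if the comparison
# group's trend is mismeasured by one point either way?
for shift in (-1.0, 0.0, 1.0):
    adjusted = naive_effect - (counterfactual_trend + shift)
    print(f"Comparison trend shifted {shift:+.1f}: estimated effect {adjusted:.1f} points")
```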

Examples

Australia has been at the forefront internationally in the development and use of some methodologies. For example, we have led the world in ‘general equilibrium’ modelling of the ‘direct and indirect effects’ of policy changes throughout the economy. Indeed, the Industries Assistance Commission, with its ‘Impact Project’ under Professors Powell and Dixon, essentially got that going.

But Australia has done relatively little in some other important areas, such as ‘randomised trials’, which can be particularly instructive in developing good social policy. We seem to see a lot more, proportionately, of this research being done in the USA, for example.
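As an illustration of why randomised trials carry so much evidentiary weight, the sketch below simulates a hypothetical trial: random assignment makes the two groups comparable on average, so the simple difference in mean outcomes recovers the effect that the simulation builds in. All numbers are invented for illustration.

```python
# Minimal sketch of why random assignment matters: randomisation makes the
# treatment and control groups comparable on average, so the difference in
# mean outcomes estimates the programme effect. All figures are invented.
import random
import statistics

random.seed(0)

# A hypothetical population of 1,000 people with varying baseline outcomes
population = [random.gauss(50, 10) for _ in range(1000)]

# Randomly assign half to the programme and half to a control group
random.shuffle(population)
treatment, control = population[:500], population[500:]

ASSUMED_EFFECT = 3.0  # the effect we build into the simulation
treated_outcomes = [x + ASSUMED_EFFECT for x in treatment]
control_outcomes = control

estimate = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
print(f"Estimated effect from the trial: {estimate:.2f} (assumed effect {ASSUMED_EFFECT})")
```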

Cost-Benefit – part of the story

Most evidence-based methodologies fit broadly within a cost-benefit (or at least cost-effectiveness) framework, designed to determine an estimated (net) payoff to society. It is a robust framework that provides for explicit recognition of costs and benefits, and requires the policy-maker to consider the full range of potential impacts. But it hasn’t been all that commonly or well used, even in relatively straightforward tasks such as infrastructure project evaluation.

The head of Infrastructure Australia’s secretariat recently commented in the following terms about many of the infrastructure proposals submitted to that body: ‘the linkage to goals and problems is weak, the evidence is weak, the quantification of costs and benefits is generally weak.’
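The basic arithmetic the framework asks for is not complicated. The sketch below, using wholly invented figures, discounts each year’s benefits and costs to estimate a net present value, and repeats the calculation at several discount rates, since on a marginal project the choice of rate alone can change the verdict.

```python
# Minimal sketch (invented figures) of the cost-benefit arithmetic: discount
# each year's benefits and costs and check whether the estimated net payoff
# to society is positive. Sensitivity to the discount rate is shown as well.

def net_present_value(benefits, costs, discount_rate):
    """Sum of discounted (benefit - cost) flows over the project's life."""
    return sum(
        (b - c) / (1 + discount_rate) ** t
        for t, (b, c) in enumerate(zip(benefits, costs))
    )

# Year-by-year flows in $m: a large up-front cost, with benefits arriving later
benefits = [0, 20, 40, 40, 40, 40]
costs = [120, 10, 5, 5, 5, 5]

for rate in (0.04, 0.07, 0.10):  # test sensitivity to the discount rate
    npv = net_present_value(benefits, costs, rate)
    print(f"Discount rate {rate:.0%}: NPV = {npv:+.1f} $m")
```

With these particular numbers the project looks worthwhile at 4 per cent, roughly breaks even at 7 per cent, and loses money at 10 per cent, which is exactly the kind of sensitivity a decent appraisal should report rather than bury.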

Quantification of the more ‘subjective’ social or environmental impacts – another part of the story

It is very welcome, therefore, that Infrastructure Australia has stressed that any project which it recommends for public funding must satisfy rigorous cost-benefit tests. It is particularly important, as Minister Albanese himself has affirmed, that this includes quantification of the more ‘subjective’ social or environmental impacts; or, where this proves impossible, that there is an explicit treatment of the nature of those impacts and the values imputed to them.

In the past, this has proven the ‘Achilles heel’ of cost-benefit analyses for major public investments: financial costs are typically underestimated, non-financial benefits overstated.

Rubbery computations of this kind seem to be endemic to railway investment proposals, particularly ‘greenfield’ ones, which rarely pass muster on the economics alone. It is disquieting to observe, therefore, that rail projects feature heavily among the initial listing by Infrastructure Australia of projects warranting further assessment, totalling well over $100 billion.

Among these we find such old chestnuts as a light rail system for the ACT, and a Very Fast Train linking Canberra with Sydney and Melbourne. The rail proposals are not alone in evoking past follies, however. I note that expansion of the Ord River Scheme is also on the list.

It is undoubtedly challenging to monetise some of the likely costs and benefits associated with certain areas of public policy. But often we don’t try hard enough. There are nevertheless some examples of creative attempts. These include work by the Productivity Commission in areas such as gambling, consumer protection policy and even animal welfare.

A coherent analytical framework

The key is to be able to better assess whether benefits are likely to exceed costs, within a coherent analytical framework, even if everything cannot be reduced to a single number, or some elements cannot be quantified. Thus in our gambling and consumer policy reports, for example, we could only provide estimates of net benefits within plausible ranges. In the analysis required under the National Competition Policy of the ACT Government’s proposal to ban trade in eggs from battery hens, we quantified the likely economic costs and identified the potential impacts on the birds. However, we stopped short of valuing these, as such valuations depend on ethical considerations and community norms that are best made by accountable political representatives.

Good data is a pre-requisite

A second essential ingredient, of course, is data. Australia has been very well served by the Australian Bureau of Statistics and the integrity of the national databases that it has generated. But in some areas we are struggling. Apart from the challenges of valuing impacts, and disentangling the effects of simultaneous influences, we often face more basic data deficiencies. These are typically in social and environmental rather than economic domains, where we must rely on administrative collections—or indeed there may be no collections at all.

Data problems bedevil … the human capital area. Preventative health strategies and pathways of causal factors are one example. Indigenous policy provides another striking one, involving a myriad of problems to do with identification, the incidence of different health or other conditions, and their distribution across different parts of the country—all of which are very important for public policy formation.

Crucial education area

In the crucial education area, obtaining performance data has been an epic struggle, on which I will comment further. In the COAG priority area of early childhood development, a recent survey article from the Australian Institute of Family Studies concludes:

The dearth of evaluation data on interventions generally … makes it impossible to comment on the usefulness of early childhood interventions as a general strategy to sustain improvements for children in the long term.

Data deficiencies inhibit evidence-based analysis for obvious reasons. They can also lead to reliance on ‘quick and dirty’ surveys, or the use of focus groups, as lampooned in The Hollow Men. A colleague has observed that a particular state government he had worked in was a frequent user of focus groups. Focus groups have a purpose, but a more superficial one, better suited to informing marketing than to analysing potential policy impacts.

The other risk is that overseas studies will be used inappropriately as a substitute for domestic ones. Sometimes this is akin to the old joke about the fellow who loses his keys in a dark street, but is found searching for them metres away under a lamp post, because there is more light there. Translating foreign studies to Australia can be perilous, given different circumstances and the scope for misinterpretation.

One topical example is the celebrated work by James Heckman in the USA demonstrating the benefits of preschool education based on the Perry Programme. That work has become a policy touchstone for advocates of universal intensive preschool education in Australia. While that policy may well prove to be sound, Heckman’s work does not provide the necessary evidence. As he himself has clearly acknowledged, the Perry Project was confined to disadvantaged children. And the main gain from the intensive preschool treatment that those kids got came from reduced crime. So if there is relevance for the Perry work in Australia, it may be mainly confined to areas where there is concentrated disadvantage.

A major failing – not generating the data needed to evaluate our own programs

A major failing of governments in Australia, and probably worldwide, has been in not generating the data needed to evaluate their own programmes. In particular, there has been a lack of effort to develop the baseline data essential for before-and-after comparisons. As an aside, I should note that quite often even the objectives of a policy or programme are not clear to the hapless reviewer. Indeed, one of the good things about having reviews is that they can force some clarification as to what the objectives of the policy should have been in the first place. Examples of policies with unclear objectives from the Commission’s current work programme include the Baby Bonus, drought assistance and the restrictions on the parallel importing of books.

In the Commission’s first gambling inquiry, we had to undertake a national survey to get a picture of the social impacts, as there were no good national data around. We recommended that, in future, consistent surveys should be undertaken periodically, but this has not happened; the field has become a bit of a shemozzle, and we seem to be confronting the same problems again in revisiting this topic 10 years on. Moreover, while in this time there have been a multitude of harm minimisation measures introduced by different jurisdictions around the country, very few of those were preceded by trials or pilots to assess their cost-effectiveness, or designed with the need for evaluation data in mind.

In the Indigenous field, even the much-anticipated COAG Trials lacked baseline data. The only exception, as I recall, was the Wadeye Trial, but those data were derived from a separate research exercise, which took place before the trials commenced. More generally, we don’t even know how much money has been spent on Indigenous programmes, let alone how effective those programmes may have been. There is currently an initiative underway to remedy that, through a new reporting framework involving all jurisdictions, with secretariat support from the Productivity Commission.

Data collections funding being cut

Overall, we are seeing funding for data collections actually being cut. This is partly a consequence of the so-called ‘efficiency dividend’ in the public sector and the blunt way it is imposed. A consequence is that in agencies that have responsibility for collecting data, vital survey information and other data collections are being jeopardised. This seems particularly perverse at a time when governments are seeking to promote evidence-based policy-making.

In contrast, Australia has made great strides in assembling comparable performance data across jurisdictions through the Government Services Review. This is currently being reviewed by a COAG Senior Officials group. Foreign government officials visiting Australia have often expressed astonishment at what we have achieved, and international agencies such as the OECD and the UN have praised the Blue Book and Overcoming Indigenous Disadvantage reports.

Australia could and should have done a lot more

But Australia could and should have done a lot more to take advantage of its federal system as a natural proving ground for policy learning across jurisdictions. Indeed, in some cases, rather than encouraging data provision to enable comparisons across jurisdictions, the basis for such comparisons has actually been suppressed.

I mentioned earlier a lack of data on school learning outcomes. Such data are better now than in the past, but it has been a real struggle. And the data we have managed to collect and publish are highly aggregated. They certainly haven’t got down to the level of individual schools, and they involve very weak tests that don’t reveal much about comparative learning outcomes across the country. The OECD’s Programme for International Student Assessment (PISA) data has generally been more revealing as well as more timely—despite being collected internationally.

Andrew Leigh from the ANU has published an interesting paper with a colleague, analysing the impact of individual school performance on literacy and numeracy. But his research had to be confined to Western Australia, which was the only jurisdiction that released school data. Even then, the data were only revealed implicitly in charts. Leigh was obliged to digitise the charts to get the numbers to allow him to do his novel analysis.

Fund the evidence base that we need

So I think there is an opportunity, under the New Federalism banner, to fund the evidence base that we need to compare policy performances across our Federation, and thereby to devise better national policies where national approaches are called for. An important recent initiative in this direction is the allocation of additional funding, as part of a $3.5 billion education package, for a new performance reporting framework for schools. The responsible Minister, the Hon. Julia Gillard, in endorsing the new framework, stated ‘It is my strong view, that lack of transparency both hides failure and helps us ignore it … And lack of transparency prevents us from identifying where greater effort and investment are needed.’

Real evidence is open to scrutiny

This leads directly to the third area that I wanted to talk about: transparency.

Much policy analysis, as anyone in the public service will know, actually occurs behind closed doors. A political need for speed, or defence against opportunistic adversaries, is often behind that. But no evidence is immutable. If it hasn’t been tested, or contested, we can’t really call it ‘evidence’. And keeping analysis behind closed doors misses the opportunity to educate the community about what is at stake in a policy issue, and thereby to make it more accepting of the policy initiative itself.

Transparency ideally means ‘opening the books’ in terms of data, assumptions and methodologies, such that the analysis could be replicated. The wider the impacts of a policy proposal, the wider the consultation should be. Not just with experts, but also with the people who are likely to be affected by the policy, whose reactions and feedback provide insights into the likely impacts and help avoid unintended consequences. Such feedback in itself constitutes a useful form of evidence.

The Commission’s processes are essentially based on maximising feedback. I won’t dwell on this much here, other than to say that, in a range of areas, we’ve learned a great deal through our extensive public consultation processes, particularly in response to draft reports. If you compare the drafts with our final reports you will often see changes for the better: sometimes in our recommendations; sometimes in the arguments and evidence that we finally employ.

Transparency in policy-making helps government too, because it can see how the community reacts to ideas before they are fully formed, enabling it to better anticipate the politics of pursuing different courses of action. So the signs of a greater reliance again on Green Papers by the Australian Government, as advocated by the Regulation Taskforce, are very welcome. For example, the policy development process for addressing global warming clearly benefitted from an elevated public debate after the Green Paper was released.

Evidence-building takes time

Transparency can have its downsides. In particular, it ‘complicates’ and slows down the decision-making process—transparency involves time and effort. That is what appears to have militated against draft reports in a number of the recent policy review exercises. This has been a shame, especially for the major industry policy reviews last year, which contained recommendations with important ramifications for the community and economy.

There is an obvious clash between any government’s acceptance of the need for good evidence and the political ‘need for speed’. But the fact is that detailed research, involving data gathering and the testing of evidence, can’t be done overnight. As already noted, in some cases the necessary data will not be available ‘off the shelf’ and may require a special survey. In other cases, data needed for programme evaluation might only be revealed through pilot studies or trials of the programme itself.

On a number of occasions in the past decade I have been approached about the possibility of the Commission undertaking an attractive policy task, but in an amount of time that I felt was unreasonable for it to be done well, particularly in view of the time people need to make submissions and give us feedback. When the Commission does something, people rightly expect to be able to have a say. As a consequence, those tasks have more often than not ended up going to consultants. And in most cases the results have vindicated my position.

Good evidence requires good people

The fifth area of importance is capability and expertise. You can’t have good evidence, you can’t have good research, without good people. People skilled in quantitative methods and other analysis are especially valuable. It is therefore ironic that we appear to have experienced a decline in the number of people with such skills within the public service at the very time when it has been called upon to provide an evidence-based approach that relies on them. Again, that’s been largely a consequence of budgetary measures over a long period of time. Research tends to be seen as a more dispensable function when governments and bureaucracies are cut back.

Several manifestations of the consequent reduction in capability have struck me. One is the lower calibre of some of the departmental project teams that I have observed trying to do review and evaluation work. Another is what appears to be increased poaching of research staff within the public sector, or at least pleas for secondments.

We are also seeing major new initiatives to train staff. One significant example is the Treasury’s sponsorship of a new programme, to be run by Monash University, to teach economics to non-economists. We have seen a shrinkage of the recruitment pool of economics graduates in recent years and I wonder whether the study of economics may be turning into a niche discipline in our universities.

We’ve also seen a major increase in the contracting of policy-related research outside the public service. A lot of those jobs have gone to business consultants rather than to academics. This contrasts with the experience in the USA, where the academic community seems to be utilised much more by government.

Contracting

Contracting out is by no means a bad thing. It has been happening progressively for decades. But it does seem to be changing in character more recently. The focus seems to be broadening from provision of inputs to policy-making, to preparation of outputs—the whole package. This gained public prominence last year through media reports of the Boston Consulting Group working up an ‘early childhood policy’ and developing a business plan for the ‘global institute’ for carbon sequestration. Also, KPMG seems to have become active in the infrastructure policy area.

Consultants

There are clear benefits to government from using professional consultants: new ideas, talented people, on-time delivery, attractive presentation and, possibly, cost—although some of the payments have been surprisingly large. But there are also some significant risks. Consultants often cut corners. Their reports can be superficial, and more fundamentally, they are typically less accountable than public service advisers for the policy outcomes.

Academics

Whether academics could be drawn on more is a key issue. In an earlier era, the involvement of academics was instrumental in developing the evidentiary and analytical momentum for the first waves of microeconomic reform. Examples from the trade and competition policy arena alone include Max Corden, Richard Snape, Fred Gruen, Peter Lloyd, Bob Gregory, Ross Garnaut, Fred Hilmer, among others. Where are the new academic generation’s equivalents in support of the ‘Third Wave’? Only a few names come to mind of academics making a notable public contribution to policies bearing on human capital development.

Such involvement is of course a two-way street—with both demand and supply sides. The supply side seems to have been diminished over time, partly as promising academic researchers have sought more attractive remuneration elsewhere and partly as their time has been increasingly consumed by their ‘day jobs’. On the demand side, one sometimes hears senior public servants complain that academics can be very hard ‘to do business with’ or that they are too slow, or lack an appreciation of the ‘real world’.

There may be some validity in these perceptions, though I suspect that they may also reflect an unrealistic view of how much time is needed to do good research; and perhaps a lack of planning. Perhaps also a desire for greater ‘predictability’ in the results than many academics would be willing to countenance. As Brian Head from Queensland University has observed: ‘Relatively few research and consulting projects are commissioned without some expectation that the reports may assist in upholding a certain viewpoint.’ As I recall it, Sir Humphrey Appleby’s maxim—akin to Rumpole’s first law of cross-examination—is that ‘one should never commission a study without knowing what the answer will be.’

Independence can be crucial

Evidence is never absolute; never ‘revealed truth’. The choice of methodologies, data, assumptions, etc. can all influence the outcome, and they do. Anyone who did first year stats at university probably read Darrell Huff’s book How to Lie with Statistics, which was an early indication of both the potential and the problems.

Given the unavoidable need for judgement in evaluation, evidence is more likely to be robust, and to be seen to be so, if it is not subjected to influence or barrow-pushing by those involved. Good research is not just about skilled people; it is also about whether they face incentives to deliver a robust product in the public interest.

Some years ago, following a talk that I gave at a gambling conference in Melbourne, an American academic came up to me and said that the Commission’s report was being used extensively in public debate in the States. I expressed surprise, given the extent of homegrown research there. She said ‘yes, but we don’t know what to believe’. That appears to be because research is polarised in that country between that sponsored by community and church groups and that sponsored by the industry. And there is suspicion that ‘he who pays the piper, calls the tune’.

Independence is even more important when dealing with technical research than with opinions. People are better able to judge opinions for themselves, but the average person is naturally mystified by technical research. They look for proxies to help them know whether the results of such research are believable. The status of the researcher (or who is paying for the research) is one such proxy.

Economic modelling

Economic modelling is replete with these sorts of issues. Any model comprises many assumptions and judgements which can significantly influence the results. For example, the Productivity Commission and industry consultants used similar models recently to estimate the economic impacts of reducing tariffs on cars. The Commission found that there would be significant economy-wide gains from maintaining scheduled tariff reductions. The other modellers, using different, and in some cases less conventional, assumptions, projected net losses—with the current tariff rate coincidentally turning out to be ‘optimal’.

In modelling the potential gains to Australia from a mooted Free Trade Agreement with the USA, the Centre for International Economics, in work commissioned by DFAT, obtained a significant positive result, whereas separate work by ACIL Tasman projected negligible gains at best. More recently, modelling of the Mandatory Renewable Energy Target (MRET) in conjunction with an emissions trading scheme found it either to impose substantial additional costs on the economy or to yield substantial benefits, depending on the modeller and the sponsor. COAG’s final decision to implement a 20% target nationally essentially favoured the latter estimates. However, Commission researchers found the sources of gains in that modelling difficult to justify.
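To show in miniature how a single assumption can drive such divergent results, the toy calculation below (not any of the models mentioned above, and with wholly invented figures) approximates the efficiency gain from a tariff cut with a standard deadweight-loss triangle and nets off an assumed one-off adjustment cost. Changing nothing but the assumed import-demand elasticity flips the sign of the estimated net gain.

```python
# Toy illustration (not any of the models cited above; all figures invented)
# of how one assumption can flip a modelled result. The efficiency gain from
# a tariff cut is approximated by a deadweight-loss triangle,
# 0.5 * elasticity * (tariff change)^2 * value of imports, and an assumed
# one-off adjustment cost is netted off.

def net_gain(import_elasticity, tariff_cut, import_value_m, adjustment_cost_m):
    efficiency_gain = 0.5 * import_elasticity * tariff_cut ** 2 * import_value_m
    return efficiency_gain - adjustment_cost_m

IMPORTS_M = 20_000        # $m of affected imports (assumed)
TARIFF_CUT = 0.05         # a 5 percentage point tariff reduction (assumed)
ADJUSTMENT_COST_M = 40    # assumed one-off transition cost, $m

for elasticity in (0.5, 2.0):  # the contested assumption
    result = net_gain(elasticity, TARIFF_CUT, IMPORTS_M, ADJUSTMENT_COST_M)
    print(f"Import-demand elasticity {elasticity}: net gain {result:+.1f} $m")
```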

A ‘receptive’ policy-making environment is fundamental

We come to the final and most important ingredient on my list. Even the best evidence is of little value if it’s ignored or not available when it is needed. An evidence-based approach requires a policy-making process that is receptive to evidence; a process that begins with a question rather than an answer, and that has institutions to support such inquiry.

As has been found through the work of the Office of Regulation Review, and now the Office of Best Practice Regulation, often we see the reverse, especially for more significant proposals. The joke about ‘policy-based evidence’ has not been made in abstract—we have long observed such an approach in operation through the lens of regulation-making in Australia.

Ideally we need systems that are open to evidence at each stage of the policy development ‘cycle’: from the outset, when an issue or problem is identified for policy attention, through the development of the most appropriate response, to the subsequent evaluation of its effectiveness.

The ongoing struggle to achieve effective use of regulation assessment processes within governments tells us how challenging that can be to implement. These arrangements require that significant regulatory proposals undergo a sequence of analytical steps designed firstly to clarify the nature of the policy problem and why government action is called for, and then to assess the relative merits of different options to demonstrate that the proposed regulation is likely to yield the highest (net) benefits to the community. These steps simply amount to what is widely accepted as ‘good process.’ That their documentation in a Regulation Impact Statement has proven so difficult to achieve, at least to a satisfactory standard, is best explained by a reluctance or inability to follow good process in the first place.

Evidence-based approach undoubtedly makes life harder for policy-makers

I admit that an evidence-based approach undoubtedly makes life harder for policy-makers and for politicians. Lord Keynes, who seems to be well and truly back in vogue, said in the 1930s:

There is nothing a Government hates more than to be well-informed; for it makes the process of arriving at decisions much more complicated and difficult.

I think we can see what he meant. But against this are the undoubted political benefits that come from avoiding policy failures or unintended ‘collateral damage’ that can rebound on a Government, and from enhancing the credibility of reformist initiatives.

Guided reading and extracts from:  Challenges of evidence-based policy-making, Gary Banks AO, http://bit.ly/1rxjmvw
