Peter Achterstraat: I really want to say how thrilled we are to have you all in the room here too—to maybe hear some of the experiences but also take some ideas away and maybe come up with some more of your own ideas at a later stage.
Now we've got the benefit of hearing practitioner experience of regulatory experimentation, and we're very fortunate to have with us Michael Brennan, the Chair of the Commonwealth Productivity Commission.
Everyone knows Michael from reading the Financial Review etc., so I won't go through his CV. But suffice it to say that internationally the Australian Productivity Commission in Canberra is held up as best practice around the world. I was fortunate to sit in on a couple of Zoom meetings with the OECD and others, and the way that fellow practitioners in other countries ask questions of, and refer to, the work done by the Federal Productivity Commission made me really proud. And it's a real privilege for us to have Michael here with us.
I'll hand over to you, Michael, if you can introduce the fellow panel members and take it from there.
Michael Brennan: Thank you. Thank you very much, Peter. Uh, hopefully you can hear me—apologies for those who can't quite see me—but we'll kind of, you know, we'll lean around the place.
Welcome to the second session, where we're going to take a practical look at some real-life examples of regulatory experimentation. And we've got a star-studded lineup here—I'm going to ask them to introduce themselves soon. But to preview that, that's Alex Kennedy, who's going to talk us through a live case study on regulatory experimentation from his experience in New South Wales Liquor and Gaming.
Also—well, I'm moving around here—but Rose Webb, who is an experienced regulator (sorry about this, too close—how's that?), an experienced regulator but also now the Chair of the ANZSOG Regulator Community of Practice.
We have Dave Trudinger from the Behavioural Insights Unit within New South Wales Government, and also Ravi Dattapowell, who is with the Behavioural Insights Team, which is now a global enterprise having spun out of the UK Government some years ago.
I loved in the introduction this morning when Peter was talking through the paper—he focused in on three levels: tools, institutions, and culture. And it seems that they operate in kind of descending order of concreteness. But each of those three is very important. If you had one without the other two, or two without the third, the thing would fall apart. It's all of those three things working together—thank you very much—that create an environment conducive to regulatory experimentation.
So I'm going to ask each of the panelists just to quickly introduce themselves and give us a couple of upfront thoughts, and then we'll move into Alex's case study and some further discussion.
Why don't I start at the far end—Rose, you go first.
Rose Webb: I believe this is working. Um, so I'm going to talk from the point of view of the regulators or practitioners on the ground, but hopefully my remarks are equally useful for policy makers and the people who actually make regulation.
I think my contention would be that regulators have been experimenting for as long as they've been doing their job, because they have a lot of discretion in how they apply the law that they're responsible for implementing. And often part of that decision-making process about implementing the law in particular ways evolves over time through various experiments that they carry out.
A couple of examples that occurred to me were the use of instruments like enforceable undertakings, where regulators do have the opportunity to write pretty flexible rules that apply to a particular agent that they're regulating in a particular case.
And one example way back from the 1990s—just to show this has been around for a long time—is the law around compliance programs. Back in the 1990s, consumer and other regulators had the power to accept enforceable undertakings. They started writing into them that one of the things a company found to have contravened in some way should do is institute a compliance program in its business.
Over the course of the 1990s, the thinking about what that compliance program should look like got more sophisticated. Regulators started writing in things like: you have to have an audit; there were requirements about what the training would be. And during the course of the 1990s, the courts realized that this was a very powerful instrument to use to encourage compliance in the future.
And following a number of Australian Law Reform Commission reports and then policy decisions, by 2001 it had actually turned up in the competition and consumer law that the court can order a compliance program and that it should have these features—and those features were the ones that had been developed through that experimentation during the '90s.
So I think there's an example of where policymakers took the on-the-ground experimentation from the regulators and put it into practice.
Another example is where regulators use their exemption power to exempt people. Again, pretty much they're experimenting in some ways because they're sort of saying: we're having an experiment here that the law shouldn't apply to these people in these particular circumstances.
And I know when I was at the ACCC, we had the third line forcing law, which said, you know, if you tell someone that you will only sell a product to them as long as they also buy something else—you know, I'll sell you my beef patties as long as you buy my special sauce—that was an absolute prohibition in the Australian law. And the only way people could get around it was by notifying that to the ACCC, and the ACCC had a discretion to lift the notification.
We never lifted any in the franchising sector. We got hundreds and hundreds of them, and admin clerks put them on the system, but nothing else happened to them. And so it was a blanket exemption, really.
And eventually, after a lot of persuading of the policymakers, we were able to change the law so that people only had to notify if their third line forcing arrangement would lessen competition.
Again, another example—and I'll probably just call out to policymakers: if you find that the regulators are actually giving blanket exemptions, the law is probably not working so well, and that's an experiment that shows it's time for a change.
So I think there's lots of opportunities to think about the link between the on-the-ground experience of the regulators and the policymakers, and look at some of those as being experiments as well.
Michael Brennan: David, why don't we go to you—
David Trudinger: Yeah, thanks Michael. Um, so I work in the New South Wales Behavioural Insights Unit, and we support departments, clusters, regulators in designing experiments to test whether interventions work.
And look, one of the things—as a behavioural scientist—one of the things we're really interested in is how context shapes human behavior. And obviously COVID, as you heard earlier, was such a wonderful space to see how context was shaping behavior.
One of the things I'm really interested in here in this discussion is perhaps not just the lessons we'll learn and take forward, but maybe also the lessons we're going to forget. Because one of the things you learn about in behavioural science is what we call heuristics, or rules of thumb. And one of the really well-known ones is the availability heuristic, which is: you tend to put a lot of weight on things that have an immediate impact or are easy to recall.
And one of the concerns I've got—and I know this is shared in the room, and you can feel it already—is the things we're starting to forget about: the reform opportunities that we've had, and still have, during and since COVID. And it's almost like we need to set our clocks.
And this paper actually is absolutely fantastic in terms of setting our clocks to thinking about: in five years' time, are we still going to be having the same innovative approach and the same aspiration for reform that we did a year ago, two years ago—and that we really need to take forward now.
So really, this paper is a really excellent guiding platform—not just for government but also for all the stakeholders involved in regulation—to think about how we sustain that behavior change.
Michael Brennan: Thanks Dave. Ravi?
Ravi Dattapowell: Thanks. Um, so I'm from the Behavioural Insights Team, which is a different organization to Dave's. We're a global firm—we started out in the UK, as Michael said, inside the government, similar to what Dave does. But we obviously work now globally with governments and organizations around the world.
And I think one thing that really stood out to me from all the things we discussed today—actually, I think there was a quote on one of the slides earlier today—which is really that if you want to encourage experimentation, you need to start celebrating failure. Well, not celebrating it, but certainly accepting and recognizing failure.
The whole point of an experiment is: you don't know if it's going to work, right? I mean, if you knew it was going to work, you'd just do the thing. But you don't. So that's why you do an experiment.
And even a failed experiment is an opportunity to learn something and to think about and be more accepting of: if an experiment doesn't work, well, what can we learn from that? What do we take away from that?
And to sort of, you know, really build that culture of experimentation, I think we need to start thinking about: what do we do when the experiment fails, and how do we react to that?
Because I think that's probably going to be the thing that will really help to entrench the culture going forward—is how we deal with situations when things don't work out.
Because I think COVID kind of gave us the opportunity where regulators made some quick decisions, and I think people were forgiving to sort of say: look, you're making decisions quickly, that's fine—please just fix this one thing that you didn't think of.
But how do we keep that spirit going forward? And I think part of that has to be from both governments, the public, from the private sector—everyone being a little bit more tolerant and accepting of failing and things going wrong when regulators experiment.
Thanks for having me.
Michael Brennan: Alex, I'll get you to introduce yourself—but then maybe go straight into talking a bit about the regulatory sandbox trial that you've got underway at the moment in relation to gaming equipment. Tell us a bit about the trial and how you thought about designing it.
Alex Kennedy: Yeah, sure. My name is Alex Kennedy. I'm the manager of the policy team responsible for casino, gaming machines, and registered clubs policy. And one of the things that we obviously look at is that, under our gaming legislation, we have a responsibility not only for the balanced development of the industry, but also to prevent and minimise gambling harm.
So we have a little bit more of a formal experimentation method through our regulatory sandbox. And the reason for that is that obviously the gaming industry is highly regulated, and there are a lot of provisions in relation to how you load credits onto gaming machines and how you obtain credits off gaming machines. A lot of them have been around for a very long time, and they are primarily geared towards a cash-based economy, and they are also designed to achieve the aim of minimizing gambling harm.
So when we looked at the framework, one of the key considerations we thought about was that if we were going to experiment with digital payments for gaming machines, what we didn't want to see was effectively just a digitization of the existing regulations. What we wanted to look for was something completely new—whether we could build a new regulatory framework from the ground up to respond to these dual priorities.
And so we've had set up for a little while a regulatory sandbox, which is designed for anyone within the gaming industry to bring innovative technology to us that doesn't fit within our existing regulatory framework. And it allows us to test these products in a live environment in venues with real patrons under the supervision of an independent researcher. And the idea behind that is that the independent researcher will evaluate performance of these products, provide us with the outcomes from them, and then we can start to build a regulatory framework around some of those outcomes.
And so, talking to industry, one of the directions they gave us first of all for experimentation in our sandbox was for cashless gaming. We know the economy is going cashless. We know that COVID has accelerated that. And that really—the acceleration of that—is really becoming almost an existential question for our industry, as cash is almost eliminated from venues.
So we viewed this as a big opportunity—not only for industry but also an opportunity on a harm minimisation side and also on the side of improving some of our anti-money laundering procedures as well, because we know the risks that cash poses from an anti-money laundering perspective.
So very early on in the piece, we engaged with industry and with the gaming machine manufacturers who are looking at these cashless technologies, to talk to them about the fact that what we wanted to see was not merely a digitization of the existing framework that we've got. If they had new features that would achieve the aims of harm minimization and improving anti-money laundering controls, then we wanted to have a look at some of those so that we could see what a regulatory framework would look like in the future.
And so what that's ended up with is a situation where really we're looking at it almost from a technology-neutral standpoint and looking more at the features that each of these individual products has to achieve those goals, so that we can then make an assessment of their performance in a real-life setting to help us build that regulatory framework.
Moderator: Ravi, your organization has overseen a lot of regulatory trials. Does that sound like a typical process to you? And what sort of things do you think through when you're designing a regulatory trial?
Ravi Dattapowell: Yeah, absolutely. I mean, I think it's really pleasing to see, you know, the thinking through of the aims and objectives and really having a clear evaluation process. I think those are the sort of key things I would flag.
First off, what kind of evaluation are you doing and what is the design of it? Is it just a quick pilot? Is it more of a quasi-experimental design, or more like a randomized controlled trial? What is the methodology you're looking for?
And then thinking about exactly what the sample size is and how big it is—is it, you know, sufficient to run two or three or four different variations?
Probably one of the most important ones for us is: what exactly is the outcome you're targeting? And this might seem really obvious, but even when we work with departments or kind of within one organization, they might have different interpretations.
I remember there's one project in the UK which my colleagues worked on—they always talk about this one—and it was with the Department of Energy at the time. The first people they spoke to said, "Look, this project is about, you know, reducing energy usage—we're saving people money." And then someone slightly different in the department said, "Oh, well actually it's about helping those on low incomes save energy." And then, "Well actually it's about helping those on low incomes heat their homes adequately." And then, "Oh no, actually it's about helping people heat their homes efficiently."
So, you know, all of those are very reasonable outcomes. But if you probe into them, some of them are actually directly contradictory with each other, right? If you want people to heat their homes adequately, that presumably means they're not doing it adequately now, so they'd spend more—whereas if the goal is saving energy, they'd spend less.
But the point is, all of those are fine. It's important to be really clear up front on exactly what it is you're measuring and what your criteria for success are, because it's very easy to run a trial, get a whole bunch of data at the end, and not quite know what to do with it.
And so I think the work that is being done here is really important, because it's clear that there's a framework in place, that there are some clear outcome measures being looked at, and, you know, clear measures of success—so minimising harm, and also increasing capabilities for anti-money laundering. Having that up front is really, really important.
So I think those sort of key things are what we kind of look for.
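[Editor's note: as an illustrative aside, a minimal sketch of the kind of back-of-envelope sample-size check Ravi alludes to when asking whether a trial is big enough to support several variations. The baseline rate, target effect, number of variants, and the use of Python's statsmodels library are assumptions for illustration, not details from the panel.]

```python
# Rough power calculation for a multi-arm trial (illustrative assumptions only).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.20   # hypothetical outcome rate under business as usual
target_rate = 0.25     # hypothetical rate we would like to be able to detect
effect_size = proportion_effectsize(target_rate, baseline_rate)  # Cohen's h

n_variants = 3                 # variations tested against the control
alpha = 0.05 / n_variants      # simple Bonferroni adjustment for multiple comparisons
power = 0.80                   # conventional 80% power

n_per_arm = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power,
    ratio=1.0, alternative="two-sided",
)
print(f"Approximate participants needed per arm: {n_per_arm:.0f}")
```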
Moderator: So Alex, just on that—how did you design the evaluation framework for the trial that you've got?
Alex Kennedy: I think the main thing we did early on—once we started looking at the products that were coming in and just how different they were, because everyone came at these problems from very different directions, as you'd expect—was to take the decision that we're not going to do one big trial. We're actually going to do individual trials for each of these products.
And part of that is not only the products themselves but also where the trials were going to occur. The gaming industry is quite diverse, and so if you're performing trials in a live environment, then you've got multiple different circumstances.
You can have a trial, as we're going to have, with over 100 machines in a very large club. You can have a trial in a very small premises with under 30 machines. You can have a trial in a venue that has a really extensive loyalty program, so you're going to have more frequent patrons who gamble. And trials in venues where they rely much more on casual gamers and don't have a big loyalty program.
So knowing that there were a lot of different circumstances the trials could occur in, and then also knowing that the products themselves were quite different, we decided that we'd run the trials individually for the products themselves so that we could tailor them to the circumstances where they were going to be placed.
And so that we could also have a look at, as I said, the features that the products had rather than the products holistically—because with how different they were, you wouldn't be able to compare the outcomes from each trial. You're really looking at apples and oranges between the products, the features they have, and what they're doing.
And so for us, it was more about evaluating down to some of the particular features they have and comparing the harm minimization measures from one product to another in particular to say: which one of these is more effective, which one isn't effective.
And so some of that is definitely about collecting the quantitative data that we can get from turnover of gaming machines, loading of credits. But a lot of that's also going to come from the qualitative data that we get from surveys—from not only the participants in the trial but also staff members and their observations as well.
So setting up the evaluation for the trial has meant really narrowing in again on: what are we trying to get out of these trials? Which parts of this framework do we want to implement in the future? Which is looking at it at a much more narrow level than just: what is this product doing and how is it performing?
Moderator: As policy makers we are always exhorting people: evaluate, evaluate, evaluate. And uh, it strikes me that, um, that's a worthy and noble thing, but it can often seem perhaps like red tape to those on the ground—right? Collect data, ask yourself the right question, etc.
Dave, I might bring you in here. How do you ensure that your evaluation framework is practical, that it's fit for purpose, that it's commensurate with what it is you're trying to do?
Dave: Yeah, great question. And Ravi picked out some of the key features you might want to think about in designing an effective evaluation.
I just want to draw out one more thing from that that's probably really important, that we haven't talked a lot about this morning, and that I want to put on your radar—and that's really thinking through the voice of the customer. And when I say customer, I mean not only the person on the street but also business stakeholders involved in the regulatory environment.
So the voice of the customer and the experience of the customer in that regulatory context—it's a really big challenge to think through how you incorporate the experience of your customers in designing and supporting innovation, and how you listen to them so you can hear really clearly where the regulatory burden is falling in the day-to-day practice of business.
So Sally this morning gave us some really clear examples of where—if you're a customer of business regulation in a crisis situation—the biggest rub points or friction points are. And one of our challenges in designing evaluation in government is to think through: well, how do we incorporate that insight, knowledge, and data into our experimentation?
One of the things that my team have done, working really closely with our colleagues in Better Regulation Division in DCS, is we've designed what we call a sludge audit. So you've heard of nudge—nudge is that thing about where you locate the chocolates in the aisle so the kids buy the chocolate or don't buy the chocolate. You change the location of the chocolates, you increase or decrease the purchase of those chocolates.
Sludge is like the word sounds—all those excessive frictions that get in the way of a good customer experience. And one of the things that we've been doing is applying that approach—so thinking through customer experience and the journey that a customer might interact with in terms of regulation. So whether that be application for a home building license or a trade license or the experience getting permission for out-of-hours service—thinking through what are the different friction points that your customers are experiencing.
And then getting some data on that—so looking at time, but also looking at how it relates to the New South Wales Government customer commitments, which are all about, you know, sensible things like acting with empathy, ensuring people have a good experience, and so on.
So putting some metrics on that—because if we don't measure it, it's very hard to talk about it. And one of the advantages of measuring it, we've found with these sludge audits, is that you can have a very reasonable conversation about: is that sludge, is that friction, serving a purpose at this point in the journey of the customer?
And that then enables you to make that assessment about: is it fit for purpose, is it meeting that need?
So really, we're really strong advocates for using those systematic methods to identify where the customer experience is falling short in terms of the objectives you want to achieve.
So two things just to reiterate:
- Make sure we include the voice of the customer and the experience of the customer in regulation and regulatory reform.
- The second thing is—Michael, you were saying to me before at the table, and I thought it was a really good point—that with regulatory reform, sometimes you can think about it in terms of: where are the immediate barriers or blocks or lags, the bad regulation to get rid of. But you can also think about it in terms of the untapped potential. So if we connected more, or if we freed up a space here, there's a lot of productivity gain to be made.
And I think one of those areas that's really clear—and this question really highlights it to me—is in data and data sharing. Because there's probably a lot of regulatory or behavioral barriers in the sharing or not sharing of data across government, but also between government and industry, that we could probably be a lot smarter on. And if we were smarter on it, we'd probably create opportunities to see where the kind of experiments we're talking about can have an impact.
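[Editor's note: as an illustrative aside, a minimal sketch of how the "sludge" metrics Dave describes might be tallied across a customer journey and compared before and after a reform. The journey steps, weights, and composite score are hypothetical assumptions for illustration, not the Behavioural Insights Unit's actual audit method.]

```python
# Toy "sludge audit" tally for a hypothetical licence-application journey.
from dataclasses import dataclass

@dataclass
class JourneyStep:
    name: str
    minutes_taken: float      # average customer time spent on the step
    documents_required: int   # evidence items the customer must supply
    serves_a_purpose: bool    # judgment call: is this friction justified?

def sludge_score(steps):
    """Crude composite: total minutes plus a fixed penalty per document demanded."""
    return sum(s.minutes_taken + 5 * s.documents_required for s in steps)

before = [
    JourneyStep("Create account", 10, 1, True),
    JourneyStep("Re-enter identity details already held", 15, 2, False),
    JourneyStep("Upload trade qualifications", 20, 3, True),
    JourneyStep("Phone service centre to confirm receipt", 25, 0, False),
]

# A simple "after" scenario: remove the steps judged not to serve a purpose.
after = [s for s in before if s.serves_a_purpose]

print(f"Sludge score before reform: {sludge_score(before):.0f}")
print(f"Sludge score after reform:  {sludge_score(after):.0f}")
```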
Moderator: Thanks Dave. We're going to come to culture in a second and bring Rose in. But before we do that—Ravi, just on data in the context of these regulatory trials: what is the importance of data? That's a bit of a Dorothy Dixer, but how should people be thinking about data collection up front?
Ravi: It's a really good question. And I think that when designing a regulatory experiment, really the design of the trial and the data collection should go hand in hand. And actually, maybe even slightly challenge the premise of the question to the extent that one flows from the other—I would say you really want your trial design to follow your data collection, rather than your data collection to follow your trial design.
And what I mean by that is—we've worked with lots and lots of governments, and you know, this is even true in private sector organizations—one thing that I've found to be consistently true is that data architecture—and this is the nicest way I can say it—is weird at the best of times. Right? Like governments and businesses just always seem to have odd outcome measures.
So if you look back at a lot of our old trials—we did a lot of work in taxation. And kind of, if you've heard about, you know, "nine out of ten people pay their tax on time"—that was kind of one of our original trials that we did.
And the outcome measures for a lot of those trials are weird things like: payments made by a proportion paying by 13 days or 26 weeks or something like that. And the reason is, when we go to departments and say, "All right, you want to get people to pay their taxes more quickly—cool. Can you tell us what the average time to take payment is?" They're like, "Well, we can't tell you how quickly people pay. We just know what proportion pay by this date." Or they'll be like, "We want to increase the number of payments or the amount that people pay." It's like, "Well, can you tell us how much?" It's like, "Well, no. We can tell you they made a full payment or a part payment or no payment, but I can't tell you how much they've actually paid."
Right? So there'll be all these sorts of weird things going on. And so it is often far easier to simply design around these sorts of outcome measures that you already have.
For a couple of reasons:
- The first is, you know, as we all know, it's difficult enough to get regulatory experiments off the ground—to get these sorts of things happening. And if you can work within existing systems, it's a lot easier. That was one of the ways we originally worked—and still work—which is to try and build within existing systems. Because if you're facing resistance, if you can just say to people, "Look, once we implement this thing, you don't have to tell me anything different. Just collect the data you already collect and give it to us, and we can analyze it"—that reduces the burden on people quite significantly.
- But the second benefit of doing this is that if it is a metric and it is something that they are trying to shift, you can bet it is on some deputy secretary's or middle manager's or the secretary's or departmental KPI list. Right? Someone somehow has it on their KPIs and they are taking notice. And if you can shift that objective, all of a sudden you're able to go to them and say, "Hey, you know that thing that drives all of your departmental behavior and is on your management reports and goes to your board and all that kind of stuff? We just shifted that by like five percent," or whatever it is.
So finding ways to design your trials around existing data processes, I think, is probably the most important thing to think about when you're designing a trial.
So as I said, if you're looking to design an experiment, think about what data you have and then build off that—rather than trying to design perfect experiments and then worry about data collection later.
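[Editor's note: as an illustrative aside, a minimal sketch of Ravi's point about designing the analysis around an outcome measure the system already produces—here a hypothetical binary flag such as "paid by the cut-off date"—rather than a bespoke continuous measure the system cannot extract. The counts and the two-proportion z-test are illustrative assumptions, not results from any of the trials mentioned.]

```python
# Analysing a two-arm trial on an existing binary administrative metric
# ("paid by the cut-off"), using hypothetical counts.
from statsmodels.stats.proportion import proportions_ztest

paid_by_cutoff = [1210, 1345]   # [control letter, trial letter]
contacted = [2000, 2000]        # recipients in each arm

z_stat, p_value = proportions_ztest(count=paid_by_cutoff, nobs=contacted)
print(f"Control: {paid_by_cutoff[0] / contacted[0]:.1%} paid by cut-off")
print(f"Trial:   {paid_by_cutoff[1] / contacted[1]:.1%} paid by cut-off")
print(f"Two-sided z-test p-value: {p_value:.4f}")
```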
Moderator: Thanks Ravi. Rose, on the outcomes—I mean, we want to foster a bit of regulatory experimentation. We've seen some of the benefits—the $3.1 billion over the decade that Peter talked about today. But, I mean, can we point to areas where experimentation has resulted in an improved policy or a better outcome?
Rose: Um, yes. I think, like, for so long as there's been regulation, there's been this idea about strategic regulation and outcomes-focused regulation. But I guess what we're talking about now is probably morphing more into what the OECD is calling agile regulation. And this idea that you have your outcomes, but you also take the ideas of agile project development and apply them to regulation.
And that's a sort of perfect environment in which you can use things like data, quick evaluation cycles, and things like that. And so I think as people are sort of thinking about the opportunities to do that—as Dave said—the sort of customer focus, sometimes regulators struggle a little bit with: are they our customers or are they regulated entities? And can they both be the same?
But definitely making sure you're getting those feedback loops happening quickly from all the stakeholders—the people that you're regulating, the people for whom you are regulating—we are all part of the institutional infrastructure, if you like.
Rose: And as I keep reiterating, the people who are actually doing the regulation on the ground should be talking to the policy makers all the time. And I think that was one of the big advantages of the COVID situation—some of those barriers just had to melt away. I certainly saw it from my perspective: whereas I'd probably spent quite a few years hammering away at the door of the Health Department in New South Wales to get various projects underway, suddenly we were best friends, because the SafeWork inspectors and the Fair Trading inspectors were out there enforcing the public health orders and we had to turn things around really quickly. So the crisis brought out the opportunity there.
There are a couple of points that can be a barrier. One is the ability to share data and information—people, I think, can often be a barrier there. And there are dreadful provisions in pretty much every regulation that say anything you find during the course of your regulatory activity you can't share with others, and that often prevents good experiments being done.
I had the experience of signing an MOU with myself so that one part of the agency could share data with another, and then we could use that data for some good work we were doing.
And I think another example I'll call out—and my colleagues who actually did this are here from BRD—is when we were introducing the Design and Building Practitioners Act. We didn't wait until we had the whole Act introduced and implemented and the buildings built in New South Wales in 10 years' time to work out then whether it was working. We started some evaluation cycles quite early on, even before the legislation started. So we were evaluating: do our people who are giving out the licenses—our staff members—actually understand how the law works? And we found a few gaps there that showed we needed to do some more internal training.
But we also did quite a lot of stakeholder consultation to see if people on the ground were actually understanding what the legislation was meant to do. And I think that helped us with adjusting some of the regulation and the guidance that we gave way before we would have actually been able to work out whether this was going to make for safer buildings or not.
So there were a lot of opportunities. I just make a big plug for the OECD's guidance on agile regulation, because I do think it's got some really great points to help with experiments.
Moderator: Thanks, Rose. Ravi mentioned it earlier, and Peter—you had a slide with the quote from—was it Roger Jowell in the UK?—about failure.
So experimentation has to be open to failure, and it's got to be open to surprise, right? We might have a hypothesis, but it may or may not be true.
David, in your experience, can you think of instances where an experiment has yielded a bit of a surprise? Are there particular instances that you think of that you thought would work but didn’t, for example?
David: Yeah, look, I guess there's a general point there really, isn't there? If you're not surprised, you're probably not looking at your data or listening to your customers enough, right? And we really have to do that listening piece, because there's always something going on that you need to explore further.
Maybe one example—with the sludge audit that I mentioned before. So we worked with our colleagues to look at where the frictions were in a license application process. And the expectation that we all had was that most customers—most of the people applying for their license—would need only a very, very limited amount of support in their application process; the share needing real support was estimated at five percent.
We ran through the sludge audit process, had a clear assessment of what was going on, and it was clear that actually it was more like 20 percent of customers. So 20 percent of the people who are experiencing this regulation were being impacted negatively.
So that was really helpful for us, because we were then able to make some recommendations about change, which was then able to do two things:
- Vastly improve the customer experience by getting rid of the things that people were constantly calling the service centre to get support for.
- Vastly reduce the cost to government, because all of that additional checking in about what was going on and checking your documents was adding to time in call centres and so on.
So just a really neat recent example where we had an expectation about how a regulatory burden was going to fall on a particular cohort, and we were able to assess through a pretty robust process that elicited data for us, and then able to make change that had an impact.
So a really positive surprise for us, but something that we were then able to act on. And I think what we try to do with all the experimentation that we follow and the experiments that we design is to create those environments where you can hear that data, listen to it, understand the customer voice in that, and then take steps to mitigate.
Moderator: Thanks Dave. Let's go to culture. As we said at the outset, there's tools, there's institutions, there's culture—as per Peter's list. We can develop the tools. We are all part of the institutional infrastructure, if you like…
Moderator: Uh, Rose—what about culture? It's the great intangible. What can we do to encourage a culture of experimentation?
Rose: I think it needs to happen at various levels. And probably one of the most challenging levels is the political class, because I think they like the idea of bringing in some legislation and it is what it is—and not so much the idea that we're not sure what the best legislation will look like and we need to experiment here.
So I do think getting buy-in from your political masters is important. But I think you shouldn't give up just because you are stuck with whatever regulation or legislation you have. If you gave up at that point and said, "I'm stuck with this black letter law," you wouldn't be able to get anywhere.
So I think it's about recognizing the opportunities to honour what the Parliament has passed but still put it into effect in different ways. And that's where regulators—because they have a lot of discretion—have the opportunity to be quite thoughtful about what they do.
They're always going to have limited resources, so they're always going to be prioritizing. And so making sure that those decisions that you're making about prioritization are based on an idea about: let's have an experimental idea about this. We'll set our priorities to start with, but we won't keep them stuck as they always were.
I think once upon a time regulators may have had a reputation for being quite rigid. And certainly some cohorts—maybe people who've got a strong police background—can be a bit hard to shift culturally, because the easiest way for them to operate is to take a pretty black letter law approach and not think about outcomes so much.
But even police forces today are becoming much more strategic and much more focused on what they're doing. And I would definitely say that people who are thinking about why they are regulating and what they want to achieve do have quite an open mind.
And, you know, it always used to stun me when I'd go and talk to someone on the front line dealing with piles of license applications—like the ones Dave talked about—and they would have fascinating insights about what the actual people they were dealing with—the customers and the people they were regulating—were grappling with. Just milking that opportunity and showing that you're open to those suggestions I think is really helpful as well.
But you know, it doesn't always work. We've talked about failures. And sometimes you're in a position where taking a bit of a black and white, non-experimental approach to things is the only way to push some reform through—and you've just got to live with that as well.
Moderator: The police are an interesting example, because in one sense they've got a very significant armory of rules and regulations they can enforce—but it's actually all judgment on the ground, isn't it, whether it's maintaining public order or whatever it is.
We talked a bit about failure—and how do you not only tolerate failure but even celebrate failure. Do you have any tips on that, Rose?
Rose: Yes, I think—I always remember when I worked at the ACCC and Rod Sims would get very annoyed if our success rate in court was more than 90%, because it showed we hadn't been pushing the envelope enough on the cases we were taking. Though I knew if we went down to about 55%, he wasn't so excited either. But that was okay.
But I think that whole idea—when you're a regulator, like if it's within the confines of the law, it's sort of straightforward and you don't need to deal with it. The tricky matters are the ones that are in the grey areas, where maybe you're pushing the envelope a little bit on the interpretation of the law—not beyond what it should be, but you're sort of taking it to its furthest extent.
And if you fail on some of those matters—and, you know, reprobation from the High Court of Australia is the sort of most obvious type of failure—you've still got to learn to live with it and say, well, it was worth taking. We knew what we were doing. We had a—you know, it was an experiment. We took this case. We showed that either the law wasn't what we thought it meant and maybe should be changed, or that we were, you know, under a misapprehension about what Parliament had wanted here.
So always making sure that people didn't feel like they had, you know, somehow failed—notwithstanding what the media is saying, or maybe even what the Minister is saying—and that, you know, you were backing the decisions they were making as regulators.
And as you said, Michael, people—you know, policemen on the front line—are always making decisions. And you need to have that sort of idea that you're backing them in, even if there is a failure.
Obviously you need to have processes and practices in place to make sure people aren't going rogue. But I think if—and if the failure is due to that—well then that's a different situation. But if it's just due to someone, you know, doing their best shot at interpreting the law or applying the law in a particular circumstance and it hasn't come off the way it should have, I think you've got to make sure you support people.
Moderator: In your case study, Alex, have you thought much about what failure would look like? I suppose you're taking quite small, modest steps here—but yeah, have you had failures as yet, or is it too early?
Alex Kennedy: It's too early, but we are open to it. I mean, one of the early things we did when we were talking about some of the goals and aims we had in relation to harm minimization, anti-money laundering, was we did issue some guidelines that said: here are some of the features or characteristics which, based on the evidence that we know of, we think might address some of these issues—and we would like to see them trialled if you want to incorporate them into your products.
But the whole purpose of putting them through the trial was in the knowledge that they may not work, or that we may see—from some of the other products that have approached it in a different way—something that works better than what we'd indicated in those guidelines.
And so that's where, from the outset, we've tried to build in—as Rose said—a bit of an acceptance that things may fail, in terms of keeping an open mind: this is what we think, based on the research, is going to work. But we're conducting these trials for a reason—because we want to know the answers. And if they're not what we expect them to be, then better we know now, before we set this framework up, than after we build it and then suddenly discover it hasn't worked.
Moderator: We talked about the federation earlier too—but if, say, it was a failure, I mean, would it be easy enough to go out to other jurisdictions and say, yeah, we tried this, didn’t really work? Would that be a hard thing to do?
Alex Kennedy: Look, it would be. But I think it's also a matter of—you know, the benefit of cashless gaming for us is that it's not going away. Either way, industry is going to keep innovating on this into the future. If it turns out that the products that we're seeing at the moment aren't quite working the way we'd like, then I have no doubt that industry will figure out another way.
And that's going to be part of the open discussion we have as we start to bring that data together towards the end, which is: what does this all look like at a holistic level, having looked at the various products that we've got, the goals that we have as well, and what is that then going to look like to build that framework into the future?
The other states and territories—you know, talking to their gaming regulators—they aren't seeing a lot of cashless gaming applications at the moment. That could change in a heartbeat as well, depending on the approach of their gaming regulators, industry, and their jurisdictions as well.
So it's also a matter of keeping an open mind as to what evidence you're seeing from those other jurisdictions. If they suddenly come out with something as well, then of course you have to take that into consideration.
Moderator: All right, we might go to the audience. Um, who has got a question for one or all of our panelists? Why don't we—we've probably got the roving mic, so I think, yeah, why don't we go down here.
Audience Member (Wes Lambert): Hello, Wes Lambert, and I'm here today on behalf of COSBOA. I very much appreciate that you went into businesses to do the testing and the experimentation. But the question is: did you focus on larger businesses that have lots of transactions and lots of customers and lots of interactions, or did you go into smaller businesses that may have fewer transactions or less data—but which represent 87 percent of all businesses in New South Wales?
Moderator: Is this for Alex specifically or on that trial?
Wes Lambert: No, no, no—any of the trials. The experimenting with regulations—it's not specific to gaming. It's any of the experimentation and how it would apply when a regulation is put in place for large businesses—because it seems to affect a large number of those businesses—yet that regulation is disastrous for small business. Is that taken into account?
Dave: Do you want to go first, or—I'll offer something at a general level. What I can say on that issue of designing appropriate experiments is that it's really about thinking through, as I said before, the different customers in the cohort you're trying to have an impact on.
So I think it's a judgment call that needs to be made in every set of circumstances about exactly the trade-off you're talking about there in terms of the different cohorts, because there are obviously similar trade-offs with the different compositions of regulated entities you want to be conducting your experiment in.
The other point that's probably really important as well—just to pick up and go on from there in terms of supporting innovation and experimentation—relates to something I talked about before, the availability heuristic. The other thing is that people often put a lot more weight on acts that have an impact than on acts of omission.
So we tend to think about experimentation in terms of: well, I don't want to do this because this could have this disastrous negative effect—rather than thinking also: if I do nothing, there's going to be this continued disastrous effect.
So sometimes we've weighted a little bit towards the: well, we won't take action because there could be a negative consequence—instead of really thinking through what's the consequence of not taking action. And that probably applies in the example you've raised there in terms of really thinking through which cohorts are going to be impacted in an experimental context.
Rose: I was just going to add that it raises a very good point about—even though we're all saying you need to sort of co-design these experiments and engage with all the stakeholders—often it can be quite difficult to get representative stakeholders from small business. Because small business people are, as you're mentioning, very busy and don't have a lot of resources.
And similarly, you often find the same problem if you're trying to engage with consumers, because there's no sort of obvious voice—whereas big business is often well represented. So I think it's about making sure that when you're doing or designing the experiment, you're coming up with a mechanism by which you get all the voices at the same time. And that can be quite complex.
Audience Member (Wingser): Hi, Wingser from Deloitte Access Economics. So as a behavioral economist and an evaluator, I have been so energized by this morning's conversation—so thank you so much for sharing your experiences.
My question for the panel generally is in regards to what happens after experimentation. So how do you ensure that the results of your trials actually scale and have that scalability to apply to the state more broadly? And how do you think about scalability in the work that you do?
Ravi: Sure. So I think this is probably one of the ones that is more challenging than other questions. So part of this relies on organizations kind of taking up the results of the trial and implementing the results.
The ideal world is that actually the first trial isn't the only one, and it's really a process of continuous improvement. We often talk about trying to implement what we call a "winner stays on" model. So you might run a trial against the business as usual, and you end up with a new process that shows some sort of improvement—you see some improvements, which is great.
But the idea is then that becomes the new business as usual. You think about: well, what's the next thing that we can do? How do we then simplify the process further? Are there changes we can make? And sometimes that might fail, and you go backwards—and all right, fine, we'll stick with the business as usual.
But keeping on innovating and iterating and things like that. But I think one of the other things as well when scaling is to ensure that the benefits you're seeing do come through when you go to scale—which sometimes they don't. And if that's the case, trying to understand why that might be—like what is going on in that process.
And indeed, part of that process is thinking about: well, how will this scale if we do scale it up? So whenever you're designing a trial, that is another thing to think about if you're really looking to think long term.
But another ideal is that hopefully, if experimentation works—if it is successful—the scaling comes not necessarily through scaling up that particular intervention, but through scaling up that process of experimentation.
I know there are certainly places where we've worked where that has become more common. So a couple of examples—in the UK, I know that there is a lot more openness to experimentation. The Australian Energy Regulator as well—I think their board is very keen on testing and evaluating. So for any new regulations they bring in, they generally try and do some sort of experimentation now.
You know, that wasn't the case, I think, kind of five, six years ago. But they sort of started on this process, have found it to be quite valuable, and now that is sort of almost their default position—to say, well, what are we putting out? Have we tested? What does the testing say?
And that testing can be quantitative and qualitative, but they're looking for that to be built in. So really, I think there’s a few ways in which that sort of scaling can hopefully happen.
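[Editor's note: as an illustrative aside, a minimal simulation of the "winner stays on" idea Ravi describes, in which whichever variant beats the current business-as-usual becomes the new baseline for the next round. The effectiveness rates and trial sizes are hypothetical assumptions for illustration only.]

```python
# Toy simulation of a "winner stays on" sequence of trials.
import random

def run_trial(baseline_rate, variant_rate, n_per_arm=1000):
    """Simulate one two-arm trial and return the observed success rates."""
    baseline_hits = sum(random.random() < baseline_rate for _ in range(n_per_arm))
    variant_hits = sum(random.random() < variant_rate for _ in range(n_per_arm))
    return baseline_hits / n_per_arm, variant_hits / n_per_arm

business_as_usual = 0.20            # hypothetical current success rate
candidates = [0.22, 0.19, 0.25]     # hypothetical true rates of successive new ideas

for round_no, candidate in enumerate(candidates, start=1):
    observed_bau, observed_new = run_trial(business_as_usual, candidate)
    if observed_new > observed_bau:
        print(f"Round {round_no}: new variant wins "
              f"({observed_new:.1%} vs {observed_bau:.1%}); it becomes the new baseline")
        business_as_usual = candidate   # winner stays on
    else:
        print(f"Round {round_no}: business as usual stays "
              f"({observed_bau:.1%} vs {observed_new:.1%})")
```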
Moderator: Marco to Steve, was it?
Audience Member: Yeah, but does it help resolve Rose's question of how to get the ministers to play? So if one minister is doing better on sludge and the other minister's not, and if we can have the Liquor and Gaming Authority doing regulatory experimentation—wow, we can do anything, can't we?
Dave: We definitely can do anything. And I'll take you back to my first point as well—let's not forget this moment. Because it's really in three years, five years' time—where have we gone?
And again, to plug this paper that's been launched today—it really sets up not merely just that approach of experiment and then scale, but what I'm reading in this document also is almost that state of permanent experimentation.
And also, I guess, connecting that with the point about the sludge and the voice of the customer—and connected data—is that if you're continuously engaging with your customers and stakeholders, business, people who are touched by any regulatory system, then you're seeing outcomes, you're aware of impact, and you're seeing opportunities for change.
And I think that's the critical thing. KPIs for ministers is probably above my pay grade, but I'm sure we can have a chat about that.
Rose: Well, I think, as I said before, within some of the regulatory spaces there is absolutely that focus now—definitely in the OECD report and so on—on setting clear objectives and targets for regulatory reform.
And I think you can see that in the New South Wales context. What we provide with those sludge audits is an opportunity actually to do a bit of before and after—so seeing where the degree of customer friction is at point A, implementing reform, coming back at point B.
And I'd say there are probably similar things in terms of Alex's work with the sandbox as well.
Moderator: I think we had one behind there.
Audience Member (Narelle Hooper): Thanks very much. Narelle Hooper from Company Director Magazine. Really appreciate the discussion, and I know New South Wales has had leading processes on this.
And Rose, this might be a question for you or perhaps Michael—how do we translate these experiences and lessons across the state borders and build a community of practice so nationally we can all lift our game?
Rose: Thanks. Speaking with my National Regulators Community of Practice hat on—absolutely, this is the sort of thing that is the reason why we have this community of practice across Australia: because we do want to learn from each other.
And these ideas about agile regulation and so on are often the subject of the webinars and the talks that we do. And I think there's probably an awful lot that goes on in terms of inter-jurisdictional discussion that maybe doesn't all make it to the surface.
But Alex already mentioned how he's talking with the other jurisdictions about what they're doing. And certainly in areas like work health and safety and consumer protection, where there are national laws, people try to work well together—and if someone tries something in one jurisdiction, to let the others know.
We certainly had the experience with Minister Kean that he was very keen—so to speak—on changing some aspects of the consumer law in New South Wales. And we went first with a few things like gift cards and ticket reselling, which in the short term incurred the ire of our colleagues around the other states because we had sort of moved first. But that was an experiment—we did it in New South Wales, it worked, and it was picked up nationally. So I think there are opportunities to use competitive federalism to make improvements and, you know, have some experiments in those jurisdictions that are willing to go first.
But in the end, you do sometimes just have to live with the fact that some governments are probably more adventurous than others. And, you know, New South Wales during COVID was definitely one of the leaders, because I was on so many inter-jurisdictional hookups where we said, "We're just doing this in New South Wales," and they'd watch, and then two weeks later they'd all be doing it as well.
Michael: Yeah, I'll just throw in—I mean, we had a question earlier about whether we could benchmark states. It's tough. I think, you know, states can be pretty defensive about being benchmarked—"Oh, you don't understand." I mean, we do this annually in our Report on Government Services. It doesn't have a strong regulatory emphasis—it does more on service delivery. And it's useful, but there's a lot of defensiveness around that.
So I think it's tough. I think harmonization is tough, because every individual jurisdiction tends to have the one thing that's really important to it that has to go into the harmonized instrument. I think the key is two things: where it's possible, do the mutual recognition—I think that is a kind of red tape reduction that doesn't rely on harmonization, just on a bit of trust of regulators in another jurisdiction.
But also do things like showcasing success stories—without a huge amount of, you know, "A's doing it better than B," but just kind of selling the story. It is remarkable to me—we defend the federation as the great laboratory for experimentation, but nobody knows what's going on in the next-door jurisdiction, it seems.
Ravi: Just to add to that, Michael—particularly in those areas that have a very strong state or territory focus—I think a key thing might also be actually publicizing not just successes but failures as well, right?
Like, if you want to be genuine about sharing and helping other people, it's not just, "Hey, look at me, I'm so great." It's actually, "We tried this stuff and it didn't work," right? And I think that shows that, firstly, you're being a bit more humble and upfront, but also it is genuinely helpful—right? Like, "We did this thing, it didn't work, so you guys don't do that."
I think that can also be a really key step. And some places are better at this than others. And I know that actually Dave's team—the BIU—is good at highlighting when experiments don't work as well as when they do. But I think, yeah, a lot more organizations could do it.
Moderator: All right, we might go for two more. I think we had one here and one over there. Yeah, just here—why don't we go there first and then…
Audience Member: Do individual personalities matter? So say, Alex, if you got a different job tomorrow and moved on, how would things go?
Alex Kennedy: I think they can. In our experience, obviously we've got a bit more of a formal experimentation framework set up, which sort of forces the organization into a posture of allowing experimentation.
So we have the formal sandbox, which has a framework, and it says, you know, if you have something that doesn't fit within our existing framework, you can apply to this for us to trial it and experiment with it. That kind of forces the organization into a posture of experimenting because you've got that system there—you get an application in and you've got to assess it according to the framework and go ahead with it and take a look at it.
So that's where I think the formal processes can be quite helpful at fostering a bit of that culture so that you're not relying on individuals so much to have that open mind towards experimentation.
It obviously becomes a little harder with the more informal experimentation that regulators do—as Rose was talking about earlier—which probably does rely a bit more on personalities. But that's where the culture across an organization can really help.
Moderator: We'll go here, and we might have to be quick or else I'll have failed in my KPI to Peter.
Keith Gomes: Keith Gomes, Advisory Board Member at Australian Super, but also immediate past board director at Clubs New South Wales and on the board of the Universal Services Obligations Telco Regulator.
Peter, this has been a fantastic morning for me personally. I don't know how many board directors are in the room, but very enlightening. And to you and the Commission—you've done a great job.
But my question is actually to Michael. And I want to focus on the word "productivity," which is why we're here today—looking at the impacts on productivity.
For me, productivity has two big dimensions: time and importance. And this is a particular request as much as a question, Michael—because of the federation and, from a positive point of view, its consequences.
If I look at all of the experiences I've had pre-, during, and now post-COVID—whenever that's going to be—outside of PCR tests, the biggest, least time-efficient experience for me, given its importance, is when I visit my GP.
And given where the cashless society has gone, I would argue that fewer than 10 percent of the people in this room have ever used an e-script. I used one on the weekend. Why didn't my GP tell me about that ages ago?
So I think it's one of the most inefficient uses of our time—and those visits to the GP are very important, particularly for those of us who go more often than people much younger than me.
So my question to you is: how can we take something so important—it's something that we haven't got a choice in, it's right up there in the important scale—and how can we make that experience much better?
Michael: So it's a good point. I think it ties in with something that came through this morning and probably is a bit of the meta theme for today: how do you take the sort of level shift that we've got during COVID and use it to kick-start an ongoing process of experimentation and reform?
E-scripts are a good example of that. I think telehealth's another. We kind of did the thing that you would naturally do during COVID to bring a thing online—that's the level shift, right? Obviate the need to go and sit in the GP waiting room. Obviate the need to get a handwritten scroll to go and fill your script at the chemist.
The question to me is: what’s the next phase? What’s the next bit of innovation that comes on the back of that? Does telehealth remain forever just a phone consultation or a video consultation? Is it just the digital version of the physical thing? Or do we actually use the technology now to develop something that’s even better—right? That economizes on the clinician’s time, economizes on the patient’s time, gives a better quality experience.
And with e-scripts—I mean, you’d expect me to say this—but is that a mechanism by which some of the regulation of the pharmacy industry could effectively be broken down a bit? Because we’re now managing to fill these things electronically.
I think that's probably where the promise is. So yes, there's the time saving, and then there's the ongoing, broader reform that could come of it.
Michael: Peter, I've failed. But, um—back to you. Thank you very much to our panelists—terrific to get all of that breadth of experience, practical and theoretical. And thanks again, Peter, for the paper and for today.
[Applause]
Peter: Thank you very much, Michael, and thank you so much to the panel.
I think what we've heard from the panel is this: with regulatory experimentation, we've clearly got to set the objectives right at the start so we know what we're doing. We need to do clear evaluation later. We need to collect the data. And we've heard about agile regulation.
We've also heard about the pros and cons of having an MOU with yourself, Rose—I think that might be going a little bit too far.
We’ve also heard from Alex—new technology coming in, the new way businesses are doing things. Regulators have to be ready for that.
When we focus—and I continually focus—on celebrating the losses and the failures, it's all very well to say, but culturally each of us often thinks: that's the person who did that wrong.
Now, I know when I used to work in a legal area, we knew a few of the people in the office—"Oh, that was the person who lost that case. That was the person who lost this case." And so it’s easy for us to say we’ve got to celebrate losses, but we’ve got to do it.
And I know very early in my career I had the benefit of an Assistant Secretary in the Department of Finance. I was a young graduate there—it was the first year the Department of Finance had started—and I had this idea: if we could get all the floppy disks coming in from all the other agencies and put them together, we could do the budget a bit quicker.
And so I was allocated a small amount of money to do it. I went around to a number of departments, tried to get them to undertake this initiative—but it was a failure. And I had to keep asking for more money and more money. In the end, I spent $100,000 on this, and it was a flop.
The Assistant Secretary, who was my boss—Tony Harris, a lovely man but quite tough—well, every time I was going to walk down the corridor, I tried to hide so I wouldn't run into him. I was just so embarrassed.
Anyway, one day I got the call from Tony's EA, who said, "Look, the Assistant Secretary wants you to come and talk to him." And I was paranoid, because—like most people of my vintage—I'd started in the mail room. And I thought, "I do not want to be sent back to the mail room."
So I went in and saw Mr. Harris, and he said, "Look, tell me about this experiment you did with the floppy disks. I understand it didn’t work."
I said, "Look, it didn’t work because all those boofheads in the other departments—they just didn’t understand what a good process was going to be. And they’re all idiots. They just did not understand that by sending the floppy disk in, we’d all be better off."
And as I was talking, I realized I hadn't engaged these people well enough. And then he said, "Well, okay, that's no good. Next time, do it better. Any questions?"
And I said, "Look, Mr. Harris, I thought you were going to sack me and send me back to the mail room. But you know, you’ve encouraged me. Why aren’t you sending me back to the mail room? I failed."
And he said, "Peter, I can’t afford to send you back to the mail room because I’ve just spent $100,000 on your education. And it’ll cost too much to send you back."
Anyway, I was very pleased once I realized that. But on a serious note, I think all of us—when we think of someone who's tried something and failed—please, let's just step back and encourage them. Because that's what it's all about.
Anyway, we're going to come back in five minutes—have a break. You're going to enjoy the next session. We're going to be using Slido, and we'll be working at the various tables. We'll have a facilitator from the Productivity Commission on each table, and we're going through a thing called COM-B.
Now it’s not a Volkswagen or something like that—it’s a new framework, which is: Capability, Opportunity, Motivation, and Behaviour. C-O-M-B. Right?
And we’ll go through it as a group, and then we’ll be doing Slido. So it’ll be a bit of fun, and it’ll help us work to the next stage of regulatory experimentation.
So we’ll break for a couple of minutes. But before I do—if you can just join me in thanking Michael and the panel again.
[Applause]