Join our expert panellists as they explore the art of mastering rapid reviews. Learn how to quickly review even the most complex of work programmes to identify what’s working, what could be improved, and what options are available to improve efficiencies.
Kia ora and welcome to today's webinar. Today we're going to be talking about rapid reviews. We'll go into a bit more detail of what we mean by this later, but at a high level we'll be talking about ways of generating insights quickly to support decision making.
In the current context, where existing work programmes are being reviewed or new programmes are being stood up, this is particularly important to ensure that limited budgets are directed towards the most effective areas and deliver results. Between the three of us on the panel today, we have experience in delivering rapid reviews in different circumstances, based on our respective areas of expertise. My name is Nick Leffler, I'm the Organisational Design and Change Leader at Allen & Clark, and I've mainly led rapid reviews in organisational assurance of systems and processes or in response to critical events.
Kia ora tātou, I'm Marnie Carter and I'm the Lead of our Evaluation Research Practice. When I've been involved in rapid reviews, it's been when I've had to bring an evaluation lens to quickly understand whether a policy, a programme or an intervention is being delivered as expected, and it's often been to inform funding decisions that need to be made quickly. And I'm Jason Carpenter and I'm the Portfolio Lead for Regulatory Design and Assurance.
I've worked on a range of rapid reviews on a range of topics in New Zealand and Australia but often involving a regulatory or legislative driver. So what do we mean by a rapid review? Well it's in the name, in that they're done quickly. Typically we tend to think about them as being done in something like six weeks or less and generally in response to specific circumstances that may have arisen.
They therefore tend to have a high level of scrutiny, either internally or externally, as they relate to an event that is taking place. Yeah, thanks Nick. You mentioned six weeks; to be honest, that's quite a generous time frame. We have done ones with as quick a turnaround as a week or two.
So it really is something that happens at pace. In my experience it's often been when there's a work programme or a project that's starting to get close to the end and it might be that something's gone a little bit off track or a minister might have a change in priorities or it's seen as not performing well and that funding deadline is looming. So you need to really quickly understand what the programme's achieving, whether there's been any efficiency barriers or anything that needs to be quickly improved.
So as an example of this, back when COVID meant the borders were closed, there was an intervention brought in to help the tourism workforce find employment. Now, when the borders opened again, there was a need for some really rapid information to understand: do we actually need to continue this intervention, or can we go back to business as usual? And that information needed to happen quickly, because the border was opening pretty soon.
Yeah, those are similar circumstances to another type of rapid review that we've done, where you might be getting ready to launch a new service or scale up an existing service, and you require some organisational assurance that the organisational capability and the processes are fit for purpose and will deliver the expected results. So in the COVID period as well, very early on, when contact tracing was becoming increasingly important to prevent the spread, we were asked to do a very rapid review of the processes that had been put in place for contact tracing, to provide the Ministry of Health with some organisational assurance that it would actually reduce the spread. Yeah, good point.
And I think we are seeing a bit of a pattern, in that COVID, which was an unprecedented and critical event, meant that there was a lot of policy and decision making happening at pace, and a need for a lot of quick information at the time. So COVID was definitely a time of rapid reviews. Yep.
The other type of rapid review that we've done relates to critical events, where something may have taken place and leadership needs to understand what caused the event, what can be learned from it and what safeguards can be put in place to avoid something similar happening in the future. An example of that, which we did somewhat recently, was for an Australian regulator that had had a potential breach of confidential information. They asked us to come in and look at the operations, processes and legislative requirements to make sure they had the systems in place to be confident it wasn't going to happen again. So responding to that event and then very quickly making sure that they had the right processes.
Yeah, and something like a privacy breach, you've got to move quickly on that. That's not something where you have the time to do a leisurely review. Yeah, and I think that shows in the prioritisation that people are able to give it: something has happened, it's given everyone a scare.
You can then commit resources to making sure everyone's in place to do it quickly. And one of the other types, especially in the work I do, is where impending or recent changes in policy, regulation or the legislative environment can have significant impacts on organisations or delivery, and people need to understand what those changes mean for the organisation or the regulated parties, or how it all works. So there are examples of rapid reviews in the COVID space around COVID orders, what does this mean for specific things at the border, but also other cases where there have been legislation changes and people need to understand what that change means for them and their organisation and how they need to respond to it.
So this sort of legislative or regulatory change is the event that triggers the need for the rapid review. Those are four different types of reviews we've just talked about, and there are probably other circumstances that would lead to a review that needs to be done quickly as well. But we find these four broad types useful in framing what we've learned from delivering them.
And what we'll talk through a bit today. So throughout today's webinar, we'll give some examples of each of these and what we've learned from them. And in that context, it's also useful to remember that not everything is suitable for a rapid review, or there may only be aspects of things that have happened that are suitable for a rapid review.
I think the big thing there, as we said, it is fast. It's normally in response to something that's happened or is about to happen. And the types of recommendations that you can give are a little bit different than what you may be able to give if you had, you know, 10 months, a year, two years to do the full review.
And so just understanding the trade-offs that come through will be something we talk about today. That's a really good point, Jason. I mean, what would be a time when you wouldn't want to do a rapid review? If the review required significant levels of alignment around things like what success means. If it's a big, broad set of outcomes around wellbeing or something, trying to gather that information and build something new to check how well it's tracking can be really difficult in a couple of weeks.
So it's often things that are operational or specific in nature, where the recommendations can be put in place to make short-term, immediate changes to work programmes or ways of operating, or to recommend bigger shifts. It's those sorts of things that have a narrower focus, with a very specific set of stakeholders and ways of working that you can look at, rather than building something new that needs buy-in from lots of different stakeholders. So we've talked a little bit about what we call a rapid review, but what makes them different to a more traditional review, when you have more time? Some of you might have experience with reviews that take a relatively linear approach, similar to the diagram that's up on your screens at the moment.
But what makes a rapid review different is primarily that these steps don't tend to follow in that same linear manner. There tends to be some iteration back and forth. You might need to go back to reconsidering your scope when you get to the end because you found that you didn't get the data that you expected to get from the process.
And so you saw in those diagrams that they are relatively similar methodologies with similar steps, but you might be doing them in a different sequence and uncovering things as you go. So this usually means that your document review and your engagement will be happening at the same time and evolving based on what you've gathered so far. Yeah, thanks Nick.
And I'd say for me that the kind of toolkit I have isn't very different to what I use in any other type of evaluation. So I'm going to be looking at documents, I'm going to be looking at data, I'm going to be doing interviews and engagement and talking with people. But the difference is the order and the way in which I would do those things.
So I'm reviewing my document and within that document I'm finding a strand of evidence that I would then immediately start testing through engagement and then that engagement might find an additional strand or something to explore further. I might go back to the documents, do some additional interviews. So it really is much more integrated and much more circular in the way we do it.
That kind of also leads on to how we would test a hypothesis. In traditional research or evaluation you have your questions set at the start, and you might have some for a rapid review too, but you're essentially forming your hypothesis based on the data you've gathered early in the process, and then you're testing it, refining it and exploring it further through engagement and investigation. So again, it's that iterative process: not "here are our questions, here's the answer" or "here's our hypothesis, here's whether it was found to be true", but really just iterating as you go.
In best practice you might do your full document review, then move on to stakeholder engagement, in quite a formal process. It's just that, by nature of having limited time, you have to run those steps together. Exactly, and that's one thing I found hard as an evaluator: you're very much trained to do your document review, do it very neutrally, pull out the themes, and then go back and look at what's emerging.
You just cannot do that in a rapid review. And then one of the key differences we've found is that the nature of the recommendations is a bit different to a full review. We do find that they tend to be linked to an event, linked to something urgent. The recommendations are normally quite immediate and focused, so they're about triggering action rather than a strategic end point for 2050.
It's more what needs to change right now to sort of respond to this thing that's happened or to prepare for what's coming down the pipeline. And so all those things that we've talked about, they do represent some of the challenges around rapid reviews. So using interlinked and circular evidence which we've just talked a little bit about, how the nature of engagement needs to change as the review progresses, maintaining robustness is a really, really, really important feature of it.
How do you have faith in the process when you are having to make trade-offs and then how you go about administering the review process, noting the changes, the potential need to revisit scope and just managing the engagement. It's a little bit different and something we'll talk about a little bit more. Up on the screen now we have a quick poll.
So we've got four things that we're sort of talking through today. What we'd be keen for you to answer is what do you see is the biggest challenge with running a rapid review. So the options there are interlinked and circular evidence, the changing nature of engagement, maintaining robustness and administering the review process.
And so if you click on that, we'll be able to see what comes through and we can then definitely make sure that we spend a bit of time on that and answer any questions that come through about that. So please do vote in soon, so put your initial reaction and get it in and then we can respond. And if you do have any other challenges that aren't up there, feel free to pop them in the chat and we'll address them in the questions at the end.
Not surprisingly, the maintaining robustness one that you mentioned, Jason, is one of the ones that's coming through as the main one that people think is a challenge, but there's quite a lot around the changing nature of engagement as well as administering the review process. Thanks. So I'm going to start with the one that got the least votes because that's what my notes say.
So I'm going to talk a little bit about the interlinked and the circular nature of rapid reviews. So as we've talked about, it's just not linear. So I cannot do my scoping, then my data collection, and then my analysis and my findings.
It doesn't work like that. So you almost have to approach it from something similar to a legal discovery process. So I would read a document and I'm immediately, as I go through, thinking, so what? What does this mean? What's next? And then the document might have told me that I need to talk to someone, say a programme manager.
I wouldn't wait. I would then talk to that programme manager and then ask them about it and they would tell me, okay, you need to read these other five documents and talk to these other three people. So then I would follow that line of enquiry.
They might point me to some more documents or back to the original one. With that though, it is important to check in with whoever has commissioned the review. So there is a danger, particularly in a rapid context of following something that might end up being irrelevant or a tangent, and you do not have that time to waste when doing a rapid review.
But that also brings to mind something we'll talk about more under maintaining robustness: you need to make sure that the relationship with the person who's commissioned the review is at a stage where you can reliably test information with them. You don't want to risk capture and being sent down the wrong path. You still need to keep an open mind.
So that is something we'll come back to when we talk about maintaining robustness. Yeah, that's a really good point. Going back to the way you're looking for evidence in a circular manner: essentially, you're not going through and reading everything and then summarising.
You're building the story or the narrative as you're going. So again, you're not doing all your evidence collection and then your interpretation. That interpretation is essentially built in and it's evolving as you go about the review process.
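As a purely illustrative aside, and not something the panellists present, that discovery loop behaves like a worklist algorithm. Here is a minimal sketch, where the hypothetical callables `follow_up`, `is_relevant` and `deadline_reached` stand in for the reviewer's judgement and the hard time limit:

```python
from collections import deque

def rapid_discovery(seed_items, follow_up, is_relevant, deadline_reached):
    """Worklist sketch of the discovery loop described above.

    seed_items: the initial documents and people from the commissioner.
    follow_up(item): the new documents/people an item points to (a doc
        names a programme manager; the manager names five more docs).
    is_relevant(item): the triage gate -- if not relevant, it's out.
    deadline_reached(): the hard stop; a rapid review can't chase every lead.
    """
    queue = deque(seed_items)
    seen = set(seed_items)
    evidence = []
    while queue and not deadline_reached():
        item = queue.popleft()
        if not is_relevant(item):
            continue  # out, and no time spent on anything else
        evidence.append(item)  # feeds the evolving narrative
        for lead in follow_up(item):
            if lead not in seen:
                seen.add(lead)
                queue.append(lead)
    return evidence
```

The relevance gate and the deadline check are the two places where the "rapid" part bites: both prune lines of enquiry that a longer review would follow.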
Quite important, particularly in tight time frames, is the triaging of information. You need to be pretty disciplined and pretty hard about it: get some information and identify whether it's likely to be relevant. If it is, include it.
If it's not, it's out and you do not have the time to do anything else. That reminds me of a rapid review we were doing when a critical event had taken place and we only had a very short period of time to do it and the document collection was over 700 documents and obviously you're not going to have time to go through everything. So finding ways to categorise them and understanding which ones duplicated others or which ones consolidated information from others was very important in being able to get through the documentation and identify the strands that we then wanted to explore in the engagement.
So how did you do that categorisation?

Well, I think we worked closely with the programme team and the authors of documents to understand what their purpose was and whether they were feeding into other documents, and then sense-tested some of that with other parts of the organisation and external stakeholders to understand which parts of those documents they had seen. That helped us establish a core set of documents that provided the programme information. It was an evolving environment, so there were daily updates, and we quickly confirmed that it wasn't necessary to go through every daily update, because a lot of the time there was no new information. And having that relationship with the team that was delivering the service, we were able to identify where the critical events had happened and then focus only on the documents immediately before or after.

Yeah, that makes a lot of sense, and when you've got 700 documents there's no way you can get through all of them. There's a sense as well that you've been asked to be an independent reviewer, so you are having to review these documents with a critical eye: what is it actually trying to answer, who's it for, why is it important, what's it trying to do? So not just accepting everything at face value. It is evidence for the review; it may have lots of other uses, but what you need it for is a little bit different, so it's about viewing it as evidence and keeping that critical eye.
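As an illustrative sketch only, the mechanical part of that triage, dropping exact-duplicate daily updates and keeping documents that touch the review questions, might look like the following. The plain-text files, directory layout and keyword list are all assumptions; the real judgement about each document's purpose still sits with the programme team, as described above.

```python
import hashlib
from pathlib import Path

def triage(doc_dir, keywords):
    """Illustrative first-pass triage of a large document set (e.g. 700+).

    Drops byte-identical duplicates (such as reissued daily updates) and
    keeps only documents mentioning at least one review-question keyword.
    """
    seen_hashes = set()
    kept, dropped = [], []
    for path in sorted(Path(doc_dir).glob("*.txt")):
        text = path.read_text(errors="ignore").lower()
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest in seen_hashes:
            dropped.append(path.name)  # duplicate: no new information
            continue
        seen_hashes.add(digest)
        if any(kw in text for kw in keywords):
            kept.append(path.name)     # likely relevant: in
        else:
            dropped.append(path.name)  # not relevant: out, no second look
    return kept, dropped

# Hypothetical usage, with keywords drawn from the terms of reference:
# kept, dropped = triage("review_docs", ["contact tracing", "escalation"])
```

A first pass like this only narrows the pile; identifying which documents sit immediately before or after the critical events still relies on the delivering team.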
Yeah, that's a really good point. Probably the last thing I wanted to say on the interlinked nature of evidence is that because you're following several different tracks at once, you're likely going to be at different points. You might have followed one strand of evidence and quite quickly got to your conclusions, and then it's just about validating them. For other tracks of investigation there's likely to be a bit more work to do, a few more strands to follow, so you might still be in the discovery phase or the data collection phase, which of course are interlinked. You're at different points: you're pretty confident on some findings and testing them, while still really doing the investigation on others, and you have to be comfortable with that when you're doing a rapid review.

We've got a couple of great questions online here. Libby's asking: what are the challenges around getting staff to engage and participate when they don't share management's sense of urgency and are more concerned with getting their day-to-day work done?
That's definitely a challenge, and I think ensuring the right communications go from management out to the team, around the importance of engaging, is critical to being able to deliver these in the time frames. And it's not necessarily only internal, because sometimes there are external stakeholders involved, and getting them to have that sense of urgency is similarly critical. You also need to work with the commissioning agency to enable that, because they need to communicate to their stakeholders the importance of participating.

There's a question from Gordon around managing hidden agendas and egos. I think it can be linked: a lot of the time you've got an organisation where people have different sets of priorities that may not align, so one manager doesn't care as much about the review as another. The really key thing is the commissioning, and ownership by the commissioning person, to make sure those stakeholders are available, because you will be relying a lot on engagement as a way to gather evidence and understand what's happening. It really is fundamental to the whole process that you have a way to make sure they have buy-in and that they are available.

Christine is asking: is there an accepted theoretical basis to the rapid review methodology? I would say the answer to that is: it depends.
In my experience, you wouldn't pull out your how-to-do-a-rapid-review methodology book. What you would draw on is other research and evaluation methodologies and adapt them, so essentially you are using a theoretical basis. For example, if you're doing a rapid review and bringing an evaluative lens, you'd essentially be taking somewhat of a grounded theory approach: you're going in without necessarily expecting to find anything in particular, and you're looking at what's emerging. But that's something you draw on and adapt, rather than anything specific to rapid reviews.

And one of the things with that is that the scope, what's in scope and what can be answered with those types of evidence, is quite important. If you're trying to answer really big, broad questions very quickly, then you may get into ropey territory a bit faster than if you're talking about more focused, operational things or specific events. The types of evidence that can lead towards conclusions in a robust way are different from trying to assess whole outcomes for a system.

Exactly, and you have to tailor it every time for the length you have, because what you can do in six weeks is different to what you can do in one or even two weeks. So there is a real need to make sure that how you're approaching it is framed around what time you have and what you're trying to achieve out of it.
And we did touch on that with the question just then, but the evolving nature of engagement is something that really defines the rapid review process as we're talking about it. Here we talk about a narrow-broad-narrow kind of approach, where your first initial key stakeholder discussions have to be with the most relevant people, who have the most pertinent information about what's happened. Marnie talked about doing a review where there's a programme manager: you build this view of who the absolutely fundamental people are that you must talk to, because they're involved or have really key information, and you talk to them first. Then you use that to work out who else you need to talk to, to unpack the different questions you've been asked to answer. So you start narrow, then go broader. The extent to which you can do that will depend on the type of review, how fast it is, who's available, those sorts of things. But then you really bring it back: that sense of validation, making sure you are testing what you heard from the broader set of stakeholders with those core groups, to make sure it's right and matches their understanding. So start narrow, go broad, and then bring it back narrow to retest.

One of the big things when you're setting that up is being really clear on why people are included. With a big review you can often have secondary objectives, to make sure that people are engaged and feel they've been listened to. With a rapid review that's often not possible or feasible, so you need a really clear way to say: this is a core person or organisation for this reason, and therefore I'm going to prioritise meeting with them; these people are really important to the system, but maybe they're secondary stakeholders, so you get to them if there's time.

Yeah, and in terms of who is in that core group, I think it's important to know that you need to rely to some degree on whoever's commissioning the review to tell you who's in there, but your initial investigations might also identify other people who need to be included. And then you need to back yourself in terms of your instinct. Once you've done this a few times, you know who those key people are likely to be, and sometimes you might have a bit of a firm discussion with whoever's commissioning it on who needs to be there. But as the reviewer, you've got to back yourself: you've been asked to do this, so trust that you're going to know who needs to be in that initial group.

And there's also something around taking a critical lens when you're being told that a group isn't relevant, because you get a sense from all the engagement of who might and might not be. It may be, again coming back to that point about not being captured or steered in a particular direction, that you want to question it when you're being directed not to engage with a group, because through that conversation you might actually find some really interesting things.

Yeah, very much so. Every step of the way, when you're being told how or whether to engage, or what documents you should include or not, you've got to keep that critical eye in the back of your mind and keep asking why. And the team is really important to that: you are bringing your experience and your expertise, having done reviews and looked at different organisational approaches, to have that sense of whether this is the right set of people to be talking to. Is there someone else? Is there something missing? What else is happening behind the scenes?

I think another part of that is that as you work through the process, the type of engagement shifts a little bit. At the start you're potentially coming in relatively cold to an area: you may know about the sector, but you haven't been in that organisation or that specific area, so your approach is really extractive, trying to get as much information out of people as you can, as quickly as you can. Then, as you move through the process, you shift to a testing and validation kind of approach, using those hypotheses and that engagement to test that you've understood it correctly, test that there's nothing else missing, and use that process to fill in the gaps and work out where your hypotheses are going to land.

One example of that, with the Australian regulator around confidentiality, was that the initial group was very organisationally focused: the legal requirements, how are we going to operate as a regulator. Then, as you went broader, you got into wider conversations around culture, how we like to operate, and what best practice might be, and you had to bring that back in and test it against what were a little bit like railroad tracks: completely different views on how the organisation should operate. Having a way of understanding those different pieces, and then testing and validating what they meant in the different contexts and how they might lead to recommendations, which in my work do tend towards the more legal end of the spectrum, around what the organisation has to do to be safe. So you move from extraction to testing and validation quite quickly as you work through.

Those different views you talked about, Jason, are interesting as well, particularly when you're doing a review of a critical event, and to the point we made about backing yourself. You'll often find that not all groups within an agency will agree with the findings or recommendations you're starting to uncover, and when you're trying to validate them you might get pushback from a certain group, particularly if they were a core part of the critical event. We've done reviews where the people actually doing the work at the coalface were starting to agree with what we were finding, as were senior leadership, but there were levels in between who felt uncomfortable with what we were finding and challenged it. We had to find a way of making sure we had enough evidence to support the findings, to bring them along on the journey, to ultimately get to the point where the review was accepted.

And also being in the mindset that that challenge is actually a useful part of the process, because it does mean you've got to be sure that you have enough evidence to have made that call. So I quite enjoy that challenge process, because it makes sure you haven't left any stone unturned.
Yeah, and if you know that that challenge is there, you can then respond to it in the way you set up your report, the way you talk about it, and the way you engage with decision makers, acknowledging that there are differences of opinion based on different priorities, but that this is what the findings are. So, back to that big point that everyone was interested in, around maintaining robustness. Let's be clear: rapid reviews need to maintain rigour, but the speed also means there has to be an acceptance of the trade-offs and compromises you might have to make in terms of the completeness of the evidence you're able to gather, with the time available obviously significantly influencing what you can do in that review.
So the decisions you make need to be transparent, as do the inferences you've reached, and you have to document all the caveats and limitations, because otherwise somebody might say, well, I don't agree that there's sufficient evidence here. You have to tailor it to what you're able to do in that time and document everything really carefully. Yeah, absolutely. One rapid review I did that went quite off track was one where we didn't do that well enough. I was coming to it with my evaluator background, and we put a nice little section in our report that listed the limitations and the caveats; that was not enough. So I really have learnt that it doesn't necessarily have to be in the final report, but you need really clear documentation of what decisions were made, why they were made, and what that means in terms of the evidence that was followed, included or excluded. You need that documented, otherwise there is potential for picking apart and critiquing of findings, and you want to be as robust as you can and make sure you avoid that.
And making sure the findings are actually based on evidence. Some of that is the scoping point: the types of issues and questions that are suitable for rapid reviews. The recommendations do need to be tight and immediate; those bigger, broader, more challengeable questions are potentially just not suitable for a rapid review, because you need that bigger evidence base. And that comes back to the circular nature, because if you go through and find that you actually cannot get enough evidence to answer a question in a robust way, then you need to circle back to your questions and reprioritise or refine. One way we've ensured that we had sufficient evidence to justify the findings, in a review of organisational capability and process, was to take what we called at the time an audit-light approach, where for anything that was said we asked for documentary evidence that this is the way the process is actually supposed to happen. Where that documentary evidence didn't exist, we had to exclude what we were told about the process, because you couldn't support it with the standard operating procedures that were supposed to be in place. So that's one way we've done it in the past to make sure that the evidence supports the findings.
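As a purely illustrative aside, that audit-light rule, no documentary evidence, no finding, can be thought of as a simple filter. The sketch below assumes a hypothetical structure where each finding carries the IDs of the documents that support it; nothing here is drawn from the panel's actual tooling.

```python
def audit_light(findings):
    """Keep only findings backed by at least one documentary source.

    `findings` is a hypothetical structure: a list of dicts like
    {"claim": "...", "sources": ["SOP-12", "daily-update-2020-04-03"]}.
    Anything said in an interview with no matching document (such as a
    standard operating procedure) is set aside rather than reported.
    """
    supported = [f for f in findings if f.get("sources")]
    excluded = [f for f in findings if not f.get("sources")]
    return supported, excluded
```

The `excluded` list matters as much as the `supported` one: it becomes part of the documentation of what was left out and why, which is exactly the caveat trail discussed above.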
We talked a little bit earlier about the risk of being steered by the people commissioning the review. Again, part of maintaining robustness is that it's really important to back yourself, to hold that critical view based on the documents and what you're hearing, and to challenge the things that are coming up and what you might be being told by the group that commissioned you. Yeah, and particularly on that broad bit of engagement, it's about making sure that you are including those who might be able to give a counterfactual or a different perspective, or who come to it from a different role in whatever is being reviewed. Okay, so another key point is around administering a rapid review. I know we've talked about everything going quickly and doing everything at pace; one thing that needs to be done quickly but also really well is setting the terms of reference. Obviously in any type of review you need terms of reference, but it is particularly important that they focus not only on what you're going to do but on how the relationship is going to work, particularly between the reviewers and those who have commissioned it.
The reason that is so important is that the commissioners of a review are vital to making it happen: they need to communicate to their stakeholders why it's urgent, why it's important and why it's being done. Stakeholders need to get on board, and the commissioners have a vital role in that; they need to ensure that people can make themselves available at short notice, and that this is the priority. There also needs to be frequent communication with the commissioners, so it's not a case of setting the terms of reference, going off and doing the review, and coming out with a report at the end.
It's very much a two-way street: they're setting the terms of reference, but the review team is also setting expectations of them. What's their role in it? How are they going to communicate it? How are they going to make sure that people are involved? That reminds me of when we did a review of a critical event for an agency. We set up daily stand-up meetings at the beginning and end of every day.
At the beginning of the day to confirm that all the stakeholders were lined up. At the end of the day to talk about what we'd heard during that day, the additional documents that had been discussed that we were wanting to get access to and basically tasking the commissioners to go away overnight and provide that. So the participants in that daily stand-up, was that just the review team or did that include the commissioners? It was with the commissioners so that we could ensure that we were getting what we'd heard about during that day ready for the next day.
One of the things with the trade-offs of doing a rapid review is that that commissioner will have to own the fact into the future that it was a rapid review and that you didn't have 10 months to go and uncover every piece of evidence etc. So making sure that they're across the journey so that they can then justify and defend the recommendations because you'll be gone and they'll have to then live with them into the future. So taking them on that journey is really really important.
That's a really good point. The other key point probably about administering a rapid review is that there is not enough time to set up any new systems or structures or processes. So you need to draw on what's already in place.
So in terms of the data collection that's why we've really emphasised document review because you do not have time to set up a survey and to go and collect a whole lot of new information. You've kind of got to rely on what's there. That includes drawing on existing relationships, groupings, so if there is an existing group that meets regularly can you piggyback onto that and can you use what's already there.
And that includes relying on existing relationships. One thing I will say I've learned is that this makes it particularly important, but also difficult, when engaging with Māori or Pacific communities, or any groups really where you might generally want to take a little bit more time, build the relationship and go a bit more slowly. If you've got a week, you do not have time to do this well.
So you need to be aware of the compromise that entails. It might mean that you need to rely on certain people with whom there's an existing relationship, who are recognised as spokespeople for iwi, hapū or other groups, and it might mean that you need to draw on existing evidence. So if there's been a Waitangi Tribunal inquiry focused on a particular issue and views have been established through it, you can draw on that information. That said, this is one real challenge with rapid reviews: in doing that engagement, particularly with Māori but with anyone really, there is some compromise.
So throughout today's webinar we've been talking about some of the challenges and some of the lessons we've learnt from doing rapid reviews. I guess, if we were to distil it, what are the key themes we would take away from this? Yeah, I think for me it is about that robustness. So it's being clear on the trade-offs that you're making, being really explicit about those, but at the same time ensuring that you're backing your process, that it's going to deliver the evidence required to support your findings. And I think, building on that, you're going to be working in an ambiguous space, everything's going to be changing and you're going to be working at pace, and the team that you build to do the rapid review is really, really important.
They have to be used to working with unclear information, making decisions with limited information, you know testing hypotheses with people that know a lot more than you about different things like this. The team makeup is really critical and similarly the access to those stakeholders is fundamental. We've talked about that a little bit but you're trying to work at pace, you're trying to uncover information, the participation and access to the stakeholders in a timely way is super critical to the overall success.
Yeah and for me it's really that thing around building trust throughout the commissioning process and as you undertake the review so that the findings will be implemented. So as you were talking about before, you want to make sure that the recommendations can be immediately implemented as well as then making sure that the team that commissioned it own it so that you can deliver what you expected. So let's look at some of the questions that are coming through.
So James asked: how can you assess the success of regional plans or policy statements? I think that's a really good question, and my answer would probably be that that would be really difficult to do in a rapid review process; the number of stakeholders, the types of outcomes and the availability of data would probably mean it's better suited to a bigger, longer, more complex traditional review. That said, there are probably parts of the process, or the operations, or how you engage, different specific aspects, that would be suitable for a rapid review. So it really is about understanding what you're trying to get out of it, what the breadth of agreement would need to be, and then what can be done in that sort of rapid time frame.
Yeah Sandy's asked any tips to avoid the rapid review replacing or negating the future delivery of a potential full evaluation. That's a great question. Yeah that is and to me I think there's a couple of ways you can do that.
So the first thing I would always do is be really clear with the commissioner right up front that that is not the intention of the rapid review and that there will in most cases still be the need for a full evaluation. So putting that on the table right up front is pretty key. The second is being clear on the scope of the rapid review.
So, like Jason just said, there are parts of most policies, programmes and interventions that are not suitable for a rapid review. You are only ever going to scope your rapid review onto the bits that are suitable for that really fast-paced, iterative approach. And again, making it clear that the rapid review is focussing on this bit.
There is all of this other stuff that is not being considered and will not be considered as part of the rapid review and that's where you're going to need to focus on an evaluation. Yeah and things like the availability of data that's something you might be able to very quickly ascertain in a rapid review that you know could then lead to recommendations around the focus of a fuller evaluation and time and whether the data is available to support a fuller evaluation. Exactly it's a good point actually when I've done rapid reviews I've sometimes had a kind of list almost of the things that would have been nice to cover but we couldn't or questions that have arisen that we can't answer through this process but that maybe need to be looked at.
And so the rapid review may have been helpful to actually help shape an evaluation. I've got another question here from Claire who's asking how do you set a good purpose or scope that has integrity and can be maintained? I think when you're working on the scope there is a bit of fluidity at the beginning of a rapid review process when you're working through what is realistic in the time frame that the rapid review needs to be delivered in. And so there is a process through the commissioning of working with the agency to really confirm what is the intended use of the review, what is the potential evidence that might be available and therefore how do you set the scope that doesn't need to be changed as you go through the review.
I think that might be what you mean about the integrity of the scope: we showed in the diagram earlier on that you might get to the point where you think you've got findings and recommendations but you're not able to support them with evidence, and you have to go back to the scope. Obviously you want to limit that, so that you really are staying true to the scope you set up front. So there is a process up front. And how do you know how broad or how narrow to set the scope? I think that's a difficult question, and it's something you learn to work through over time.
Potentially, as you're going through the discovery phase, that tight initial engagement Jason talked about, when you're getting up to speed with the programme, that might be the point where you're still refining the scope, because you're discovering what might be available. And I think it's about asking the really open questions, the whys, not assuming anything, and making sure that everything is explicit; going through a real process to make sure there isn't a miscommunication between what they thought they were getting, what they asked for, and how you've interpreted it. So lots of why questions, open questions, what could go wrong: why do you want this, what's triggered this, why a rapid review instead of a formal process? Really unpack all of those assumptions.
And then one of the things I like to do is that pre-mortem, I'm sure there are other words for it, but just ask them: if this failed, what's the most likely reason it failed? If this report wasn't accepted, why wouldn't it be accepted? And you start unpacking some of those assumptions: oh well, if staff don't feel they've been talked to then they might not accept it; okay, well then it's really important that we have staff included. And you're saying you focus the pre-mortem on the review itself? Yes, 100% on the review.
Not on whatever you're reviewing. We've got another great question here from Christine, who's asking: do you use rapid peer review as you go as well? When we've done rapid reviews, there are two parts that come to mind in relation to that. We often do daily internal team stand-ups where we are testing the emerging narrative and the findings that we might want to test through the next bit of engagement.
And that's a really useful point to sense test with the other people on the review around what you think you might want to be focussing on. Because another point we didn't necessarily touch on before is that you might have a team who are doing engagement separately or different tasks and might not be hearing all the same conversations. So bringing that back together and having those daily conversations around, this is what I think I'm hearing, this is how it might apply to what you heard, and starting to unpick those things.
Yeah, I agree. So it's sort of the peer review process is essentially embedded into the doing of the review. So again, it's not that linear do it and then write the report, someone reviews the report.
It's an ongoing and iterative process that is embedded as you are undertaking the review. Yeah, and they're really critical, especially around that evidence to support the findings. Like, do we actually have the evidence to make this finding? Or if not, what are we trying to say? There'll be something in what you're saying, it's just how you frame it and how you can then go and get more evidence.
And it does come back to managing your own ego. If you're pretty certain on something, you have to be open to the fact that others in your team may disagree. And that really is a positive thing.
We've got a question from Angela: how can you implement learnings in a moving workforce? I think, particularly in a context of change at the moment, there are a lot of shifts in workforces, so that is a challenge in terms of implementation. But if there are rapid reviews going on right now, it's about making sure the review remains owned by the team rather than by a person, so that ultimately there is somebody involved who can pick it up or pass it on to ensure the recommendations can be implemented.
Yeah, that's right. And I think that comes back to the fact of making sure that your recommendations are really practical, really implementable, and able to be picked up by whoever is the next person in that seat. Also, while we talked about having a process that is quite iterative and moving, the report itself is static.
And so it is quite important to ensure that the report documents the evidence and the reasoning in a way that could be picked up and understood by someone who was not involved in the review process, and that the recommendations are very actionable. Those need to have been tested, so that the review isn't relying on the people who were involved in it; it can be picked up by a new or changed team.
And I think with a full review or a Royal Commission of Inquiry, etc., you're really looking for recommendations that will stand the test of time. So if the organisation changes, they still need to be relevant and still need to signpost the future state. For these rapid reviews, you can normally be a bit more ruthless and say: this specific thing needs to change.
You need to invest more in how you interact with your workforce because you have an issue where you keep changing your policies and no one knows what's happening. So that's just being really cognisant of the situation that you're in and that you're making recommendations into. And you can just be a bit more focused in a rapid review, just by nature of what they are, to sort of be more specific.
Yeah. And the recommendations often are quite operational and able to be implemented quite quickly. So it is often something that occurs pretty soon and often that is when the workforce hasn't changed too much.
Yes. I think there's also something around recognising that the people who commission a rapid review do so for a purpose: they want an outcome from it. So if there is a context of a changing, moving workforce, a good handover needs to be provided, because otherwise you might be part way through a rapid review when the team changes, and there's a risk of losing access to stakeholders, because you're no longer getting that strong ownership and that communication to the sector that you need to engage because this is important; without that, you won't be able to deliver the review.
So I think there's something around working with the team that commissioned it to make sure that the review can be delivered. We've got time for one more question. We've got a question from Memo that says, what are the potential unintended consequences of a rapid review? So I think one of the consequences that comes to mind is that because it's done quickly, you may have missed a strand of evidence and therefore not have the full picture.
So that comes back to using interlinked evidence to identify all the strands that might contribute, and to that triage of documents. The risk there is that you don't actually have the information needed to provide credible findings. But through the process that we've used in rapid reviews in the past, we've found that we've been able to get a lot of documentation.
I gave that example of having over 700 documents, we were able to triage them and that helped guide the direction that we then went. So I think that's one way to mitigate that. One thing we do find is for a lot of sectors, there's a lack of evaluations, a lack of reports.
And so one potential unintended consequence is that your rapid review becomes the only piece of evidence that people can rely on. So it can be over-referenced, given the methodology that went into it, just because it's the only thing that exists. Exactly.
And that comes back to being really clear about the limitations and what this can and can't be used towards. That wraps up our webinar. Thank you very much for joining today.
You might have questions specific to your organisation. Please reach out and we'll book a time to talk through those with you. We have a collection of webinars on our website and we'll be running more of these in the future.