Evaluation isn’t just about creating reports—it’s about creating impact. This session explores how commissioners and evaluators can lead a purposeful journey through each stage of the process, ensuring insights translate into real-world action. Learn how to structure engagement, navigate complexity, and ensure evaluations drive meaningful change beyond the final report.
This webinar is ideal for evaluators, commissioners, and decision-makers who want to enhance the value of evaluations by embedding engagement, insight, and action into every step of the journey.
Tēnā koutou katoa and welcome to this Allen + Clarke webinar on navigating the evaluation journey. I'm Marnie Carter and I'm a senior consultant here at Allen + Clarke, specialising in evaluation. I've been doing evaluation for coming up on 17 years now, and during that time I've learned quite a lot of lessons about which evaluations have had a real impact and effected change, and also about a few that have sat on the shelf and not really effected the change we wanted.
And I'm here today with my colleague Eleanor. Hi everyone and welcome. I'm Eleanor Jenkin, also a senior consultant at Allen + Clarke.
I have a slightly different professional background from Marnie. I've come to evaluation a little bit later in my career having worked previously as a lawyer and then in policy and advocacy in the not-for-profit and government worlds. So I've seen evaluation in action as an evaluator but I've also seen it as a commissioner and as an implementer and I'm going to be bringing some of those insights to the conversation we're having today.
So today I'm joining from beautiful Naarm, or Melbourne, and in Australia it's our practice to acknowledge the traditional custodians of the lands that we're on. Today I'm on the lands of the Wurundjeri people of the eastern Kulin Nation. I want to acknowledge them as the traditional custodians, and I also want to acknowledge the traditional custodians and owners of the country that those of you in Australia might be joining from today.
I'd like to pay my respects to all their elders past and present. Oh, kia ora, thanks Eleanor. I'd also like to acknowledge the mana whenua of Te Whanganui-a-Tara, where I'm standing today, and extend a warm welcome to all tangata whenua and tangata tiriti who are joining us. So speaking of people joining us, we've got almost 400 people registered for this webinar.
So that tells me that it's a real topic of interest and something worth discussing. For some of you it's probably your first time coming across Allen + Clarke and you might not be familiar with us. So just to let you know who we are: we are an Australasian-based company and we're dedicated to making a positive impact in communities throughout Aotearoa, Australia and the Pacific.
Our expertise spans evaluation, strategy, policy development and change management. But what motivates me, and I think what sets us apart, is our real commitment to making our evaluation findings drive real action. So we'll be sharing some lessons that Eleanor and I and our colleagues have learned through undertaking about 200 evaluations across the public and NGO sectors in the last few years, and we're really excited to be able to share our insights through these three webinars.
So today's session is all about how to take stakeholders on the journey of your evaluation to maximise the impact of that work. We're going to be delving into some practical things that you can do like more strategic engagement, translating evidence for maximum impact and embedding learning throughout the lifecycle of a project. Ultimately, Marnie, my hope is that you'll walk away from today's session with a few new strategies in your toolbox to help you turn your evaluative insight into real action.
Sounds good. So we are really keen to hear from you. We know that many of you are experienced people, and we're really keen to understand whether some of you have shared our pain and had a moment where what you thought was an excellent evaluation with really exciting findings ended up gathering dust on a shelf or buried in a shared drive.
So we want to know what it is about those situations where it didn't quite work, and we've got a poll for you. What do you think is the biggest barrier to bringing stakeholders on the evaluation journey to create real impact? Is it a lack of stakeholder buy-in? Maybe not knowing who the key stakeholders are? Is it over-consultation of those key stakeholders, or is it inadequate stakeholder engagement planning? Look, while that poll is running, I thought it would be worthwhile just to quickly recap why stakeholder engagement is so critical to keeping the evaluation report off the shelf, as Marnie described. Firstly, it demystifies the whole evaluation process for stakeholders. And secondly, it makes evaluation much more transparent, and you get the diverse insights that you're looking for throughout that evaluation process.
And lastly, and this is a word that we're going to come back to quite a lot in this webinar, it builds buy-in. So all of that means your stakeholders are likely to find the results much more credible, much more relevant and that then means they're more likely to actually pick them up and use them. Yeah, absolutely and we've got the poll results coming in now.
So I can see that there is a real key finding here. We've got 57% of people saying that lack of stakeholder buy-in is the main challenge for getting evaluation influence, and inadequate stakeholder engagement planning got about 39% of responses. So I've got to say that is not a surprise to me.
Eleanor, I'm sure it resonates with you as well. And I would say what this poll really hammers home is that when you're doing an evaluation, it is not enough to just have a technically excellent methodology. So I'm a trained evaluator and I've got to admit I tend to go straight to the method.
How are we going to do this? How are we going to make sure data is robust? But it's actually not enough. We've got to really plan for engagement or it doesn't create change. So okay, time for honesty.
If I reflect back, particularly on my early career days, if I picked a sample of maybe 10 evaluations that I've been involved with, I would say hand on heart that probably only about half of those led to substantial change. Yeah, look, those odds aren't great, are they Marnie? But I don't think you're alone in that honesty. I suspect that does kind of resonate.
It certainly resonates for me, and I'm sure with a lot of other people in the room. So the critical question then is, what happened to the other 50%? Why didn't they create the change that we were all looking for? Well, it probably goes back to the poll result: we actually didn't think enough about how to gain stakeholder buy-in. We'd done all the usual things.
We'd have a kickoff meeting with the client. We'd give regular progress updates. We'd do emerging-findings presentations.
But when I reflect back, I think all of that was informing. It wasn't really actually creating that buy-in. So you mentioned that you feel like that was an issue that you faced in your early career in particular.
Looking back on the journey you've been on, what have you learned to do differently in that time? What do you think can change the outcome for some of those evaluations? Yeah, so I think when we actually did manage to do an evaluation that created change, it was because we identified the key players from the get-go. And that was not just the people who the client said were the key players. And we were deliberately really strategic about bringing them on the evaluation journey.
So I think that's probably my first key insight: we've got to rethink how we approach evaluation design. It's not about developing the methodology and then having a little stakeholder engagement plan tacked on as an appendix.
We need to really plan for stakeholder engagement right from the start of our evaluations, and we need to make that front and centre. Yeah, and look, we recently finished an evaluation where, when I look back, I think the team did that really, really well.
Yeah, why was that? Well, it was an evaluation of a programme that was run through partnerships between a number of agencies. From memory, it was nearly 10 agencies. So hang on, you had to get buy-in from nearly 10 different agencies.
That sounds very logistically challenging. How did you manage that process of engagement with all those different groups? Look, I think because we realised from the outset that that was going to be a complexity in the evaluation, we really got on the front foot. So one of the first things that we did, and this was before we'd settled an evaluation plan or anything, was host a round table with all of those stakeholders.
And that round table gave us an opportunity to explain the evaluation, to explain how they could be involved, to discuss things like data provision and methodology. But critically, it was also a chance for us to understand from those stakeholders what their interest in the evaluation was, how they thought they could contribute and how they wanted to engage with it. So that's a really good point, because sometimes it can be a bit one-way, all about what we need from the evaluation.
And I think you've highlighted a really important point there about what the stakeholders actually want and how we can meet those needs. Yeah, and we actually followed up the round table with a short survey to each of those partners. And that included some questions about their reflections or any concerns that they might have about the evaluation. And the outcomes of those processes then informed the evaluation plan and the stakeholder management approach for the rest of the evaluation.
Well, I really like that. So that's quite meta and quite innovative. So you're using an evaluation method, a survey, during the evaluation design stage.
I really like that. And I think it probably also helps to overcome that barrier we often have in co-design sessions with stakeholders in the room, where it's the loud voices that get heard. So if you're going back to them with a survey, it gives an opportunity to make sure that you're actually canvassing all the different perspectives.
Yeah, that's right. And the other thing it did was just give them a little bit of an opportunity to digest and reflect how they wanted to engage. Yeah, you're not just posing a question and expecting all the answers right then and there.
Yeah, that's right. And look, the way that that informed our approach is that, for example, we conducted findings workshops with all of the stakeholders, but then we identified a core group of key stakeholders who were more intensively involved in the evaluation. And so they, for example, provided written comments on draft reports and outputs.
They were all invited to the routine evaluation meetings. So we were able to calibrate our engagement based on what we had learned from those stakeholders early on. Yeah, so that makes a lot of sense.
You're not just taking a blanket approach; there's always going to be a group of stakeholders that want much more active involvement and some that are pretty happy to sit back and just be informed occasionally. Yeah, that's right. And look, when I reflect on the result, or the impact, of that approach, we ended up with stakeholders who really understood the evaluation.
They were really invested in it, and that meant that they became champions for its findings and its recommendations. Yeah, and that's exactly what we want to actually get that influence and not have the report just sitting on the shelf. Yeah, that's absolutely right.
So look, we're popping up on your screen now some of our key tips around building buy-in. I think, you know, while it's a little bit of a given that a first step in all evaluations is going through a thorough stakeholder mapping process, I suppose for me one of the key takeaways is doing that in a way that's really thoughtful and deliberate and involves a two-way street with those stakeholders, so that you're really calibrating to what they need. Yeah, and I think for me, as you said, stakeholder mapping is very important, but it's also important to understand how the power structures work within that stakeholder ecosystem, and I'll talk a little bit more about that in just a second.
So let's now move on from the when to talk a bit more about the who and the how of really effective stakeholder engagement, particularly where you've got a view towards maximising that impact at the end of the day. The example I just gave was an evaluation where it was super clear from the outset who the key stakeholders were, but that obviously isn't always the case. And, you know, as we know, stakeholder mapping of one form or another is a routine part of evaluation.
So that raises the challenging question: if it's routine, why is it so easy to get wrong? Yeah, so, I mean, if I'm thinking back, I did a peer review on an evaluation recently where it looked like on paper the team had done everything right. They did a stakeholder map, as you said, and then they used that to identify an advisory group. They held workshops and they created some lovely-looking dashboards and infographics.
But the problem was, when we came back and looked six months after the evaluation, very few of the recommendations had actually been actioned. I feel like that's such a disappointing experience as an evaluator, and it must be for clients as well. Yeah, you put so much time into creating all these clearly evidence-based recommendations and they don't get taken up, and it's like, why? So when you went back and did that post-mortem, what did you identify as being the breakdown in the process? So the main thing was that there was no question there had been extensive stakeholder engagement, but it was just kind of doing the standard things, and we weren't actually engaging strategically.
So to illustrate that, I'll give an example of a recent evaluation of a social sector initiative where we were much more strategic about how we engaged with stakeholders. And so what was the difference in your approach between the two? Okay, so in both cases we did a stakeholder map, but in the one where it worked, we took it one step further and identified what we called the formal power structures. I've put the example up on a slide here, and in this case that was the ministers who had come up with the brilliant idea of the initiative, and a group of senior executives who had decision-making power and who were our client, having commissioned the evaluation.
So that was step one, but then we took it a step further and we also identified what we called the shadow systems. Sounds quite spooky. It does; I know it sounds a little bit like a spy thriller, but what I actually mean is those are the informal networks that influence where the change actually happens.
So in this case, it was the leaders of the social service delivery teams who were the real change agents and the ones that could actually action the recommendations from the evaluation. So how did identifying those shadow systems then kind of change your approach to engagement? Did you engage with those two, you know, formal and shadow systems differently? Yeah, we definitely did. So we obviously needed to work with our client, the formal power structures, but once we'd identified who the shadow systems were, we held a meeting and we really listened to their needs about what it was that they wanted from the evaluation, which was quite different from the client, what evidence they would find compelling and also how they wanted to package and receive the evaluation findings.
And that very much directed the approach that our evaluation took. Yeah, right. And so how did you have to do things differently for those two groups in practice? In practical terms, what did that mean? Yeah, so for the formal power holders, which was the senior executives, it was really clear that they wanted numbers and they wanted impact data.
So we designed our evaluation approach there using a quasi-experimental methodology, and that meant we could be like, hey, here's some hard data and some numbers. But that was not what resonated with the change agents on the ground. What resonated for them was stories.
So we used our qualitative data to create vignettes, which made the evaluation findings really come alive for them. And that was critical, because it meant they could actually see what the case for change was, why we'd developed the recommendations we had, and what difference it would make if they actually actioned them. And that was quite different to a recommendation from on high, coming from an evaluator they'd never met, via a government department that felt like their big boss.
Yeah, that's interesting. I must say though, in the back of my mind, one of the things I'm wondering is this obviously required a diversion of resources or time or changing your original methodology a bit. How did you get the client on board with sort of investing that time and energy in those shadow systems? Yeah, so it was a bit of a process.
We were in some ways lucky, although I'm not sure if that's the right term, in that this initiative had been evaluated a couple of times before, and there had not been action from those on the ground to implement the recommendations; actually, there'd been resistance. So they were kind of primed from the start to do things a little bit differently. And then the second thing we did was just take a slowly-slowly approach to getting them on board, by showing them what the points of resistance were and how we could overcome those by taking a deliberate approach, and really reassuring them that their need for numbers would be met, but that the change we wanted to create would not happen unless we took an approach that brought the change agents on the journey.
And they came on board, to their credit. Yeah, fantastic. Well done.
It sounds like you navigated a tricky path through that one to the right outcome. Yeah, and you know what? I'm looking back and making it sound easy. It was not at the time.
There was resistance kind of from both groups actually, but the other thing that worked really well was getting them in the same room and helping them to understand the different perspectives and we got there in the end. Yeah, fantastic. And look, I think that example highlights the importance of engaging with the right stakeholders, but also thinking really deeply about how you engage them and engaging with people with a deliberate view to generating buy-in in the end.
And I've seen that work and I've seen it backfire, where we've identified the right stakeholders, but by trying to engage them in a way that just doesn't work for them, we've ended up actually alienating them and maybe building a bit of resistance to the evaluation and its outcome. Oh, I'm cringing as I hear that, because there is nothing worse as an evaluator than rocking up with good intentions and ending up alienating the very people whose voices you want to hear and centre in the evaluation. So thinking about that experience you've mentioned, is there any particular situation that we really need to keep a radar out for? Yeah, look, I think it's a particular risk when you're trying to engage stakeholder cohorts who are simultaneously marginalised and not necessarily listened to, or don't feel very well listened to, and at the same time over-consulted.
And in the Australian context, there are two cohorts that immediately spring to mind: Aboriginal and/or Torres Strait Islander people and communities, and people with lived experience. Yeah, and look, I can't say I find that surprising; it's very much the case in New Zealand too.
So Māori, Pacific peoples and those with lived experience are often over-consulted, perhaps under-listened to, and feel like we're constantly asking for participation in evaluation or consultation or seeking views. So what do we do? Well, look, the first thing to say about it is that in some ways it's a product of progress, right? As evaluators and commissioners are getting better at prioritising marginalised voices in evaluations, the demands on a lot of those groups are increasing. And so we're hearing from some of them that they're just not properly resourced and able to focus on all of the things they're being asked to do at the same time.
Okay, so question then, I mean, how do you consult people who are already feeling over-consulted? It's a really good question. And look, I don't think there's a sort of simple answer to it. Oh, Eleanor, you're not going to give us the shining silver bullet.
The good news I've got is that we have tried a few strategies in recent evaluations, which I think have helped. And I'm going to talk through a couple of those. The first is to properly plan ahead.
And I know that sounds really simple, but it isn't necessarily always done well. We usually have a pretty good sense when we're coming into an evaluation about which stakeholders are likely to be hard to reach. So what I found is that if you build in a really early touchpoint with them to understand their interest in the work and how they would prefer to engage, you can really get ahead of that issue and design your approach with that in mind.
I think the second strategy that's worth keeping in mind is thinking about whether there are other ways to gather the data you're looking for. So if a group's over-consulted, there's always a really good chance that the information that you're looking for can be gleaned from an existing source. So that could be from earlier consultations or from existing literature.
Often you can find some of the insights you need without having to go out and do that primary data collection. Yeah, and I think those two points go together. So if you're doing that early touchpoint and you're getting a back-off message, then you might want to look for what else is out there that you can draw on.
Yeah, that's right. And look, and I think the last sort of strategy to keep in mind is sometimes you want to test your and your client's assumption about the need to engage with some of those harder to reach cohorts at all. And I know that sounds a little bit controversial, but if you sit back and ask yourself, is the input essential or is this more about being seen to consult, that can give you a steer on the right way to proceed.
If it's really more about being seen to consult, then sometimes if you just adopt a position of invite and inform, so invite people to participate and inform them of the evaluation and the outcomes, it's better to leave it at that. Yeah, and look, I get it, it sounds sensible, but how do we make that work in practice? So we recently worked on an evaluation where the client was really keen to do the right thing. They wanted to involve Aboriginal and Torres Strait Islander organisations at multiple stages along the evaluation journey.
So when we reached out to these organisations, the message that we got back from them could not have been clearer. The subject matter was not a key policy priority for them. They felt they were being over-consulted by the agency and by government more generally.
They didn't feel properly resourced to respond to all of those requests for engagement. And the type of engagement we were seeking was just too intensive and demanding. Yeah, and it's quite ironic, isn't it? Because that sounds like a government department earnestly having a great intention to involve and engage; but if the group is not interested, how do you navigate that situation? What's the path forward there? So in the end, we had to pull a few different threads together.
We drew on some existing literature and data. And then we also turned to standing reference groups. We were lucky that we were able to draw on input from a few of those reference groups and use them as forums to get high-level input in their meetings.
And that fitted with the feedback we'd received from these organisations, that they weren't going to provide written submissions or sit for interviews or focus groups. The other thing that we did was cast the stakeholder net a little bit wider. We invited organisations beyond the usual suspects who we thought might have some relevant insights.
And then after all of that, we stepped away. So we'd given organisations the opportunity and we sort of left it at that. Yeah, and look, that sounds really sensible.
So it's not that we should shy away from getting relevant people's perspectives and really making an effort to do so. But if we're hearing they're happy not to contribute, then we've got to leave it at that. So on the screen, you can see we've put a few tips for engaging with hard-to-reach groups.
And I think for me, the example you just gave, Eleanor, really highlighted that having an early touchpoint is vital. And then you can plan your engagement to be either quite intensive or pretty light touch. Yeah, that's right.
And look, for me, it's about that tailored approach. And actually, sometimes the way I think about it is you need to, on occasion, take yourself out of the evaluator's shoes where you're thinking about what data do I need and what's the easiest way to get it, and put yourself in the stakeholder's shoes where you're thinking, you know, what's their perspective on it and how do you accommodate what they need and prefer as well. Yeah, so really tailoring.
That's the key message, I think. So look, we've canvassed the when, we've canvassed a little bit of the who and the how. I think now let's shift to exploring in a bit more depth a couple of specific strategies that you can use to make your stakeholder engagement that bit more strategic.
I like the sound of that. So we're now getting into the nuts and bolts. So the first of those strategies really focusses on the evaluator's role as an evidence translator.
I think those who've been on the front lines of evaluation will know you're often engaging with stakeholders who value and need different types of evidence, and that evidence need can really change through the life of an evaluation based on a whole range of factors. So the question is, how can you be deliberate about meeting these needs so that the evaluation gets the right insights to the right stakeholders in the right way and at the right time? Well, I'm glad you mentioned it, because actually I do have something that has worked really well and that I find useful, and that's using an evidence journey map. What I like about it is that it's a tool, and you can get a bit creative in how you present it; it visually represents the flow of evidence throughout the course of the evaluation, and it helps you really zero in on when and how that evidence is going to be used by different groups of stakeholders.
So as you go through the evaluation journey, it marks the key decision points where your evaluation evidence can influence stakeholder actions. Great, and so what does that look like in practice? Okay, well, we have got an example up on the screen here. You can do it as quite a straight and linear timeline, but I like to do it as a little journey down the evaluation path.
So here we identified that there were three key decision points at which different groups of stakeholders needed different evidence. You can see the first decision point was to inform annual planning. The second decision point was when a programme re-budgeting process took place, and the third point was an important summit with the ministers responsible for the programme.
And there were quite different evidence needs at each of those points. As an example, in the annual planning process there was a need for process evaluation evidence to understand what had worked well and what needed to change, and that could inform the annual planning. Whereas if we look at decision point three, the meeting with ministers, it was important there to have as much hard data as we could on programme effectiveness.
And so I can imagine that this is the sort of thing that you could layer those shadow systems onto as well, and, depending how complex you wanted to get, really layer in those different stakeholder cohorts and those points that you're imagining. Yeah, you absolutely can. And I think an important part of it is to prepare for the worst-case scenario.
So as we went down the evidence journey path, we really thought about what are the potential blockages to the evidence being used. So a really important one here was that third decision point, the meeting with ministers, where they were essentially going to say, is this programme going to continue or is it not? But we know that ministers are really busy and they're time poor. So that was a potential blockage to them actually being able to use and understand the evaluation data that we were providing.
So we needed to come up with a strategy to translate evidence for that stakeholder group in a way that they could really access. And in this case, we identified that all they were realistically going to do was look at a one-pager. So we created a series of dashboards that zeroed in on the key information on programme effectiveness.
They could easily digest it and it was in a format that they could use to inform decision making. Marnie, I'm interested to know, at what stage of the evaluation would you think about developing this sort of evidence journey? So we did this one as part of the evaluation planning process. And the reason that we did that is because it identifies and informs our methodology.
So when we knew what type of information was going to be needed and when, we could plan the evaluation methodology so we got those insights and delivered them at the time they were needed. So we're putting up a few little tips here.
These are our tips for evidence translation. And look, as I've just indicated, I'm a real proponent of packaging evidence to meet the needs of different audiences. Yeah, and I think when you do that really effectively, that process of packaging evidence for different stakeholders, a flow-on effect is that you're often generating shared understandings across those audiences.
And that can be really important as well for supporting that impact down the track. Yeah, I couldn't agree more, to be honest. So let's now think about another strategy that you can use to turn that engagement into actual evaluation impact.
And for me, it's really important that we think about how we can embed learning throughout the evaluation journey, not just turning up at the end to say, ta-da, here's our presentation and report, now off you go and implement it. If we've embedded learning throughout the journey, then I think we have a much higher chance of our findings actually being taken up and used. So if we think about that in a little more detail, I would say that one obvious strategy we can use, which I mention quite often, is to include a cross-section of key stakeholders in collaboration on or co-design of the evaluation.
But if we're thinking a bit more strategically, what else can we do to build learning into the journey? Yeah, so we've popped up on this slide a few steps for embedding learning across the lifecycle of the evaluation. And we're partway through a multi-year evaluation at the moment where we've drawn on most of these, not all of them, but most of them. So we've been commissioned by the funder of a programme, but we're also working really closely with the organisations who are delivering it.
And from the design phase, we've worked with our client to build in learning opportunities for those organisations doing the programme delivery. Okay, so now forgive me if I'm being very stereotypical, but I would imagine that maybe the organisations that are doing the delivering might be more practical on the ground types, and they might not necessarily know a lot about evaluation. So how did you go about embedding their learning into the evaluation journey? So I will challenge the stereotype a little bit.
I knew I probably shouldn't have gone there. Look, in terms of their evaluation maturity, it's a bit of a mixed bag across that group of stakeholders. But what I will say is pretty consistent across them, and I think has been critical to the potential success of this approach, is that they're all enthusiastic and keen to learn more about evaluation.
It's a starting point, right? Yeah. And so what we did from the outset was develop a learning plan, which included co-developing the theory of change, the logic model, the outcome mapping, as well as collaboratively developing the tools. But we also worked with each grantee to develop a programme logic for their funded programme.
And the thinking behind that was that it was really an opportunity for us to learn more, but also for them to have a bit of coaching, frankly, that would be of value to them as they went through that programme. And then we also held periodic roundtables. Now, roundtables are hardly an earth-shattering approach.
But what was interesting about these ones is that they weren't just for sensemaking, although that was part of it, but we've also deliberately built in an opportunity in those roundtables to reflect critically and collectively on the evaluation process itself. Okay. So that obviously helps the evaluation, but it also has a benefit for that reciprocity in supporting their learning.
Yeah, that's right. And I think, you know, again, none of these on their own is earth-shattering stuff, but it's an example of what being really deliberate and mindful about building learning in across the life cycle of an evaluation can look like. And we're still in the early days, but we're already noticing a few benefits from this approach.
Yeah. What do you think? Well, you mentioned a magic word before, Marnie, which is reciprocity. It introduces reciprocity into the evaluation.
So we're not just collecting data in an extractive way, but we're bringing value to the stakeholders. I think the second thing it does is it builds their investment in the process and their understanding of the evaluation. And, you know, it also just makes our work better, right? It's more informed by the nuance and the realities that stakeholders are experiencing.
And I just think all of that bodes much better for that evaluation being picked up and used every day. Yeah, totally. And I mean, I'm going to give a suggestion that might be a little bit controversial, but we've found that actually sometimes you can literally bring programme personnel into the evaluation team.
So it's often when we've done that, it's been when we're doing an evaluation for quite a small organisation. So obviously they want the evaluation to show the impact, but they also want to upskill their own teams so they can understand and use evaluation results. So there have been some occasions when we've invited someone from the client team to actually join us as part of the evaluation team and do the data collection and the analysis with us.
So I'm going to confess I've never done that, and my instinct is to feel a little bit nervous about it. How has it worked from your perspective when you've done it? So I would say mostly well. It can be a real win in that we get to tap into their insider knowledge, and they can guide us: you need to interview these people, or look at these documents or data.
And it also helps them, because it ensures that we're focussing on the things that are relevant to their needs, and they know what we've done and why we've done it, so it's likely to create more impact. However, I will sound a word of caution: it doesn't always work as you might expect. So you're telling me there might be some pitfalls as well? There might be.
So I'll just tell you about a health sector evaluation that we did recently. That one was bigger, and the client was a government department. We'd had some good success with small NGOs, so we thought, hey, we'll do that again. And so we invited a member of the government department client into the team, and we wanted to take them with us when we did fieldwork. But we put the invite out to the health services that we wanted to visit, and we got really low engagement: lots of declines saying, no thank you, we don't want to participate.
Yeah, okay, look, I think I have a sense of where you may be going with this, but I'm going to ask the question anyway. Why the low uptake? Okay, well, perhaps unsurprisingly, we found that for the health services we wanted to visit, the thought of a government department official coming and evaluating them was just a big red flashing warning light, and they just didn't want a bar of it. And look, we arguably should have thought of that beforehand, but it did reinforce that it's really important to embed learning in an appropriate way.
So in this case we stepped back, we reframed, and instead we created joint learning sessions where we got the health service providers and the government officials together at key points in the evaluation and helped them learn from and broker each other's perspectives. So still learning, but not, you know, a government official coming along with a clipboard to give them a pass or a fail. Yeah, and I love the pivot to joint learning sessions, because I think in that situation it would be very easy to just abandon the learning ambition full stop and say, well, that person's not going to be part of the team anymore. But the pivot you're describing preserves the safe environment, so that stakeholders want to participate, but still achieves a learning outcome for the programme staff.
Yeah, it does, and I think that goes back to that tailoring. You can definitely embed learning, but it's got to be in a way that works for all the parties. Yeah, and look, I think this topic of learning is one that's not going anywhere.
We're really finding a trend here in Australia: there's an expectation from evaluation commissioners, when they commission external providers like Allen + Clarke, that the evaluation approach is going to explicitly build in in-house capability uplift. And the Australian Centre for Evaluation, which is an Australian government agency, recently released a report on the state of evaluation in the Australian government, and that actually showed that 22% of all Australian government commissioned evaluations now adopt a hybrid internal-external model. That's a lot, so about one in five. Yeah, and so I think it shows that government is looking for a value-add for capability. That's grown out of that increasing commitment to embedding evaluation and evaluative thinking, but it also shows the value in learning: the commitment to upskilling is really critical, because it makes it more likely that evidence will be shared, and it makes it more likely that decision-making and programme management in a day-to-day sense will be evidence-based.
Yeah, and look, that makes a lot of sense to me. So when we're looking back and reflecting on what we've talked about today, we've explored a few things about how we can intentionally build stakeholder buy-in for evaluations right from the start. So, Eleanor, out of everything we've discussed today, what would you say your key takeaway would be? Look, I think for me it's the importance of not just doing stakeholder planning as a routine kind of set-and-forget. It's about thinking deeply, really early on, to develop a tailored approach to those key stakeholders. And when you do that really well, and you think about how they want to engage and what evidence is compelling to them, it can really create champions for that evaluation at the end.
What about you? Well, I've got to say that I'm someone who just likes tools, so for me it's drawing on those things that we can use to take a standard approach and make it a little bit more strategic. So I like stakeholder mapping, but the extended version where we map both formal power structures and shadow systems. And then I also really like the evidence journey map tool that we talked about before, because having a really visual output that shows who the stakeholders are, when they need information, and how they're actually going to use evaluation findings helps us keep our evaluation work tightly tailored to their needs. And we have found that doing that increases the likelihood that all our great evaluation work and our recommendations will actually be taken up and implemented. And that's the main game at the end of the day, isn't it? Oh, it totally is. And I think that's the main message we've been talking about today: if you really plan for how you'll take your stakeholders on the journey, then you're much more likely to get the impact that we're seeking for our evaluations.
So we're going to go to questions. We have a question here from Bridget, and Bridget is asking: when consulting with indigenous communities and communities with lived experience, how have you found success with incorporating indigenous approaches to evaluation rather than western evaluation methods? So I can have a little go at that first. You go first, Marnie, and then I'll jump in.
So look, in the Aotearoa New Zealand context we really take the approach of nothing about us without us. So if we are in a situation or evaluation where we need to engage with indigenous communities, our very first step is to ensure that we have a team that includes members who either whakapapa Māori or have Pacific experience and ancestry, and that means that we keep ourselves safe and we keep those that we're engaging with safe. We also ensure that people with that key knowledge and experience are in leadership positions, so it's not us Pākehā, in the Aotearoa New Zealand context, kind of barging in with our ideas and ways of doing things; we are looking to our leaders with lived experience, or those that bring an indigenous perspective, to be our guides as to how we approach that engagement.
How about in Australia? Look, so all of that, and just a vote for the importance of ensuring those measures and approaches are taken. I would say, however, that this is an issue we talk about a lot, and I think one of the challenges is the need for commissioning agencies, when they're thinking about time frames and budgets and requests for quotes and various things, to appreciate the investment that's required to really properly embed First Nations ways of knowing, being and doing into an evaluation approach. There's a lot that we can do, but it needs to be a partnership, I think, between everyone involved in an evaluation to do that in a really deep way. And I know that in that report from the Australian Centre for Evaluation that I mentioned earlier, that was actually identified as the key focus for government to make improvements in coming years, so I think that was really promising recognition.
Yeah, and I completely agree. So looking to another question: one from Pam that's come in is, how do we do meaningful engagement when the minister has already decided on a preferred option for change? Bit of a spicy one there. I'll jump in on that one if you like. I would venture to say there is not an evaluator on this blue earth who has not been in the position of having to do consultation where, you know, the outcome of the policy or programme decision is already known.
I think there are two ways that you can approach the framing of that sort of consultation and engagement. The first is about understanding where there is space for influence. I think a lot of the time, even if the minister has a sense of what the direction is going to be, there can be a lot of space for how the outcome is shaped; it might be around how implementation is conducted, it might be around the policy detail. So being clear about where there is room for influence can be an important aspect of this sort of challenge. Another framing that I sometimes bring into those situations is just recognising the importance of people having a say. I can think of a process that we went through not long ago, in a regulation context rather than an evaluation context, where we were doing consultation and, for various reasons, everyone knew what the outcome was going to be.
It can feel very tick-boxy and demoralising when that's the case. Yeah, and what I found interesting about it was, you know, our position was to say to the stakeholders: look, ultimately the decision is a decision for government, but we are here in good faith to listen to you, to understand your perspective and to report back to government. And I was sort of surprised that after the session a couple of them came up to us and said thank you, we really did feel like you listened. Yeah, and probably that honesty right from the start made a difference, so you're not trying to pretend this is something other than what it is, but you are here to listen.
So the next question is: how do we manage evaluation priorities in the context of government priorities? That's a question from Steve. That's quite a broad question, Steve, but I'll start by kicking off with a thought on how we do that in the context of bringing people on the journey of evaluation. When there's a misalignment between evaluation priorities and government priorities, I think what we have an obligation to do is take both parties on the journey to bring them into line as much as we can. Part of that is some of the tools we just talked about: identifying who the key players are, what they need from the evaluation, what their priorities are, and any barriers; and then using some of those strategies, like planning for how we can get them the data they need to inform their particular priority, and trying to slowly navigate any differences. By engaging well, engaging often and engaging strategically, I think we can do that.
Eleanor, any thoughts from you? No, no, I think you've answered that one beautifully, Marnie. Oh, well, thank you very much. So we've got a question from Helen: how practical is it to think that you can evaluate your own project or your own programme? Well, I think that speaks to an inherent tension in evaluation, between the value of insight, understanding and familiarity, and the value of independence; and I think those two things are in tension to some degree in most evaluations.
The appropriate balance, I think, is really very context-driven, and so there are certainly situations in which people are able to evaluate their own initiatives, to an extent. But I've certainly seen circumstances where the closeness to the programme and the subject matter can cloud the objectivity around it and you can get a degree of defensiveness, and in those circumstances having an independent voice come in is really, really important. Yeah, well, not to be flippant, but if you're evaluating your own programme, I guess you're your own stakeholder, so it's easy enough to bring you on the journey. What I will say, though, other than agreeing with you, Eleanor, is that I do always think some evaluation is better than no evaluation. So if there's not the budget to undertake an external or independent evaluation, then absolutely it is better to do an internal evaluation to get some understanding of what's working well and what's not, so at least you've got some information, which is better than nothing.
We might have time for one or two more, so here's a curly one: what if commissioners don't like the report and will not release it because the findings are undesirable? I think probably most of us have been in a situation like that. That one was from Anne Marie, so I can have a quick crack at it.
So my first response is that, ideally, if you have undertaken a process like the one we've just talked about, where you are bringing stakeholders, including commissioners, on the journey, it makes it much less likely that you will get to the end of the process and produce a report where they don't like the findings, because they will have been involved the whole way along, and they will understand why those findings are what they are, and hopefully they're more likely to release it. The second point, though, is that I do think, as evaluators, when we're genuinely engaging and listening to people, it's often about messaging. Commissioners really don't have a leg to stand on if they're wanting to question the findings themselves, but when it comes to the way those findings are presented, we do have an opportunity to get the messaging right: rather than saying this is a weakness of the programme, we can say this is an opportunity to strengthen these aspects. Yeah, and look, while I agree with that, Marnie, I confess I'm maybe not as optimistic as you are. I've seen a number of occasions where government has opted not to publish a report for entirely separate reasons, like the findings not aligning with a political moment or some other process that's going on; so it's not so much about bringing them along on the journey as about how that interacts with external factors. And in those circumstances, what I think can be helpful is that you often have a sense of when it's potentially a sensitive topic, and what we've done, I think to relatively good effect, is build into the reporting process from the outset an intention to develop publicly facing materials. They might be stakeholder summaries, or parts of the executive summary, or whatever it might be. And what I've often found is that where the commissioner has agreed at the outset to the value of feeding back to stakeholders and making sure they know the outcomes, they're generally more likely to be willing to put something out, even if not the whole thing.
Well, and generally stakeholders will know that, so there's been a promise made, and it's not good politics to not follow through on it. So unfortunately we have run out of time now to answer all the questions, but we're really happy to catch up with you at any time to answer any more questions you have and discuss any potential ideas that might support you further. Well, that's us. Thank you very much for joining us today, and we will see you at the next one.
Thanks everyone.