Published on 5 May 2024

Mastering Measurement – What to measure, how and why

45 minute watch
  • Dr Brendan Stevenson, Quantitative Analytics Lead
  • Dr Fiona Scott-Melton, Performance + Impact Lead (NZ)
  • Dr Rebecca Gray, Senior Consultant
  • Jeremy Markham, Senior Consultant

Everyone knows measurement is important, but how do you ensure you’re getting it right?

This is a challenge most organisations face as they attempt to balance the demands of measurement: investment, time, accuracy, and much more. Striking this balance will give you a clear understanding of internal performance and external impact, aiding decision-making at all levels.

Join the discussion

Drawing on years of experience supporting clients from businesses to governments, our expert panel (Fiona Scott-Melton, Brendan Stevenson, Jeremy Markham and Rebecca Gray) discuss the gnarly side of getting measures right by answering tough questions such as:

  • How do you ensure your measures will fairly reflect progress and generate quality insights?
  • What can you do to avoid them being overly burdensome to collect?
  • How can you avoid measures unintentionally driving the wrong behaviours?
  • How do you maintain consistency of measurement while also reflecting changing views about what is valuable?

Who should watch

If you’re involved in any aspect of measurement, from data collection to decision-making, this session is for you. Please note that a basic understanding of measurement fundamentals is assumed, so it may not be suitable for beginners.

Webinar Transcript


Kia ora koutou. Thank you so much for joining us today as we talk about mastering measurement. What to measure, how and why. 

 

My name is Rebecca Gray, I'm one of the senior consultants at Allen + Clarke. I've been working here on some really interesting research and evaluation projects. Now, just over a third of you joining us today are new; you might not have come to our webinars before, so you might not be familiar with Allen + Clarke. 

 

So just briefly, we're an Australasian consultancy. We're based in Wellington and also Melbourne and also working in the Pacific. We focus on areas including strategy, research, evaluation, change management, programme delivery, policy, just to name a few. 

 

And we give a damn about empowering people to overcome society's big challenges. So that's why we regularly run these free webinars and we find ways to provide expert advice when we can. We've had nearly 500 people sign up to join us today so this is a topic of great interest at the moment. 

 

Now we're also assuming most of you are probably already grappling with various measurement-related challenges. So we won't start with a 101; we'll be diving straight into some ideas about challenges and considerations for your practice. I'll be chairing the session, and I'll be forwarding on some of your questions to our panel of expert teammates, so make them good questions please. My teammates here all bring years of experience in measurement and their own unique angles to this topic. 

 

So as we go, if you've got any questions, pop them in the chat and we'll check it out. Here's Jeremy. Yeah, hi everybody, I'm Jeremy Markham, I'm a senior consultant here at Allen + Clarke. 

 

I've got pretty diverse experience across the private and public sectors: I ran a sales team for a commercial organisation inside government, I helped to produce and design executive-level reporting on organisational performance, and I've also done quite a lot of work around helping projects and programmes demonstrate the value that they're delivering. I've been at Allen + Clarke for a couple of years and continue to do some good work around benefits management and measurement frameworks, and I've also had a really good opportunity to help some different places look at how their data teams were set up, so that's pretty good. So that's me. 

 

Kia ora, ko Fiona Scott-Melton ahau. Before joining Allen + Clarke I worked for the Office of the Auditor General as a senior performance auditor, where I led audits that focused on whether public agencies knew whether they were achieving their desired outcomes or not. That's kind of how I got into the whole outcomes and measurement thing. 

 

Since then I've been at Allen + Clarke as a senior consultant for the past six years and have worked in both our Australian office and our New Zealand office. My background here at Allen + Clarke includes developing outcomes frameworks, intervention logics and performance measurement frameworks. Tēnā koutou, ko Brendan Stevenson ahau. 

 

I went to Massey University, where I was and still am involved in a longitudinal study, then to Te Hiringa Hauora, the Health Promotion Agency, and then came to Allen + Clarke about four years ago, starting just as COVID was hitting, so I'm 99.9% sure that it was just a coincidence. I'm a mixed-methods researcher with a strong stats background. Awesome, so I'm glad to have this team here to talk with you today, and as I mentioned, I'll be throwing all the most annoying questions you can think of at these guys as and when you think of them. 

 

But first we're actually going to start with a little measurement gathering ourselves. We've got a poll that's about to come up. What is your main measurement challenge at the moment? We've got constructing measures, unintended consequences, data collection, analysis, reporting, and budget. 

 

What do we think is going to come up guys? Click away, click away. I love these unintended consequences. Yeah, they're pretty fun. 

 

I'm not seeing anything coming up yet, so hopefully someone's going to start clicking. I'm thinking maybe data collection. You're looking right so far. 

 

Yeah, I'd probably go data collection as well for me personally. But constructing good measures can be pretty challenging. Yeah, it's actually a tight race between constructing measures and data collection. 

 

Reporting and unintended consequences are kind of competing for the bronze medal here. Okay, let's get into it. Big topic, don't have time to address every single question, but we've picked a few to work through. 

 

So let's start with: where do I begin? Yeah, so thanks Rebecca. You said before this isn't a 101, so I don't want to dwell on this too much. But just to get us started, I definitely think it's worth reinforcing the importance of starting well: beginning with the end in mind, having a clear picture about who's going to be using the data, what's the point, who's going to be better off as a result. 

 

So you'll see on the screen there's some questions there, including like, do we really need to do this? And when I ask that question, I'm not just saying the answer shouldn't be, oh, because it's our framework, or it's because that's what we're told to do, or that's what you're meant to do. It's actually asking, well, what's the why? Because when you're wanting to go through the process, you're going to want to get buy-in from people to it, people who need to participate, and it needs to be stronger than this is just what we have to do. You really want to create that hook. 

 

And sometimes thinking about what would actually happen if we didn't do any measurement here can help give you clarity around that. So there are some great questions up here: do we really need this? Who's your audience? How are they going to be using the data? Who's going to be better off? What's going to constitute good enough? And that's really important from a feasibility perspective. But there are also things you've got to think about, like what do you know about data availability, data maturity, even data sovereignty? Yeah, which is, who owns the data? I mean, that just raises the issue of data sovereignty generally, and Māori data sovereignty especially. 

 

Understanding measurement from this perspective, and kind of unpacking it, can prevent some of these unintended consequences that can occur. There are some very clever people speaking already on this, so I'd encourage people to look them up, especially Te Mana Raraunga. So getting clear about this as early as possible will help shape your decisions. 

 

Okay, yeah, totally. And another question we've had is about the current political environment impacting thinking about measurement. We understand the government puts a strong emphasis on transparency, accountability, efficiency within public sector operations, but also within anything else that's contracted to them. 

 

This isn't new. There's a lot of ways to show impact by measurement, right? We're hearing recently a greater emphasis on certain types of measurement, particularly those focused on showing return on investment. Fiona's going to talk a bit about expectations for measurement of performance and some of those different approaches. 

 

Yeah, thanks, Rebecca. So just to start off with, there's a range of organisations, such as the New Zealand Treasury, the Victorian Department of Treasury and Finance in Australia, and the various Offices of the Auditor General, that all provide great guidance on performance measurement. 

 

So I suggest you actually have a look. They're there for everybody, that information. But in New Zealand, I'd say the guidance from the Office of the Auditor General is particularly useful as it provides both guidance and highlights some good examples from recent agency reporting of what good looks like. 

 

With that in mind, I want to highlight some key aspects of the expectations that I think are important starting points. So the first one straight off is, going back to Jeremy's point really, it's just planning measures. Public organisations need to report on both their operations and their strategic outcomes. 

 

From my point of view, when you think about that, it's a bit like an orange. The inside of the orange are the operational measures. These are inward looking. 

 

You may want to measure things like timeliness, quality, quantity, or efficiency. One of the points that the Office of the Auditor General makes is that too many public agencies just focus on activity levels. And that just doesn't tell you very much. 

 

The other part of the orange for me is your strategic outcomes. So that's outward looking. So it's the difference that you're trying to make. 

 

And this is the other part of the performance story that organisations need to tell. But we all know that it's really hard to do it well. It really doesn't matter whether you're a public agency or NGO. 

 

Increasingly, it's important that you can demonstrate the difference your work or programme is making. It's crucial for continued funding. One of the most important elements is the impact on our communities. 

 

Just on that, before you go on any further, I'm just kind of curious, actually. So did the Auditor General say anything about any reason why they thought that organisations were strong or had put a big focus on the efficiency kind of measures, but a little bit less on the impact? They didn't even really do efficiency. They just actually did activity. 

 

Yeah, okay. So the really bad examples just... When we say activity, we're talking about the number of footsteps, or number of visitors to somewhere, or the number of people who participated in a programme. They gave no indication of the value even of that programme. 

 

So the problem is that when you just get the number of something, all you know is that X number of people came through. They don't tell you very much more. And so what have they said about designing more robust measures to actually show that value? Yeah, so there's a few different tips that they've actually given in the round. 

 

So here's just a few that you might want to think about. Plan for results. So identify what difference you expect to make in the short, medium, and long term. 

 

Typically, those of us in the business call it developing a logic map. The next thing you might want to do is think about what's in your sphere of influence. So what are the things that you've got the most control over? Try and focus on measuring those, not all those other extraneous, exciting things, maybe, that you don't have much control over. 

 

Because that can show your own impact, right? Yeah, exactly. So you want to show your own impact. The measures need to remain relevant over time. 

 

Because what you want to be able to do is actually show a trend so that you can show whether you're progressing in the right direction or not. And I hate to say it, but it takes several data points. So two or three data points doesn't give you a trend, you need it over time. 

 

And then they need to be free of perverse incentives. They should contribute to telling a balanced story. They should be nuanced enough to make up the whole picture. 

 

And then I'd say, finally, make sure that you use a combination of quantitative and qualitative. Some of what I've been hearing now is a real shift back to quantitative. And quant is great for telling you quite a bit about what's happening across a large population. 

 

So it's great for that. But it doesn't tell you how something really worked, or what the factors were that made it so successful. It doesn't tell you those really heartwarming stories that actually make you think, oh yeah, that was a great programme, it's really made a difference. 

 

That's what it doesn't do. And using two together, you can avoid spurious correlations. You can stop things going wrong. 

 

And honestly, talking to you, Fiona, made me think that actually the Office of the Auditor General's recommendations can apply to so many different organisations trying to map what they're up to. Yeah, yeah. I really liked the example. 

 

They had a really nice example there. It was really simple. And it just kind of showed you kind of like what good would look like, essentially. 

 

A couple of measurements. And then they actually had a bit of a, you know, just a little bit of a narrative around it. And then just a bit of trend information. 

 

It wasn't necessarily really complex. In fact, I'd say it goes back to: better to measure less, but do it well. Yeah. 

 

Going back to the longitudinal measures, of course, over time there is always a risk that the measure will end up not measuring what it was meant to as circumstances change. It won't actually measure it anymore. So you need to adapt. 

 

And sometimes you need a transition plan, with a different measure coming in over the top, so there's a period where you've got both. And you can continue that trend. 

 

But otherwise, you're measuring nonsense eventually. Yeah, I think that's a really important point. And I really liked what you were saying, Fiona, bringing in that point about keeping it small, because the reality is that there are so many more things we could measure than we practically can with the resources we have. 

 

So we've got to be pretty smart in terms of deciding what we're going to do. It's better to do a few things well than trying to be too ambitious. Yeah. 

 

And I think some of the worst I've seen is actually when people are measuring a whole lot of stuff, but they don't know why. Right. I've kind of flicked to this output measures thing, but I feel like we're already slightly talking about it. 

 

Yeah, I think so. So that's fine. Was there anything else we need to say about these output measures, though, about telling that more nuanced story? I don't think so. 

 

I think the main things are... Well, when you look... So this was about investigations. Rather than just saying the number, one of the more nuanced things you could do is talk about the size. Typically, investigations or audits, whatever they are, they aren't all of a uniform size. 

 

So start breaking them up into maybe big ones, medium ones, small ones, whatever works that makes sense for you. Maybe add something about the complexity or the average cost or type. Maybe measuring coverage in particular, say, within risk areas or an industry or a population group or measuring investigations at different points through process. 

 

Again: open, resolved with the customer, resulting in a conviction, so that you've got a bit more of what it led to. You could use average investigation time, but I think there's a problem with that. Again, I'm not always sure. 

 

The problem with averages is they get distorted so easily. That would be my thing. So yeah, you could do that, but it's wide open to distortion. 
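
To make that point concrete, here's a minimal illustrative sketch with made-up numbers (not from the webinar): one long-running case drags the average well away from the typical experience, which is why a median, or a breakdown by case size, is often more honest.

    # Hypothetical investigation durations in days: most are quick, one drags on.
    durations = [12, 14, 15, 16, 18, 20, 240]

    mean = sum(durations) / len(durations)
    median = sorted(durations)[len(durations) // 2]

    print(f"mean   = {mean:.0f} days")   # about 48 days, pulled up by the single long case
    print(f"median = {median} days")     # 16 days, much closer to the typical case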

 

Right. And talking about limited resources, you were going to talk a bit about that, weren't you, Jeremy? Yeah, we'll be talking a little bit more about that later, because realistically, in the real world in which we're operating, there are some pretty tight constraints at the moment. So, Brendan, I think you were going to talk a bit about those sorts of decisions about what to measure. Yeah, well, decisions are absolutely crucial. 

 

So, when you constrain what is being measured, how you choose and who makes the decisions become even more important. And to add to that quant-qual point, it's usually best to begin, and maybe end, with qual, because understanding comes before measurement. 

 

Yeah, right. Good. So, OK, let's talk about that, Brendan. 

 

What are some ways that can play out? What do we need to be mindful of here? Well, I guess I can talk about understanding the quant side especially. So, for example, which populations are privileged in what you focus on; I'm going to come at this from more of a public health, population health perspective. A good example is the tyranny of the majority, where your broad, all-population measures are telling you one thing, like a marginal improvement in circumstances, usually a good thing. 

 

But there may be smaller subpopulations that are experiencing significant, much larger deteriorations. The slide shows that a little bit: it's showing some smaller groups for whom things may be going quite differently from the majority; blue and yellow combine to green, of course, with the colour choices there. So, in this case, a negative unintended consequence is masked by the positive change in a much larger population. 
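
A minimal sketch of that arithmetic, with entirely hypothetical numbers: a modest gain in a large majority group lifts the all-population figure even while a smaller group goes backwards.

    # Hypothetical scores for two groups at baseline and follow-up.
    groups = {
        # group: (population, baseline score, follow-up score)
        "majority": (90_000, 70.0, 73.0),   # modest improvement
        "subgroup": (10_000, 70.0, 62.0),   # large deterioration
    }

    def weighted_mean(pairs):
        """Population-weighted mean of (value, weight) pairs."""
        total = sum(w for _, w in pairs)
        return sum(v * w for v, w in pairs) / total

    baseline = weighted_mean([(base, pop) for pop, base, _ in groups.values()])
    follow_up = weighted_mean([(after, pop) for pop, _, after in groups.values()])

    print(f"all-population measure: {baseline:.1f} -> {follow_up:.1f}")  # 70.0 -> 71.9, looks like progress
    for name, (pop, base, after) in groups.items():
        print(f"{name:9s}: {base:.1f} -> {after:.1f}")                   # the subgroup has gone backwards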

 

There are some classic examples in outcomes for Māori and Pacific peoples, in our history and now. And if you can't see them in the data, of course, you can't know if this is the case. So be aware of focusing too much on the measure rather than what you actually want to measure. 

 

You don't want the indicator you're measuring to be mistaken for the actual measure of the outcome. Yeah, okay. So, how do we remember the measure is not the outcome, it's actually just an indicator towards the outcome? This is Goodhart's law. 

 

So, basically, when a measure becomes a target, it stops being a useful measure. So, a good example are ED wait times, which is sort of a crude indicator of the actual outcomes that you want to achieve, like reduced suffering, short recovery times. If you're not careful, it can turn into an efficiency exercise. 

 

So, how do you get patients through the system as fast as possible, minimising time with nurses or doctors, triaging and prioritising? Rather than questions like why there are so many ED presentations in the first place. You know, it is ultimately about people being healthier for longer, reducing suffering and early mortality, improving wellbeing and helping people flourish. And again, being aware that the delivery and the difference is different for different populations and groups. So, how do you improve and tailor it for them? So, that's a really good example, particularly looking at the outward focus in terms of service. 

 

But even within organisations, if you think about operational performance, unintended consequences are a really big factor there too. Can I just stop you for a sec before you get into the unintended consequences? We've actually had a question about this. Tony's asking: can unintended consequences actually be a virtue, because they can highlight blind spots? I would suggest to Tony that if you find unintended consequences and report on them, yes. 

 

But if your process creates unintended consequences itself, that's a different scenario. So, I kind of like that because I've done a lot of work in the area of business process improvement. And one of the biggest issues out there is what you might call hidden waste. 

 

So, this is the fact that in ordinary circumstances, you could go into a situation and it looks as if everyone is just working really, really hard, doing a fantastic job. And so it looks like, well, there's no problem here. But actually, there can be a lot of what you'd call hidden waste, and things that come along and surface those issues and problems can be really useful. 

 

So, yeah, I think Tony's onto something there. Yeah. But your examples are more about people accidentally focussing on the wrong thing, right? Well, so it is interesting though, because it's about, I do think it's really good to surface that. 

 

I was just going to talk about, and there's just so many examples, they're easy to think about. But what I was going to say is that the unintended consequences can be quite complex. Like, you can think it's about, oh, we're going to make one change to this group over here and the consequence is going to hit this other group. 

 

But actually, it can happen even with the same people. So, a really good example of how this could play out, let's imagine you're in a commercial context and we've got salespeople. And then you've given those salespeople targets, talking about targets versus measures before, but you give them targets. 

 

And those targets might, in one sense, be really powerful and motivating in order for them to drive their performance and improve. But they also come with the risks of mis-selling, right? Because people might be doing things inappropriately to essentially get their incentives. But on top of that, and I've been a sales manager before, so I've definitely seen this, if you hit a point where the target feels unachievable or unfair, it can be incredibly demotivating. 

 

It can create stress and really undermine performance. And although that example was about sales, actually, I'm sure we can all resonate with that last one, right? And there's plenty of other things as well. So, those of you in measurement land out there, you'll know the saying, what gets measured gets done or what gets measured gets managed. 

 

Well, I kind of think about, okay then, so what's the flip side of that? If we're saying we can only measure a small number of things, that means we probably aren't going to be measuring everything that's important. So, if what gets measured gets done, does that therefore mean that the thing which doesn't have a measure maybe doesn't get the attention it deserves? So that's an example of something that can be quite hidden, but you've got to think it through. If you think about the qual point again, part of the feedback is testing with the people the programme, or whatever it is, is being delivered to; you're checking with them all the time. 

 

So then you will pick up, say, what consequences are occurring for them, whether intended or not, positive or negative, which means you might have to reflect that back up into the measures. I agree with you, because I think that's the problem with unintended consequences, they stay hidden from view. So, it goes back to your point, Rebecca, about how do you surface them? I mean, they don't always have to be negative. 

 

They could be something really unexpectedly pleasant, but the thing is how they get surfaced and seen. So, just before we move on, there are two things I particularly wanted to make sure the audience hears in relation to that. My high-level view on unintended consequences is that they're very often associated with measures causing us to focus on too narrow a component, or a small part of something, without looking at the bigger picture, and there's a cost associated with that. 

 

And the other one is that it's not so much about the measure itself, and this is where it goes into Goodhart's law, it's about people's perception of how that data is going to be used. And when people think it may be used to judge them or lead to some adverse outcome, that in itself drives a lot of behaviours. Yeah. 

 

And this actually relates to a few questions that have just come up, because people are clearly thinking, well, okay, it's all very well and good to come up with intervention logics and measures and things, but how do you get stakeholders on board and how do you get decision makers on board? Because in that agreement phase the decision makers don't understand the intervention logic, and they measure things they don't understand. That's Paula's question here. We've had someone else asking, is there a strategy for safeguarding against the myopic views of powerful stakeholders? Or, yeah, how do we do that? How do we socialise it with the people actually making decisions based on these measures? I reckon there's a long runway. 

 

You've got to spend the time as you talk to them and engage them and pull them into it. Often these things are stood up too quickly and they develop loads of measures. This is a stakeholder problem, everybody puts in a bit of what they want into the measure and away they go and it just turns everybody off because there's too many measures. 

 

It happens too quickly rather than with broad engagement; go broad first and then get nuanced later. Yeah, I was just going to say, I think the world of psychology talks about how people tend to be more orientated to threat rather than reward. So if I'm one of those people, and this is what's important about actually putting yourself in their shoes and thinking about what are the dynamics they're working in and what might be driving them. 

 

So when you bring a potential measure in, not only does it potentially create the burden of having to be involved with the tracking, but what's the impact of that measure on them? So you kind of understand why people may be a little bit reticent. That's why I think it's quite useful to be really thinking about who that person is, what's the what's-in-it-for-them factor, and being really clear in your communication and just trying to be really constructive in the engagement. Yeah, I guess I see it slightly differently. 

 

So the thing I would say around some of it, like the intervention logic, is that I think it is an issue, because people generally don't talk in outputs and outcomes language, so it doesn't mean a lot to them. But I think part of it is about who's involved in developing these things, and making sure that those that are most pertinent, including senior decision makers, are part of that decision-making process and actually become part of the process of informing that intervention logic itself. And then, yeah, I've seen lots of measures really misused. 

 

Some really classic examples, and I think one of the worst I heard was the Power BI dashboard thing. What happened was that the senior leaders, they didn't know how to use the dashboard properly, so they used to just get a PDF and they'd talk off the PDF. So the dashboard was dead in a way because they never used it properly. 

 

And they didn't know how to, but actually just giving people a whole lot of stats doesn't tell them very much. So I think it's also really important that some sort of succinct little narrative accompanies a measure that actually tells people, how do you interpret this? What does it actually mean? What's its implications? And it needs to be tailored for the people that are going to receive it. So it needs to speak to those decision makers in a way that actually really helps inform their decisions and makes it easy for them because let's face it, nobody likes to really look stupid. 

 

We don't like to ask questions that actually maybe expose us. So we might go along going, yeah, yeah, yeah, but actually we don't. So I think we need to make it easy for people. 

 

That's true. Just in terms of how to actually get some practical advice for people, when we were preparing for this, one of the things that I thought of that could be quite useful is a number of years ago, I came across the work of an author by the name of Stacey Barr, and she wrote a book called Practical Performance Measurement, and I think I got that right. And I remember when I read it, it has been a while now, but just finding it incredibly useful. 

 

So if you're working in that measurement space right now and you're wanting to learn more, you're wanting to grow, but also you really want those practical tips, her resources are fantastic. And one of the things I particularly like, and I'll draw it back to unintended consequences at the moment, but when it came to measure selection, she had this really good tool around a one to seven scale of strength, like validity, and also a one to seven scale of feasibility. And I think on the screen, you could just see one to three, right? So it's definitely, that's only a snip of it, but it gives you a feel of the kind of ways they frame it. 

 

So those scales go all the way up to seven. So if you're looking for practical tools, I think there's the link there, but definitely Google Stacey Barr Practical Performance Measurement for some good practical advice. And with specific respect to the unintended consequences, what I think is quite useful is just that really structured approach. 

 

So what I loved about Stacey's model is the structure of this, then this, then this. And when you get to a stage like this, where you're doing the design and the measures, this is a great place to just systematically stop and ask yourself: wait a minute, what are those unintended consequences? Am I clear about all the people who might be interested, involved, or affected by this measure? Am I aware of how they might use the data, or how other people might use it in a way that affects them? And what signals, what warning signs, do we get about potential unintended consequences? Because the sooner we can understand them, the sooner we can potentially mitigate them. So those are things, and there are practical things you can do, like we talked about before; maybe sometimes you can have a measure but not a target, because you've anticipated how people might respond, and that's a conscious decision you've made. 

 

Cool. That's good to know. Yeah, that's the unintended consequences, but it's also about being more deliberate in making sure that those important populations or groups are visible. 

 

So this is back to population health again, which is me. Make sure everyone, especially those at risk, can be seen in the data. And coming back to who designs and who decides: those who are most impacted by the outcome, or by the measure's unintended consequences, need to be at the table, so avoid the assumption that what matters to you also matters to them. 

 

Can I throw you a question from the audience too, Brendan? Bridget's just been asking, and this might be, a few people may have heard this term and just want to unpack it a bit, a layman's definition of Māori data sovereignty. How we should be considering it in data collection. It's a good question. 

 

I don't think there is a layman's version, and I'd hate to be speaking for other people. Give it a go. But it comes down to who controls the data, who decides what's collected, who decides what's done with it, who holds it. 

 

A big part of it, actually, is where it's stored. And it's the understanding that the repercussions are much broader. They impact whole whānau. So it's the idea that there's a collective underneath each statistic, that each number represents a person, and that that number still carries the mauri of the individual. 

 

So no matter how far removed it gets from the person, it still has a residual or direct impact, because it can continue to influence policy and change and other interventions. So it might seem, on its own, disconnected from the person, but it is connected in the sense that it's going to have an impact on them or their whānau or their hapū or the broader collective. So that's why it's never not connected to the person it came from. 

 

And I think one of the things I took away from the work that we did on Māori data sovereignty, that really captured my imagination, was actually how much data is collected about Māori that's not of great use to them. And Māori data sovereignty actually creates an opportunity to change that, and to start thinking about what sort of data would actually be really useful for Māori and how we start talking about things in a really different way. Because there are real unintended consequences from the way that we use our current statistics and the current data we collect. And going back to that website, I've forgotten the name of it now. 

 

I would recommend you have a look at that because when I looked at it, they've got some great videos on there that really provide some really nice explanations and talk you through what is Māori data sovereignty and talk you through different things. So it's a great resource. It's got some great resources on it. 

 

That's where I learnt loads. Yeah, Hector's just sent you a thumbs up. Thanks Hector. 

 

Yeah, and look, we've got a few questions about challenges to do with unintended consequences. We had a question earlier as well about what do you do when you just don't seem to have capacity in your team to actually be really focused on, you know, evaluation, monitoring, measurement activities. So this is a lot of what comes through for various kinds of organisations. 

 

You know, we've got limited resources. So how do we get the most value from measurement activity? Yeah, I think that's good. And that is the real world challenge. 

 

So for me, like Fiona said before, a lot of it's about being realistic in your scope, right? There are simply so many things you could want to measure, and you've got to be really sharp thinking about your resources and actually going, okay, what can we do? And how do you get that balance of coverage versus practicality? I've definitely got one piece of work in my mind at the moment, where it started off with people designing all these measures. They looked so fantastic on paper, but once it actually got into motion, it was just too unwieldy. And when it's too hard, it's just not going to work, even though it'd be great to do. 

 

So you've got to be really pragmatic. Some of the things I would suggest are, well, A, as I said, measure selection; maybe using things like that tool I mentioned from Stacey Barr can help you make sure you get your measures right at the beginning. But another thing I'd draw attention to is around tools and processes. 

 

So some organisations are awesome and have already set up their tech in such a way as to create a lot of process automation, dashboards and so forth. And I think those things are great. I think they're fantastic. 

 

I think not only do they create capacity, but they can also help with things like standardisation and ensuring data quality. Because when you've got people involved, sometimes there are manual errors and things like that. There is that. 

 

But I do know that a lot of people may not feel that they can walk out of this workshop or this webinar today going, cool, I'll just do that. So one thing I think anyone can do is around process improvement. It doesn't, in my mind, doesn't matter how small you are, how limited your resources are, there will be opportunities to further improve your business processes. 

 

I had this great opportunity for some time working at Stats NZ, where I was working with teams that were directly involved in collecting, processing, analysing and disseminating data. And it was about doing value stream mapping. And that was an incredible experience. 

 

And some amazing ideas came out of it. Through that, I've really learned an appreciation of just how much of what you might call hidden waste there can be. And that if you look through your processes with the right tools, you can actually get some really great improvements there. 

 

And you can scale that to something which is really going to suit your own situation. So overall, it's really about maintaining that real cost-benefit thinking in everything you're doing. Because sometimes we get stuck trying to measure everything about everybody. 

 

Which is kind of the opposite of what I said about measuring everybody earlier. You and those most affected need to decide on what matters most. Also, it's common to many health surveys that are trying to capture, say, the prevalence of rare events or conditions. 

 

You cannot afford to collect a sample big enough to confidently say how prevalent something is, especially if it's rare. Once you try and break it down by age, gender or ethnicity, the numbers get too small and become nonsensical. You have no confidence. 
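
A rough back-of-the-envelope sketch of why those subgroup breakdowns fall over, using made-up numbers and the simple normal approximation (which is itself optimistic for rare events): the margin of error quickly becomes larger than the estimate itself once the cell sizes shrink.

    import math

    p = 0.01  # assumed true prevalence of a rare condition (1%)

    def margin_of_error(p, n, z=1.96):
        # Half-width of an approximate 95% confidence interval for a proportion.
        return z * math.sqrt(p * (1 - p) / n)

    for n in (5_000, 200):  # whole survey vs one small age-by-ethnicity cell
        print(f"n={n}: estimate {p:.1%} +/- {margin_of_error(p, n):.1%}")

    # n=5000: 1.0% +/- 0.3%  -> usable
    # n=200:  1.0% +/- 1.4%  -> the interval swamps the estimate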

 

So in that case, it might make more sense to simply concentrate on those populations most at risk, and give up on trying to capture the entire population, which is kind of related to the tyranny of the majority. So you want to understand who matters most. Yeah, yeah. 

 

And we've had a few questions about this kind of thing too. We have one from Andrew, which I think sums up a lot of what people are dealing with, saying, you know, when we collect information, when you do surveys or data, you know, government agencies actually have a lot of data. A lot of it is anonymised. 

 

A lot of it is available. But then when people try to use it for measurement purposes, they sometimes find the data is either extremely specific or too high level. Like, how do you find that Goldilocks zone? How do you do that? Well, it depends on your measure really, doesn't it? It depends what you're trying to measure. 

 

Yeah, it is difficult. And I remember, I'm not sure if it still exists, but a few years ago Treasury, or maybe it was Education, had some centrally accessible resource. There was a big spreadsheet and it actually highlighted a whole range of different useful data sources from across government. 

 

So I think there is a real opportunity there to think about it through a design thinking lens, from a whole-of-government perspective. If we want all the agencies, based on what the Auditor General's report was saying, to get better, maybe there could be quite a strategic approach: what can we put in place that's going to make it really easy for all those agencies, by improving awareness of all the different data sources, and maybe getting some good feedback loops so that when people like yourself are struggling with those issues, those things get surfaced and can be tackled not just as a point solution but as something which could benefit others as well. I suppose the Goldilocks zone is simply the minimum information you need to understand what needs to happen, what needs to change. And there might be something in between what exists, so you might have to collect some additional data, but you'd be guided by what's needed to understand the context. 

 

Yeah, I think one of the challenges in government is that data just keeps on being collected without questioning its purpose or use. So really, unless you're really clear about its purpose, you shouldn't be collecting it, unless you're clear that you're going to use it, right? And some of this stuff has also been collected for a different purpose and is actually now not working well in a different place. Yeah. 

 

And so that's one of the challenges when you bring it through, and that can also lead to distortions as you do it; that's a risk. So yeah, I would agree with you. Yeah, because that repurposed data was never designed to answer the question that you want to ask of it. 

 

Yeah, and that's the risk when you do have quite a bit of data available, is that people can go, oh, we need to measure something, or let's just measure what's there, and that'll do. And sometimes that's great in terms of it's really efficient and it's fit for purpose, but sometimes it does mean you just end up, people end up choosing the wrong measures. Yeah, and this actually ties in with one of the comments we've just had about some of the ethical issues. 

 

We've had a couple of comments. Paula mentioned that using words like 'vulnerable populations' is not ideal, maybe it's 'underserved'; that framing of the data is actually quite important. But Melanie's also pointed out that wasted measurement is an ethical issue: don't collect data on measures unless you've got a plan for how you're going to use it, right? So we're going to get into a few more questions. Before we do that, I'm just going to let you know about our next webinars. 

 

So next week, we're running an introduction to design thinking. Craig Griffiths will show you how to use this methodology. He's a Stanford executive coach on our team. 

 

How do I overcome these challenges? And after that, our strategy team will be discussing mastering business change, and that's approaches to navigate change and thrive in that. And I would say with that, actually, there's elements of change management that can actually be really useful when you're thinking about, you know, getting that buy-in and going through the process of developing measures. So that may be of interest to you to join that. 

 

Cool. And thanks again to everyone who sent questions when they signed up. We have a few more. 

 

And do you know what? I think we've actually covered this one, the burden of reporting on community organisations and back office staff. We've sort of covered that in terms of how do you do it with limited resources. But just to reframe that, how do you manage the burden of reporting? I actually think maybe, Brendan, you talk to it. 

 

Of course, some of it. So I'm thinking about some other work we've done. Often, they develop too many measures at the beginning. 

 

There are too many stakeholders, again, who are coming in there and trying to get the measures they want in there. So there's some tension there. That sets you up for this vast torrent of data, which is really hard to parse and make sense of. 

 

So it's better to go, for example, broad and then nuanced later. I guess it's also understanding what you need to know to make the difference you need, to make operational decisions, to understand you make a difference in the communities you work in. It's that long runway again. 

 

You've really got to spend some time making sure you understand the measures well, and that they will change. And there's that feedback loop again: make sure that you're not just measuring things that become wasted data you won't actually use. So that's part of the answer. 

 

Yeah, and I think the other one I've seen with some small community groups, and it's one of the tensions, isn't it, is that the different funders actually have their own reporting requirements. They all have their own measurement requirements. And there can be a real lack of oversight over what that mix is. 

 

Some of the really savvy organisations I've come across, and it comes back to your point about process improvement, have stopped and made the time to actually create spreadsheets and pull all of it together. Work out where the synergies are. Where are things actually similar enough? What have you already got? How can we take one thing and use it in multiple places? And they've done a bit of an automation process as well, looking at how to streamline the capturing of that data and make it easier for themselves. Getting that whole streamlined approach meant, as one agency told me, the difference between every quarterly report feeling like complete chaos and it being just a walk in the park. 

 

That was how significant the difference was. So it's worth making the time to sort out those processes and get that good oversight of what's actually required. Yeah, and I guess I would just add to that. 

 

It's also there's a piece about trying to be aware of what is available and accessible for you. So you might be doing a specific piece where you need to target engagement and collect data. But even for that broader macro, understanding the context, some great information you can access from places like Stats NZ, which can give you a great source of information to help frame up some of that stuff. 

 

So there's a wealth of information out there and just good for people to actually know what's there. Yeah, totally. And we've had a question about data quality. 

 

How do you ensure data quality? Well, there's a few ways to talk about this. We've covered a bit of it. We have. 

 

And I think, look, data quality is such a big topic in and of itself, and it's such a challenge. But there are some simple things, and these come again out of that guidance I talked about right back at the beginning, such as Treasury's or, in Australia, the Victorian Department of Treasury and Finance's. Simple things like, internally, have a clear owner: somebody who has responsibility for those measures, or for specific measures, and who will have oversight of them. 

 

Ensure, just as Jeremy was mentioning, ensuring that there's good buy-in. So the measures need to make sense. Make sure that people can make sense of them. 

 

Don't collect data that you don't need or won't have a clear use for. Be clear about the purpose of the data that you're using, about what, how, and where it will be used, and ensure that there are clear data definitions. Make sure there's a good data dictionary. 

 

Make it very clear. Don't leave it to people's choice. Yeah, love a good data dictionary, because otherwise people will interpret it differently. 

 

Yeah, having that work done for them beforehand, so everyone knows what the measure is called. Yeah, and it is true. There are places where you do have the same thing. 

 

There's quite a variance in the different ways you can measure it. You might naively kind of go, oh, well, it's all going to be the same, but actually there can be quite a big variation. So you're defining that, documenting it, and even if possible, automating it so it happens each and every time. 

 

It's great. And being aware that they'll answer differently by different people. So there's always going to be variance in how we will interpret the measure and the questions. 

 

That could be part of the data dictionary as well. So this worked well for this group of people. It works rubbish for this group of people. 
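
To picture what that might look like in practice, here's a minimal, entirely hypothetical sketch of a single data dictionary entry; the field names and values are illustrative, not a standard.

    # One illustrative data dictionary entry (all names and values are hypothetical).
    data_dictionary = {
        "investigations_completed": {
            "definition": "Investigations formally closed in the reporting quarter",
            "unit": "count",
            "source": "case management system, status set to 'closed'",
            "owner": "Operations reporting lead",
            "frequency": "quarterly",
            "caveats": "Re-opened cases are counted once, at first closure; "
                       "closure dates are recorded differently by regional teams",
        }
    }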

 

So you want to think through the consequences. Yeah, and it also touches on, I think, the role of analysis. So you can take the approach of saying, I'm just going to primarily focus on getting the data out there. 

 

But then there's the question of to what degree there's a process around analysing the data and making sure the analysis is fit for purpose. Because quality analysis can add huge value and really make sure not only that the data is good quality itself, but that people are actually drawing the appropriate inferences from it. Yeah, totally. 

 

And this kind of relates to one of the questions that's just come through now. How do measures tie into key evaluation questions? I'm looking at you guys. The thing about measures from my point of view, when I've thought about key evaluation questions, is that they'd be what I would call a secondary data source. 

 

So if I think about a primary source, that's the information you go out to collect specifically to answer those key evaluation questions. But measures can provide really useful, longitudinal, longer-term information that can underpin the evaluation itself. It can provide really... So again, it was collected for a different purpose. 

 

Now you drag it into the world of evaluation. But you can use it actually to enrich your findings, I think. So I've seen it used. 

 

It can be used really well, just to really add depth. Yeah, and the problem with key evaluation questions, of course, is they're lovely to help guide and shape things. But often when you go out in the field through whatever data collection mechanisms you've chosen, what they're saying may not map back. 

 

So they constrain you often, because you end up trying to report back on the key evaluation questions, which is a sort of target problem. But the most important stuff could be completely missing, because often the questions might be designed by a committee or too many people, and you've had to go broad and then come back again. And there's sort of confused thinking in those occasionally. 

 

Or, if they're nice and clean, then they work really well. But you have to really go back to things like the intervention logic and what's the point of what you're trying to measure. Those drive your key evaluation questions and therefore what you need to measure. And related to that, Brendan, we had a question directed at what you'd said earlier, about examples of your point about collecting that sort of minimum information that you need to show change. 

 

Like, are there examples in New Zealand where that's been done well? Shall we come back to it? Yeah, I mean, there will be, right? But it's just, yeah. I mean, in qualitative, you know, you have the idea of saturation, don't you? So that's one way of getting to it in qualitative. And saturation, just to clarify, means the same points kind of coming up again and again. 

 

You're done. You're not getting anything new. Yeah, basically nothing of significance is coming up anymore. 

 

So that's one way of doing it. But I can't. Hmm. 

 

There probably will be something. We just haven't quite thought of it. We're going to come back to it later because we are going to check if we've missed any questions and we'll record a little video later because we are kind of running up on our time. 

 

I really did like looking at, you know, some of that information about the Office of the Auditor General, that idea of what Fiona mentioned before about, you know, there's a story, there's a narrative, right? You know, part of what we're talking about is remembering to be aware of that story, right? I think so. I'm a great lover of stories. But I think stories are really powerful. 

 

When you just have anecdotal information, well, what do you have with anecdotes? You just have a bunch of anecdotes. So I think the real power comes when you can put the quantitative with the qualitative to tell a really strong story. And in the evaluation world, we call it triangulation. 

 

So you're not just relying on one source or two sources of data. You're now relying on multiple sources. And that's how you can also really start looking to the differences between different population groups and help focus it and really enrich your findings. 

 

But I think a short, succinct, good narrative stops the perverse behaviours and it really gets people to understand. What's it really telling you? Understanding, yeah. Now we're getting into some of the questions that we didn't quite get around to in our live webinar. 

 

So firstly, we had a couple of questions about measuring prevention. So how do we show the impact of something we've done that means a bad thing has not happened? Yeah, it's an interesting question. And that's really important, particularly in terms of the specific example you've given there. 

 

I think there's a strong interest in that. And most of us had a real attachment to that issue. I guess my first tip is actually use your networks. 

 

So I know that ACC, for example, they do a lot of work in injury prevention. They've got a really good setup in terms of the data that they have and really good processes and systems. So if you've got any contacts in that, it may not be necessarily in that specific context you're talking about, but they may have some tips, some ideas that could be relevant to you. 

 

Maybe another thing that jumped to my mind is thinking about, if you begin with the end in mind and you think back from the end: okay, what do we think would need to happen in society to actually make a meaningful difference in this area? What are the biggest things that would need to shift? And then, if the thing you're talking about is one of those things, you can look at how its relative impact sits in the context of those other things. That doesn't necessarily solve your reporting, though there may be ways to use self-reports and things like that. But I think that could be a good way to start. Those are my thoughts. 

 

Yeah, and there's a point, actually, from when we were talking earlier about COVID and the idea of excess mortality, where you understood what happened, but there's also a sort of what if: what would have happened at a population level? Those are very powerful and useful. They're also very complicated, and very few people can do them. So, winding it back to the smaller organisation end of things, what's practical now is self-report. 

 

You know, you ask them, did this ideation result in an event or not? And then you might also be asking people around them more broadly, which is the idea you mentioned about the whānau and the networks they have. Yeah, and also I was just wondering, so when it comes to a mental health crisis, it actually doesn't start with the system. It actually starts with people's own networks, their families and friends, okay? And that's something that's often hidden from view. 

 

And so I think it's really hard to get there with mental health prevention without actually getting those self-reports and what difference it actually made. Yeah, and you might be asking the whānau rather than the individual. So, you know, there are by-proxy measures that could be really powerful in this instance. 

 

Yeah, yeah, absolutely. I think you're right there. And then I was just wondering too, I was wondering about your thoughts about whether, you know, we're seeing an upsurge, or it feels like it, an upsurge in young people with mental health problems, having mental health crises. 

 

If there was to start to be a change in the trends, so you saw a whole lot of programmes that were actually targeting prevention of these mental health crises, to try and stop them, and you saw the trend change over time, could that also be an indicator of the success of those programmes? Yeah, then you get the attribution problems. 

 

So, yeah, so you might contribute to a portion of it, but, you know, there's so much going on that it's always a fraught analysis to do. And actually, we did get a question about attribution problems too. You know, correlation's easy, but causation can be much harder. 

 

Yeah, so any final thoughts on getting from correlation to causation? Longitudinal studies, or self-report again, because you're asking, this is a self-report into the past. So there are retrospective measures, which are a form of longitudinal; you've travelled back in time. So you can get an idea of what happened there, and how it impacted their life going forward. But yeah, causation is a particularly difficult thing to claim. 

 

Yeah, it is. But I think it's also about really thinking about the difference between those things within your sphere of control that you're measuring, where the linkages are quite close and you can more strongly link the change to the work that you've been doing, and those that are perhaps more areas of influence, where you step out and recognise there are a whole lot more players in that space, right out to spheres of interest, where you could say, actually, that's really out there and our contribution is likely to be really, really small. 

 

So just differentiate, I think, in your performance measures: what you're measuring that's closest to what you've got in your control, those are the areas where you're going to be able to most attribute some sort of change to your work. Yeah, and sometimes that's another way of saying attribution versus contribution, right? There are lots of things that might have influenced the change, but can you at least drill down and see what your contribution to that change has been? Yeah. 
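One practical way to keep that distinction honest in a measurement framework is simply to tag each measure with its sphere and the kind of claim it can support. A rough sketch in Python; the measures, spheres and claim wording are illustrative assumptions, not a framework the panel provided.

```python
# Rough sketch: tagging measures by sphere so attribution claims stay honest.
# The example measures and claim wording are illustrative only.
measures = [
    {"name": "Workshops delivered",         "sphere": "control"},
    {"name": "Participants' skills rating", "sphere": "influence"},
    {"name": "Regional wellbeing index",    "sphere": "interest"},
]

claim_strength = {
    "control":   "attribute the change to your work",
    "influence": "claim a contribution alongside other players",
    "interest":  "monitor context only; make no attribution claim",
}

for m in measures:
    print(f"{m['name']}: {claim_strength[m['sphere']]}")
```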

 

We've had a couple of questions about stakeholders, and who signs off on things, who actually understands the logic. Are people agreeing to measure things they don't understand, or are stakeholders getting tunnel vision on what they want? We did cover this a bit in the webinar, but did anyone have any further thoughts about how to get those stakeholders on board? In the world of intervention logic, the thing I've found most powerful is when you actually have those stakeholders in the room. And then I'm really big on demystifying it. 

 

So we've used a visual approach here to break it down into a much more visual space for people, rather than just getting people confused over language. Because typically what I find is that if you ask people to put up their medium-term outcomes, quite often what you actually get is a whole lot of outputs, and you end up taking those and putting them where the outputs belong. And then you usually end up with some long-term things. 

 

Worst case scenario, they're 'world peace', so they're too far out. And then you've got outputs, and you've got this big gap in between. So I think the more you can have people as active participants... Yeah, skin in the game. 

 

You've got to get them, because they've got skin in the game, so they understand. Yeah, yeah. What the point of it is and what difference it's going to make. 

 

Yeah, so I mean, I'd just advocate for really applying that change management mindset, where you get really clear on who those people are, profile them, understand their needs and their drivers, and then pitch things in such a way that it's going to be really relevant for them. Like, what is the hook for them? And that also includes ongoing engagement. It's one thing to turn up for one workshop, provide some thoughts, and then nothing happens with it for like three months. So what's the level of engagement over time, and how do you think about that? Yeah, the engagement is so important. 

 

It's that long runway again. You've got to build that trust. Do the stakeholders trust that you're going to actually deliver on what you promised, that you're going to keep going back to them, and that it will make a difference? 

 

So it's that building that trust with them so that they will be more invested in it, and therefore it'll go smoother because of that. Also, I'd say don't overcomplicate things. Sometimes I read some intervention logics and there's an enormous number of boxes, and I'm like, where do I even start? And then I look at the sea of words and I'm even further lost. 

 

And so I think keeping it simple, and actually if there's a lot there, put it in a narrative or add some footnotes to it and add some descriptors at the back. But keep it simple. So often you can end up with all that detail, but you need to lift it up. 

 

So it comes back to keeping it simple, I think. You can make it personal as well, so people can relate to what's happening. 

 

You know, there's the idea of bringing in a story, like there's an actual family in there, there's a person, or you create a vignette around a person. So people can see themselves in the results, they can see what's going on. 

 

So there's a more personal connection to it, not some abstract exercise. Yeah, I think that's really nice. It's all about helping people understand more as opposed to get confused, isn't it? Yeah. 

 

Yeah, that's right. And I guess just as a final thing on that, like I often get frustrated sometimes with documents and strategies and things coming out that are expressed at a high level that I get and I buy into. But I'm always asking the question, well, so what? Like, what does that mean in terms of what I need to do today or tomorrow? So if there's any way that you can kind of translate that, because that's, as a reader, you know, that's what I'm caring about. 

 

Like, well, what do I need to do now? So anything that helps make that connection is really useful too. Cool, yeah. All right. 

 

We have one that's a wee bit different to what we've been talking about: how do we use internet of things devices and automation to bring more data to bear on complex decision making? I'm quite wary of that one, because that is data often collected without consent. So there is a slight ethical greyness to it. 

 

I also understand it's particularly powerful. But I know it's happening and it does show some amazing things. I understand that, for example, Google can predict a higher risk of a heart attack based on other behaviours. 

 

There are these immense amounts of micro data that show up that you couldn't measure any other way. But I don't know how to read that, because it's not consented data. And if you're eyeballing it yourself, connecting it to actual outcomes starts to become quite a long bow to draw without using machine learning or artificial intelligence techniques. 

 

And then that opens up a whole other set of ethical concerns. But it feels inevitable, right? It's just like somehow we've got to navigate these issues. We know that there are sources that are collecting information. 

 

And yes, there are so many questions about how it could be used. But ultimately, as humanity, we're going to hit this somewhere. So I guess the best thing we can do is help shape the approach. 

 

Maybe. I don't know. I don't know. 

 

Maybe I'm old. Maybe. But I think at the very least, actually, that we need to be a lot more transparent because we have to ask ourselves, what's the definition of privacy today? What's happening to our privacy? How's it being eroded over time? Because some of this actually speaks to that. 

 

It's about taking data from us and using it without our consent. And that raises for me questions around privacy. So I don't know if we should be sitting here going, yeah, it's going to happen. 

 

It's probably a bit outside of what we do. We don't have much say about it. But I think there needs to be at least a great deal more transparency about how it's used and in what ways. 

 

My concern about it is all the bias that's already built into the system and the black boxes that already exist. And when you don't have those explained in any way, they're actually potentially quite dangerous. Yeah, because some populations have more data collected about them, simply because they have more interactions with the system, with the government or other collection devices. 

 

And therefore, the data is already biased, because it doesn't represent the whole population. It's representing certain groups more than others. But I think with consent, if you could actually ask people whether you can collect data about how many steps they walk, their heart rate, and how many times they've turned the light on using their Amazon device, then that's fine. 

 

Well, that's a good question. I think we'll move on a bit. But this is one of those good questions that raises more questions, I think. 

 

Yeah, we have another one from Tony here. Is there a good rule of thumb for asking the same question in different ways? Like, to elicit a balanced result, can you find different ways of asking the same question? The trade-off is burden, of course: you're asking the same thing in different ways as a check that people aren't just going down the middle, or agreeing with everything, or disagreeing with everything, but every extra item adds to the burden. 
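As a concrete illustration of that kind of check, here is a small Python sketch that flags respondents who give the same answer to every item, or whose answers to a positively and a negatively worded version of the same question don't hang together. The item names, the 1 to 5 scale and the tolerance are assumptions for illustration only.

```python
# Sketch: flagging possible straight-lining using a reversed item pair.
# Assumes a 1-5 agreement scale; item names and tolerance are illustrative.
def flag_response(answers: dict, scale_max: int = 5) -> list:
    flags = []
    # The same answer to every item suggests straight-lining.
    if len(set(answers.values())) == 1:
        flags.append("identical answer to all items")
    # A reversed item should roughly mirror its positively worded pair.
    if "service_was_helpful" in answers and "service_was_unhelpful" in answers:
        expected = scale_max + 1 - answers["service_was_helpful"]
        if abs(answers["service_was_unhelpful"] - expected) > 1:
            flags.append("reversed item pair inconsistent")
    return flags

print(flag_response({"service_was_helpful": 5,
                     "service_was_unhelpful": 5,
                     "would_recommend": 5}))
# -> ['identical answer to all items', 'reversed item pair inconsistent']
```

Flags like these are prompts to look more closely at a response, not grounds to discard it automatically.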

 

Yeah, and as a researcher, I'd always come back to saying it means you need to actually concept test your surveys. You need to pilot your surveys. You need to check them with your audience and make sure people can make sense of the questions you're asking before you go and ask them of a whole bunch more people. 

 

Yeah, there's a technique called cognitive testing, where you sit down opposite somebody and get them to think aloud. So as they answer the question, you ask: what are you thinking when you read it? What are you thinking when you answer it? You're trying to unpack that across whatever set of questions you have, including the variations on the same question, so you start understanding how they differ in a person's mind, which is usually difficult to get at. 

 

Yeah, or get them to rephrase the question in their own words, to explain what they understand they're answering. Sometimes that's what it comes down to: if you're going to be taking a lot of big measures, you want to make sure you have tested them first. We also had a couple of questions come through about KPIs. 

 

So that's key performance indicators. How does this impact measurement fit in with routine KPIs? Is there a good practical guide for how to create these sorts of KPIs and associated measures and use them effectively? As Perrin, one of the people asking questions, has said, it's quite a topic and it's increasingly important. What do you think? So during the webinar, I talked about that reference to Stacey Barr's book. 

 

So from a KPI perspective, check that out, right? From memory, it's got a really good process for how you get into the KPIs; that seemed to be where it had a big emphasis at the time I read it. But I wonder if either of you, Fiona or Brendan, want to talk about the impact measurement component. The problem with many KPIs is actually what they're focused on. 

 

They're actually focused on activity levels, so they're not very useful for telling an impact story, because they're about the activity. So you then have to think, and I'd agree with Jeremy, that approach was really good. 

 

But the question is whether there is a KPI, or a combination of KPIs, that would shine a light on impact. Or whether you could take that and supplement it with some qual. So yeah, it is increasingly topical. 

 

Totally agree. But I think that's my thoughts around KPIs. And yeah, the problem with generating KPIs is it's too easy to go proportion of this, percentage of this, number of that. 

 

In terms of impact, what does it tell you? Not much. I think the challenge with KPIs is that, as a decision maker, when you look at data you're really thinking: what do I need to know? What do I need to do? There are huge amounts of data out there, so you're really thinking, okay, I don't have time to think about all of this. 

 

So what are the few things I could look at that are going to give me a really good sense? And one of the things that's really important with that is that you want a good degree of timeliness, right? Because you want to be responding really well and quickly to your environment. And with impact measurement, you realise that it's important, but it's happening later. So there have to be feedback loops. 

 

That has to come in somewhere. But right now, at the beginning of the process, you need that early feedback to know that the aspects which, at least intuitively, are going to deliver those longer-term outcomes are working really well. 
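One way to make that feedback-loop thinking explicit, for those who want something concrete, is to record against each indicator whether it is an early (leading) signal or a later impact (lagging) measure, and how often it can realistically be refreshed. A sketch in Python; the indicators and cadences are invented for illustration and are not from the panel.

```python
# Sketch: pairing early "leading" signals with later impact measures so the
# feedback loop and its timeliness are explicit. All entries are illustrative.
indicators = [
    {"name": "Referrals accepted within 5 days",  "kind": "leading", "refresh": "weekly"},
    {"name": "Participant-reported wellbeing",    "kind": "leading", "refresh": "quarterly"},
    {"name": "Crisis presentations (population)", "kind": "lagging", "refresh": "annually"},
]

for ind in indicators:
    print(f"{ind['name']:<40} {ind['kind']:<8} refreshed {ind['refresh']}")
```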

 

Yeah, I was going to say, actually that takes me back to something we kind of skipped over a little bit in the webinar. When we were talking about things like investigations or audits or that sort of thing, that's not necessarily what this person's talking about, but the whole point of that was to nuance the KPI. 

 

Too often KPIs lack nuance. They really are just a proportion of something, a number of something. And so the whole point of that was, yeah, timeliness. 

 

Were they timely? How long did they take? But there might be a good reason why they're taking a long time. So you want to maybe try and start capturing in your KPI things like, something like the level of complexity, depending on what you're looking at. You might want to actually pick up on differences in variability and size or scale that you're dealing with. 

 

Those are some of the things you might want to pick up on and actually start drilling down on a bit more. Or, you know, I've seen it before where somebody will say, I don't know, there were X number of audits carried out. And I'm like, great. 

 

It doesn't tell me much. What I'd like to know is, do they know the percentage of recommendations that were actually implemented? Because that will tell me there was some follow-through from that activity. Or, you know, something else like that. 
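To illustrate the kind of nuance being described, here is a small Python sketch that reports not just the count of audits, but the median days to complete broken down by complexity and the share of recommendations implemented. The records and complexity bands are made up for illustration.

```python
# Sketch: nuancing a "number of audits" KPI with complexity-adjusted
# timeliness and follow-through on recommendations. Records are illustrative.
from collections import defaultdict
from statistics import median

audits = [
    {"id": "A1", "complexity": "low",  "days": 20, "recs": 4, "recs_implemented": 4},
    {"id": "A2", "complexity": "high", "days": 95, "recs": 9, "recs_implemented": 5},
    {"id": "A3", "complexity": "low",  "days": 35, "recs": 3, "recs_implemented": 2},
]

days_by_complexity = defaultdict(list)
for a in audits:
    days_by_complexity[a["complexity"]].append(a["days"])

print(f"Audits completed: {len(audits)}")
for band, days in days_by_complexity.items():
    print(f"  Median days to complete ({band} complexity): {median(days)}")

total_recs = sum(a["recs"] for a in audits)
implemented = sum(a["recs_implemented"] for a in audits)
print(f"Recommendations implemented: {implemented}/{total_recs} ({implemented / total_recs:.0%})")
```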

 

Or, you know, if it's an area where there are a number of deaths each year, obviously you want that to reach zero. But if there are some, do you have any information at all that you could add to that narrative about the cause of death? Yeah. Yeah, and then KPIs are driven from measures. 

 

So if you go way back to good measure design, which is where your KPIs end up coming from, that's half your battle there. You get good measures and they can be used to drive better KPIs, even though I'm a bit wary of KPIs generally, because they can drive all sorts of perverse behaviours. 

 

Yeah, it is a big question. You're right. Yeah, Google it. 

 

No. No, thank you for asking it. It remains a big question. 

 

It is interesting. Okay, one last question, from Jeanette: some people say to measure only things you can get data for right now. 

 

Is this right? No. I don't think so. Cool. 

 

And the reason I don't think so is partly because I was part of a big project led by Stats New Zealand looking at the development goals, the, what is it? Sustainable Development Goals. Sustainable Development Goals, yeah. And what that process involved was brainstorming what the key measures were that we'd want to put in place. 

 

Then the question was what data already existed for them, highlighting which measures had data. What that then enabled was finding all the ones where the data didn't exist. It doesn't mean you ditch those measures. 

 

It tells you that there are gaps in what's being collected. So then that raised questions like: are there opportunities to collect some of this data through current survey tools? Is it possible to just add a question without adding a lot of burden? Would we need some other survey? What other opportunities or ways could that data be collected? And so the reason I say no is because only measuring what you already have data for is just too constraining. From my point of view, when I'm doing measurement work for people I start with: what do you collect now? What do you already know? What's going to be usable? Let's not reinvent the wheel. 
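A very simple way to run that kind of exercise is to map each desired measure against whatever source currently collects it and surface the blanks. A sketch in Python; the measure names and sources are invented for illustration, not taken from the Stats NZ project.

```python
# Sketch: mapping desired measures to existing data sources to surface gaps.
# Measure names and sources are invented for illustration.
desired_measures = [
    "Time spent on leisure and relaxation",
    "Household crowding",
    "Access to green space",
]

existing_sources = {
    "Household crowding": "Census",
    "Access to green space": "General Social Survey",
}

for measure in desired_measures:
    source = existing_sources.get(measure)
    if source:
        print(f"{measure}: collected via {source}")
    else:
        print(f"{measure}: GAP - consider adding a question to an existing survey")
```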

 

But it's also really important to have that window of opportunity to say: actually, this is really important to measure, but we don't have the data, so we need to do something about it. And I think, from memory, one of the things that struck me about it was how little information was actually collected around relaxation. 

 

I think it was one of the areas where they didn't actually have as much data as they would have expected to. They used to do a time use survey. So just some of those things, and I could be wrong that it was that one in particular, but I remember something like that where I thought, oh, I didn't expect that. 

 

So I think what we actually need to put up front is what's really important to measure here, and then come back to feasibility or plans over time. Yeah, and you might be doing other activities, like actually understanding the measure, until the opportunity to collect the data begins. So you might be doing a lot more talking with people about whether you're going to measure the right sorts of things, and whether it's going to mean the same thing for everybody or something different. 

 

So there's all this runway work that you could take the opportunity to do, so that when the opportunity does present itself you're collecting better quality data. I mean, bear in mind, though, people do have limited resources. 

 

So in some organisations or some settings you might be like, what's the best we can do with what we've got, right? You might be, but I'd still encourage you to go through that questioning of yourselves. I once came across somebody who said, we're just too busy, we couldn't possibly find time to collect any data, and the question I really had for them was: but are you collecting the right data? And so, yeah, you might be really constrained. You might have really limited resource to actually do much. 

 

So you might be pretty much stuck with it. But the question you really need to go back to is: are you collecting what's most important? Yeah, and I guess my answer is, well, first it is valuable, as Fiona was saying, to be clear on your data gaps and then think about the cost of not having that data. The fact that you don't have that data, how is that constraining you? How might it negatively impact on decisions? Because you need to apply cost-benefit thinking to it. It may actually be pragmatic to stick with what you've got. 

 

Now, yes, you might only measure what you've got, but it's important that people don't get the sense that that's all there is. Alongside that, there needs to be commentary about where the data gaps are, because spending a bit more money to collect that data may deliver a better return on investment. You can't tell just from being busy: you could be busy doing other, better things, but you might be busy doing the wrong kinds of things. 

 

The vision might say, oh, actually you should be busy over here, because that'll make a much greater impact. So that helps unpack the busyness. A lot to think about there, isn't it? Well, look, that's all our questions for today. 

 

Thank you again to everyone who sent in more questions in the chat. It's been really great to have your engagement. It's been really nice to see your comments. 

 

We've enjoyed talking to you about this and we hope you'll join us again another time. So one more time, thank you very much, Brendan, Fiona, Jeremy, I'm Rebecca. Thanks to our team in the background and we'll see you another time. 

 

Ka kite. 
