Months of planning and thousands spent, but what you get are dismal response rates, and survey questions that are of no help or are too complicated when you come to write them up. Sound familiar?
Whether you’re running satisfaction surveys, evaluating the impact of your work, or gathering insights for complex policy consultations, the same fundamental mistakes typically keep organisations from getting information they can use.
Join Allen + Clarke’s survey experts as they reveal the tips & tricks that transform data mud-puddles into decision-making goldmines. We’ll share eight of our best tips to maximise response rates and give unambiguous insights for you to act on.
Our survey experts Jason Carpenter, Dr Bo Ning & Brendan Stevenson discuss:
Government agencies, councils, and organisations already running surveys who want dramatically better results. We assume you’ve run surveys before – this focuses on improving your results.
Kia ora koutou, good morning everyone. I'm Jason Carpenter, Director of Business Development here at Allen + Clarke. Many of you have been to several of our webinars before, but for some of you this is your first one, so you may not be familiar with Allen + Clarke.
So we're an Australasian-based consultancy dedicated to making a positive impact on communities throughout Aotearoa, Australia and the Pacific. We specialise in strategy, change management, programme delivery, policy and regulation, research and evaluation, especially surveys, just to name a few. And as an organisation, we give a damn about empowering our clients to overcome society's biggest challenges, which is why we regularly run these free webinars, create desk guides and provide expert advice wherever we can.
So for something fun today, we're playing jargon bingo. So you'll see terms like cognitive testing, post-stratification, non-probabilistic sampling pop up. When you hear one, mark it off.
There might even be a prize for the first full house. I don't know if we've decided what that is, but we're sure we can get something out to you. And if these terms do sound intimidating, don't worry, they're actually quite useful.
There's a reason they exist and a reason we might talk about them, but we'll try and demystify them as we go. And just to note that we are covering a lot of ground today, so we're trying to find that middle ground, but please do reach out if you want to discuss anything simpler or more complex. Nothing would make my colleagues happier than having a much more complex discussion with you.
So with that, I'll ask my colleagues to introduce themselves. Brendan. Yeah.
So I'm a Senior Consultant here at Allen + Clarke. I've been here over six years now in multiple roles, but basically I've been playing around in the survey space for over 25 years now. So kia ora.
Kia ora. I'm Bo, working as a Senior Research Consultant at Allen + Clarke. Thanks, Brendan.
I'm a mixed-methods researcher with a focus on numbers and statistics. I have a PhD in marketing research, and my career path has guided me into the research domain of mental health, addiction and social wellbeing. I have about 15 years' experience developing survey instruments and analysing survey data.
I'm not quite as experienced as you. Let's talk surveys. And so we're here because surveys are deceptively complex.
They look simple. You ask some questions, you get some answers and you make better decisions. But as many of you know from experience, there's a huge gap between running the survey and then getting the insights that you can actually use.
Many of you shared in the sign-up for the webinar that one of your biggest challenges is around the volume of responses. So today we're sharing what we've learned through our Allen + Clarke survey process, much of which is aimed at getting better response rates to better surveys. So we're sharing what we're calling the Allen + Clarke Survey Success Process, which is eight practical steps that consistently deliver strong response rates and actionable insights.
We have around 45 minutes to an hour today. So we'll try and focus a bit more time on the two or three really critical steps, which is where most surveys do fall apart.
So the critical failure points, but then the flip side, if you nail those, then everything else tends to fall into place. And we know that most of the people here are not beginners. You've probably run surveys.
So today is really about trying to take surveys from a data collection exercise to something that supports genuine decision making, with tools that work in the Australian and New Zealand context. So let's get started with the Allen + Clarke survey success process. First up, you have to start with some real clarity about what you need to know.
A key problem is that people don't spend that time up front. You need to spend that time to understand what you absolutely have to know, what you're trying to change and for whom. If you get that right, that sort of becomes your guiding star from then on.
The next bit I'm going to whiz through, since I'm sort of setting up some examples. A pathway to clarifying this would begin with developing your problem definition or information requirements. Then on to a theory of change, which I often call a TOC, which we've covered in previous webinars.
Then you could develop a monitoring evaluation framework to keep track of all these things. Or if you come from the outside to evaluate or review the programme, you may not even do this step. Regardless, you'll then develop your more specific questions.
These could be research questions, hypotheses or key evaluation questions, which we call KEQs. Right now we have a much clearer idea of what and why. We can start getting to the how.
We could use a monitoring evaluation framework or add the data source to our key evaluation questions. It's a nice way of sort of organising it. So similar to the TOC, the theory of change, this could be developed collaboratively, or should be developed collaboratively.
And you may often go back to the theory of change or the KEQs to further refine those. We're going to concentrate on the KEQs for effectiveness for this table, showing a mix of admin data, interviews and surveys to be used to answer the key evaluation questions. So now we've landed on what it is more precisely that we want to know, how it fits within your theory of change, what you're looking to change or understand, and the methods you're going to use to get the necessary data and insights.
And so sometimes a survey is not going to be the most appropriate mechanism to get that information? No, sometimes it's interviews or admin data. Oh, admin data, sector data, yeah. And if someone's time poor, they've got a compressed time frame, are there any tips on where you might be able to speed this up? It always pays to spend as much time as you possibly can on the setup stage, because if you get that right, fewer errors fall out at the end; otherwise you have to go back to the beginning and things can just spiral out of control.
So maybe 20% of your time spent getting these things really sorted, really done well. Yeah, time well invested. And then if we slide through to where surveys do go sideways, one of the things we see a lot is that people aren't really honest about what's actually on the table for change versus what's already decided.
So we see it a lot in policy work in particular, but if your decision's already made, we really recommend that you don't then go and consult on whether or not to do something; even if people don't agree with the fundamental policy driver, you should really focus the consultation on how to implement it well. Your respondents will see straight through fake consultation, and you can actually lose credibility. One example: we were brought in to support a policy process with really, really tight time frames.
The original draft that we worked with the client on had some really fundamental questions about what people thought of the policy direction. But when we had our planning sessions, we pushed on that a little bit, and it became clear that those things weren't actually on the table. Cabinet decisions had already been made.
So what we did was we recommended that we refocus the survey on implementation. So instead of saying, do you support this approach, we said, what challenges do you foresee in implementation? What support would you need to comply? So really focussing on those things. And as a result, we got higher response rates and faster analysis.
And the quality of the insights was actually better, because we were actually asking about things we could influence. So the people that responded to the survey could see their responses then flowing through the process from there. Respondents appreciate honesty.
They could see that we're genuinely interested in making it work, not pretending that their fundamental opposition, which there was some, would actually change anything. So match your questions to your actual decision space, and be honest about what's actually already locked in. Thanks, Jason.
So after we talk about the KEQs and the decision context, before we write our first survey question, we need to ask a fundamental question. That is, who are we trying to reach? And what do we need to learn from them? So understanding your audience isn't just the first step in survey development. It's the foundation.
So your relationship with the audience, their knowledge of the topic, their preferred channels, and reducing participation barriers all influence survey experience and participation. The type of research or evaluation you are conducting should be connected to who your audience is. For example, if you are conducting exploratory research with your employees about their job satisfaction, you might use a longer, more detailed survey with open-ended questions and emphasise anonymous responses.
But if you are facing external stakeholders, such as government policy or public submissions, you should assume that your audience has limited time and varying technical knowledge. So a more concise survey, to reduce respondent fatigue, and plain English are recommended. In a nutshell, we recommend using a demographic-specific approach and being audience-focused.
If your target audiences are young people, having a mobile-friendly format of the survey ready and using incentives to increase their participation is a good idea. If you are conducting a health care needs assessment with low-income families, you may need to consider that your audience may face literacy challenges, have limited digital access, come from diverse ethnic backgrounds, and are likely to experience survey fatigue from multiple agencies. So this requires a culturally appropriate design, multiple delivery methods, and community partnership approaches.
This is especially useful for hard-to-reach communities. It's always a good idea to collaborate with grassroots organisations early, from the design of the survey through to dissemination. Those organisations can help provide language skills, cultural competency, logistic support, and access to populations that you may struggle to reach, you name it.
So to wrap it up, you can check the assessment framework we've provided. Understanding your audience is a fundamental step. If this step fails, it creates a cascade of problems.
Have you got any examples of where it's gone wrong? Yeah, we've seen so many different types of problems, like a survey that aimed to collect data from a migrant community, but the translated versions in the major languages were not even ready when data collection started. That causes serious coverage bias problems. Yeah, and I think that links back to Brendan's point: if you're really clear about what you're trying to do and build those foundations, then make sure you have enough time to do things like get translations, easy-read versions, et cetera.
And so here we've got some of the horror stories, because sometimes that is the best way to learn. I promise we do have some positive examples too, but bad examples can be illustrative. So one up on the screen here is what we call question 86 syndrome.
So we had this real-life experience with a client with a 90-plus-question technical survey. We delivered the analysis and they said, where's the information on question 86? That's really, really important. But by question 86, most people are gone or they're just clicking through.
So they're not putting really rich qualitative answers in. If it's really that important, the fundamental question is why is it buried that deep? So really think about where you place the questions that matter most. Next, the double-barrel disaster.
So when you're not clear about what you're trying to find out and why, you can really easily end up with the double-barrel disaster. It's a classic case of trying to be efficient by cramming multiple questions into one, but it does not work, and it creates real nightmares at the analysis end.
Another one that we often see is what we call the open text nightmare. This one can be a real killer: if you've got 10 open-ended questions with 600 responses, that's 6,000 text responses, and at five to 10 minutes per response you're looking at something like 500 to 1,000 hours to read, code and extract insights from that information. For one FTE, that's three to six months of full-time work.
So simply mission impossible. Yeah, and normally that's coupled with a tight timeframe. So obviously a lot of it is how much time you have, but people often underestimate the amount of time needed to do the qualitative analysis.
That one in particular ended up with a smaller response rate because the open-ended questions were harder for people to engage with. But also the volume that was available just couldn't be analysed in the timeframes that were required to get things to cabinet. So you end up wasting respondents' time, because you get data that you're not able to use properly.
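As a rough illustration of that arithmetic, here is a back-of-the-envelope sketch using the figures from the example above (swap in your own numbers; the 160 working hours per month is an assumption):

```python
# Back-of-the-envelope estimate of qualitative coding effort.
# Figures follow the example above; adjust them for your own survey.
open_questions = 10           # open-ended questions in the survey
expected_responses = 600      # expected completed surveys
minutes_per_answer = (5, 10)  # time to read and code one text answer

text_answers = open_questions * expected_responses
hours = [text_answers * m / 60 for m in minutes_per_answer]
fte_months = [h / 160 for h in hours]  # assuming roughly 160 working hours a month

print(f"{text_answers} text answers -> {hours[0]:.0f}-{hours[1]:.0f} hours "
      f"(about {fte_months[0]:.1f}-{fte_months[1]:.1f} FTE months of coding)")
```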
So just really factor in the time to do the analysis at the point when you're writing the questions. Now, to talk about success stories: one on the opposite end of the spectrum, we were working with a peak body that had a very busy but engaged workforce.
And so we did some mini surveys where they were pushing for relatively complex legislative change and they wanted to get a mandate from their members. But instead of a normal survey, big, long, complex thing that tried to work through problems, options, preferred approaches, we set up three mini surveys. And so each one was sort of this radically simple distilled version of what we were trying to discuss.
So each one had five questions max. So you're talking about a couple of minutes for a busy person to flick through and respond. But what happened was we spent more time up front distilling the change, the problems, the options down to really fundamental questions that resonated with the audience.
And so we ended up getting really, really great response rates, because people could see the value, it was in their language and it was easy to respond to. And then they bought into the whole process, because we could so quickly give them insights from the first survey; they were looking at the results from that and then responding straight into the second survey and then into the third one. So just really think about your audience and then how you can extract the insights that you need.
Sometimes less is more. Absolutely. And it's just really being clear about what you're trying to change, what you can understand and your audience.
And so that one, I think, ended up at about three minutes per response with really, really high response rates. Yeah. And that makes me think about a survey we did, a paper-based household survey.
Again, well-prepared, but the response rates were abysmal. But we got insights from people who'd never respond to any survey you'd send out. So the insights that you did get were invaluable and impossible to get any other way.
It's just spending that time up front to understand who to get it out to. And the language translations were all set up well in advance. And a lot of it is context: what counts as a successful response rate for a peak body with a really highly educated, engaged workforce will be quite different from trying to get into households of people that have other priorities.
Your success metrics are going to be quite different for each of those. 1%, yeah, yeah. It varies greatly, yeah.
But rich data from that 1%, you know, and yeah. Absolutely. So today we will share with you the six most common mistakes in survey question development with some specific examples you can learn from and avoid, hopefully.
So since we are a little bit pressed for time today, we'll walk you through three of them and ask you to read the remaining three yourself. So we're returning to the double-barrelled question. The example we've got there is: how satisfied are you with the food quality and staff friendliness at this restaurant? Someone might be happy with the food but frustrated with the service, or vice versa.
So how do you actually want them to answer? The fix is really straightforward: how satisfied are you with the quality of the food? How satisfied are you with the friendliness of the staff? It's a simple fix, but this is a really typical error in survey development; you'll often find that you're conflating two things together even if you don't mean to. So really take the time to make sure you're separating out the questions.
Yeah, let's check out the second one. Given the serious traffic congestion problems in our city, do you support congestion pricing to reduce traffic? So this question actually is a typical leading question. The question assumes that there are serious problems already and implies congestion pricing is the solution.
How do we fix it? We can just break it down into logically separate questions. First, ask: how would you rate the current traffic conditions in your city? Then follow up with several questions about potential solutions, like: to what extent do you agree or disagree that this or that is a solution? These leading questions are very common.
They are written in a way that pushes people towards a certain answer. So instead of being neutral, the wording makes one answer sound more right, more expected, or more attractive. This makes people give you answers that don't reflect their real thinking at all, and you end up with biased or unreliable data, which leads to unreliable or biased findings.
Yeah, absolutely. Another example is the responses themselves. So you've got the questions, but you need to get a good spread of responses in the area you're most interested in.
For example, a scale might give you a good spread of responses for satisfaction but a rubbish spread for dissatisfaction. The obvious fix is to extend the responses on the dissatisfied end of the scale. Having a well-designed response set matters particularly where you expect responses to be clustered at one end.
For example, if you ask, does money make you happy, but your responses only include yes or no, that does not give you any new insight; you have to add more response options, especially inside the yes category. Apologies, that question is actually a bit loaded too, so you'd probably ask, how happy does money make you? Yeah, so after this, we have another slide that shows you how to make your survey look better in terms of format and style, making it easy to read, easy to answer, and easy to navigate. So take a look at this slide.
This includes using simpler, more straightforward and shorter sentences, avoiding abbreviations and jargon, and avoiding double negatives, since you are not doing logic testing. Use page breaks and add a progress bar to give your respondents a sense of control. And also consider accessibility for particular groups.
Okay, after talking about those survey questions, let's talk about testing your survey before you launch it. Think of this process like this: you wouldn't serve dinner to guests without tasting it first, right? Yeah. So we transform our draft survey questions from something that looks good on paper into something that really works in practice, through cognitive testing and pilot testing.
Cognitive testing is simply about understanding how people process your survey questions. It's conducted internally, mostly, by experts on the topic like us, or sometimes with a small group of people representative of your target audience. We use detailed examination with a technique called think-aloud.
So you essentially ask them to narrate their thought process as they read and answer each question. During the sessions, you might hear something like, when this question asks about my household income, does that include my teenage son's part-time job? Or, wait, I'm not sure, what's the difference between somewhat satisfied and moderately satisfied? Are they the same thing? So this process is really useful for identifying unclear terms or words, memory issues in recalling, sensitive topics, or confusion with questions. So yeah, the think-aloud technique, we have used that in the past.
And a great example is when we had a set of questions around wellbeing. We worked with a Whānau Ora service provider, and they brought some clients in and managed the whole process. They were there while we did it.
And they worked through those questions. And what they thought, when they read them out loud, was quite different to what we were thinking. It was a fascinating process and so invaluable.
Yeah, it's so good to blend in different perspectives in this stage, right? And then after you refine your survey internally, it's time for piloting. So it's essentially a full dress rehearsal. Piloting involves administering your survey to maybe 30 to 100 people from your target population.
The piloting process focuses more on real-world problems and survey performance data, such as completion rate, time taken to complete, response patterns in the data, as well as technical troubleshooting. So take an example here. Response patterns may show that 40% of your respondents abandon the survey at question 15.
That might be a red flag. Maybe question 15 is confusing or it's too sensitive. Or maybe the survey is simply too long at that point.
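If your pilot platform lets you export partial responses, a quick check like the sketch below (hypothetical data and column names, assuming one column per question with blanks from the point a respondent gave up) will show where people abandon the survey:

```python
import pandas as pd

# Hypothetical pilot export: one row per respondent, one column per question,
# with missing values from the point where the respondent abandoned the survey.
pilot = pd.DataFrame({
    "Q1": [4, 5, 3, 2, 4, 5],
    "Q2": [3, 4, 2, 1, 5, 4],
    "Q3": [5, None, 4, None, 3, 2],
    "Q4": [2, None, None, None, 4, 1],
})

# Share of respondents who answered each question, in questionnaire order.
reach = pilot.notna().mean()

# Drop in answer rate relative to the previous question: a big drop flags the
# point where respondents are abandoning the survey.
drop_off = reach.shift(1) - reach
print(drop_off)
```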
So this process will help you to identify those problems to further improve your survey quality. But if you have a limited time or resources, you can combine the two processes. Yeah.
And one of the things we talk about is that testing should really be with someone fresh, because there's so much assumed knowledge in those questions that you need someone with quite a different perspective. The challenge is that if you ask a colleague who sits on the same floor as you, that you talk to at lunchtime, that has a similar background, you may get a very different result than in Brendan's example, where they went outside their bubble and brought in a more diverse set of experiences. So if you're able to, putting more effort into that testing is really, really gold for making your survey better.
And then one other thing with that: we have got an AI webinar, which you can watch. No surprise, I'm a bit of a convert, but you can actually set up your AI tool to be one of your personas. So you can prompt it to be a tester for someone with low literacy or from a particular background, or whatever it is you want to do.
And while it's not perfect, it can just give you some extra insights to make sure that your wording is clear. So if you're short of time, that is one thing you can try as well. So now you've got a survey questionnaire ready to launch.
What survey platform do you choose, if you have a choice? Google Forms and Microsoft Forms are both free, easy to use and very similar in functionality. So we've got different ways of approaching this, but have you got an example? Yeah, I think free is not typically free.
Sometimes it has a price. I have an example from a client who used Google Forms to collect data: all the multiple-response data was clustered in one big column, and it took us a lot of time to recode the data into multiple binary columns suitable for analysis. So when you use a cheap tool, always think twice.
You may end up spending heaps of time on data preparation and analysis. Yeah, and if your survey is complex, which is quite common, you have to use a survey platform like SurveyMonkey or Qualtrics. If the survey is particularly complex, something like Qualtrics is a better choice.
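To illustrate the kind of recoding Bo described a moment ago, here is a minimal sketch that assumes the multi-select answers arrive as a single semicolon-separated text column (the column name and separator are made up for the example):

```python
import pandas as pd

# Assumed export shape: every option a respondent ticked crammed into one text column.
df = pd.DataFrame({
    "channels_used": ["Email; SMS", "Email", "SMS; Post; Email", ""]
})

# Split the combined column into one 0/1 indicator column per option,
# which is the shape most analysis tools expect.
binary = df["channels_used"].str.get_dummies(sep="; ")
df = pd.concat([df, binary.add_prefix("used_")], axis=1)
print(df)
```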
Another very important consideration is how best to get your survey in front of people. This is a big deal and it's the first step in sort of getting an optimal response rate, which over the 24 years I've been doing this, just keeps on dropping. It's harder and harder.
So common examples are email, network approaches (sort of like snowball sampling to push it out there), a QR code or link in a newsletter, a URL link on social media, or even a paper questionnaire. You may end up using a combination of all of these, each tailored to a specific population or stakeholder group. A good example would be an email out to stakeholders, plus a postal approach for households where you send a postcard with a QR code that scans through to your online survey, which could be linked to an address so you know where it came from and who responded.
Send a reminder postcard when they don't respond, then a paper version with a prepaid envelope at the third touch. You may supplement this with having a link to the survey on social media groups for populations who don't traditionally respond well to surveys, and especially paper surveys. So of course, sort of mixing up your data collection methods for data you've got to combine later will give you some serious statistical headaches.
There are ways and means to adjust for that, but they involve more advanced stats or some clever weighting tricks, which is a whole webinar on its own. The big four survey platforms listed also differ in the tools they have to distribute your surveys. While they all allow you to send an email to an email list, SurveyMonkey and Qualtrics have better reminder and follow-up features.
So finally, make sure you have the skills in your team to match the survey platform. If you don't have that expertise internally, then you need to either outsource all or some of the survey, or simplify the hell out of that survey. Make it easy.
And then in terms of tips for response rates, we're not going to promise some secret formula for 90%. As Brendan said, over time survey response rates are dropping, which I'm sure many of you are very familiar with. But there are some techniques that make the real difference between 300 and 3,000 responses.
And those are actual numbers from surveys that we've run, you know, one following the principles, one not. One of the key ones, we have talked about it, but it's survey design that respects respondents' time. So connecting back to Bo's point about reducing respondent burden.
If your survey looks like a wall of text, then you've lost them before they start. Things like clear headings, logical flow, progress indicators, basic stuff. And then the other thing we would add to that is minimise the open ended questions.
So each text box is a speed bump that slows completion, and it's something we have to factor into the analysis. Yeah, we always try to keep it within 10 minutes on average.
It's just a nice rule of thumb to aim for, yeah. And you can obviously test that through your piloting to make sure that 10 minutes is not just me speeding through with full context, and that someone coming at it fresh can get through it in 10 minutes as well.
Exactly, yeah. And I always put the most important questions up early. You know, it's always important to make sure that the themes of your survey questions are mutually exclusive, and to use shorter, clearer questions as much as you can.
Does that mean that when you analyse it, you can get quite distinct responses? Not two questions that are too similar, sort of thing. Yeah, we come across lots of examples where questions ask about what sound like different aspects, but in measurement terms they are quite similar. That will cause problems in the analysis as well.
Yeah, a classic example in measurement: think of response scales that go 1 to 10, then 10 to 20. Your 10 there sits in both options. You see that so many times.
Yeah, it will cause lots of problems. Yeah, you've got to make sure the options don't overlap. Each item's got to be exclusive.
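The same principle applies if you build those bands yourself after collecting an exact number. A small sketch, with illustrative values and band edges only, of exclusive, exhaustive bins:

```python
import pandas as pd

hours = pd.Series([0, 5, 10, 15, 20, 37])

# Left-closed bins [0, 10), [10, 20), [20, inf): no value can fall into two bands.
bands = pd.cut(hours, bins=[0, 10, 20, float("inf")], right=False,
               labels=["0-9", "10-19", "20+"])
print(pd.concat([hours, bands], axis=1, keys=["hours", "band"]))
```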
When you're talking about emailing, you know, really talk about strategic email wording. So skip any corporate waffle. Don't say things like your feedback is valuable to us.
You can scrap that. What you should do is use emotive, more marketing-style language, like five minutes to influence this decision, or, because you've got a personal email list, help shape our views on this thing by a specific date. So really be specific about the time commitment and the impact that you're hoping the survey will have.
When you're in an email situation, really try to have that email, the link as close to the top as you can. You don't want somebody to have to read the context and then get to the link at the bottom. You've probably lost people.
So: the key information first, a clear bullet, the survey link, and then the extra detail; if people want to keep reading, they will. Sender credibility is a massive thing. You can have the same survey with two different send-outs, one from a no-reply survey-platform address and the other from the CEO or some trusted partner.
You get a much higher response rate if it comes from that trusted partner. People sometimes delete generic survey invitations because they get so many, and spam filters are something we increasingly see you need to be careful with. They can pick up generic emails with links, for example, and think it might be phishing.
Seems obvious, but timing your sends. So understand your audience when they're likely to be active and then position your sends to go out at those times. So Friday afternoon, probably not great.
Time zones matter too. We work across Australia, the Pacific. So just being aware of who you're trying to reach and when and then optimising the time for that.
Can I just pick up on the spam filter point? I saw a survey come through just today, and shout out to the person behind the survey for ANZIA: they sent out the survey with the links in it, then waited a bit and sent out a second email with no links, explaining that the previous survey email might have been caught by your spam filter, so perhaps check there.
So that's a cool trick I hadn't seen before. Yeah, interesting. Links are a common thing that gets picked up by spam filters, which we'll talk about shortly.
The follow-up sequence really works. Initial send, very important. Position it well, but then do send the follow-ups.
But when you send a follow-up, different subject line. Don't just re-forward the same email. Shorter email.
Final one, urgent but respectful. Shorter again, different subject line. You want people to get the prompts because sometimes people do just forget, but don't send the same email and do send a shorter, more assertive one as you go.
Incentives, which Bo did talk about. They can work, but you do have to get them right. Monetary incentives can double response rates.
But be really clear about what it is. So is it complete by Friday, you get a $25 voucher, or are you going in the draw if you complete to win something? Just people can get a bit frustrated if it's not clear. So just be really, really crystal clear.
Non-monetary incentives can work. So things like early access to results or recognition, entries into a prize draw, whatever it is, but just make sure you then deliver on whatever it is you promise. And spam filters: incentives, free money.
You know, you're looking at a cocktail for getting picked up in a spam filter. So just make sure you test it. Be really, really clear.
Make sure everything's working together nicely. Platform impacts completion a lot. If it breaks on mobile, you've probably lost 60% of your respondents.
So whatever platform you choose, Brendan's just talked about some of the options. Test it. Phone, tablet, different browsers.
Do it on Android. Do it on Chrome. Do it on the other one.
Like we've had examples where it worked on, you know, the Chrome we were using, and then someone uses Internet Explorer and for some reason something didn't work. So just really do spend your time on that. And, you know, one broken question can be all it takes for someone to get frustrated, and they just never come back.
And so all of those elements, they can either help or hurt your survey. So get them all working together and you get a much, much better response, right? Yeah. So once you have collected your responses, here comes the exciting or boring part.
That's turning those responses into insights. So we have two processes, data cleaning and data analysis. To properly clean and prepare your raw data is a technical issue.
We normally check for incomplete responses, inconsistent responses, duplicate entries, outliers like extreme values, and invalid responses such as, you know, a speedy or careless survey taker. For example, if your survey takes about 10 minutes to complete and someone finishes within 16 seconds, you need to specifically look at those responses to see whether they are just replying in an unreliable way. An example of a careless respondent is giving the same response, like 1-1-1 or 5-5-5, throughout the scale questions.
That's called straight-lining, or patterned responding, like 1-2-3-4-5 repeated down the rating scales. You need to double-check for this. Remember the garbage in, garbage out rule.
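A minimal sketch of those two checks, with made-up column names and an illustrative speed threshold rather than a rule:

```python
import pandas as pd

# Hypothetical export: completion time in seconds plus a block of 1-5 rating items.
df = pd.DataFrame({
    "duration_seconds": [720, 16, 540, 610],
    "rating_q1": [4, 5, 3, 5],
    "rating_q2": [3, 5, 4, 5],
    "rating_q3": [4, 5, 2, 5],
})
rating_cols = [c for c in df.columns if c.startswith("rating_")]

# Flag speeders: finished implausibly fast for a roughly 10-minute survey.
df["flag_speeder"] = df["duration_seconds"] < 120

# Flag straight-lining: the same answer given across every rating item.
df["flag_straightline"] = df[rating_cols].nunique(axis=1) == 1

# Review flagged rows by hand before deciding whether to exclude them.
print(df[df["flag_speeder"] | df["flag_straightline"]])
```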
I think Brendan has some deep cleaning knowledge to share with us. Yeah, clearly you want to spend that time cleaning the data before you analyse it, because it's going to mess with the analysis otherwise. I mentioned earlier about mixing up your different sample frames.
There are tricks to deal with that. You can either analyse them separately so you don't mix them up, or we can use some really flash tricks to join them up. And weighting is another way to try and compensate for low response rates from particular populations.
There's a tyranny-of-the-majority problem here. So you might want to lift the voices of those who responded at lower rates and down-weight those who responded in large numbers. I think weighting and those bias adjustments can be quite useful.
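As one simple illustration of that kind of adjustment, here is a sketch of post-stratification weighting against known population shares. The group names and percentages are invented for the example, and real designs often weight on several variables at once:

```python
import pandas as pd

# Hypothetical respondents and known population shares for one grouping variable.
responses = pd.DataFrame({"age_group": ["18-34"] * 10 + ["35-64"] * 60 + ["65+"] * 30})
population_share = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}

sample_share = responses["age_group"].value_counts(normalize=True)

# Post-stratification weight = population share / sample share, so
# under-represented groups are weighted up and over-represented groups down.
weights = {g: population_share[g] / sample_share[g] for g in population_share}
responses["weight"] = responses["age_group"].map(weights)

print(responses.groupby("age_group")["weight"].first().round(2))
```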
But maybe you can only save part of your bad data. If your data really sucks overall, weighting cannot save it. We cannot be a full saviour.
We can partly save it. I don't think we have time to elaborate on analysis, but here we provide a table to offer some guidance. The short version of the tip is to link your data type to your research purpose.
But you don't have to become a statistician overnight; consider partnering with someone who has the expertise, like us. And finally, that was a very quick run-through, but we do want to acknowledge that survey methodology is a very deep field. Given our time constraints today, we can't cover more in-depth content such as analysis, sampling techniques, error and bias.
We do have a little bit of time for some Q&A, so we'll jump through to that now. First question: Dion has asked, how do you piggyback off other people's surveys, e.g. questions? Our sector has so many surveys and people are fatigued answering them.
Is there another option other than surveys that gets out to loads of people in diverse places? One way is, just to go back to my discussion about the theory of change, to be really clear on the underpinnings of the survey. One other thing to note is keeping a direct line of sight to the user experience and keeping the survey short; there's always a natural desire to put more questions in. This could be the only time of the year that the organisation is in direct contact with this group, but doing the groundwork and being able to explain why you are doing the survey and what you hope to achieve can be a defence against the dreaded "what else can we add, what about a question on topic Y?", which is how you get that creep into a less focused, less user-friendly survey.
One thing we do find as external consultants is that going to battle with other parts of the organisation to hold the sanctity of this particular survey can be one benefit. We know there's always going to be pressure for people to get their survey question in, because it might be the only time they get in front of that population. And if there's another organisation running a survey, you can talk to them about adding a link to your survey to theirs, or some other arrangement.
We've got a question here from Kelly. We've had clients push back about receiving survey requests via SMS. What method would you recommend for sending out surveys without drastically reducing the response rate? Anyone want to jump on that one? I think it's about a combination of two elements here.
The first is about the selection of your survey dissemination channel, and the second is about knowing your audience so that you can send more tailor-made messages to increase the survey response rate. Yeah, it might be via social media in that case. Obviously, SMS is being used for a reason, because it's the only contact you have.
So again, it's knowing your audience and how else you could contact them. It's always a better solution to mix up the different methods of data collection. You can send text messages, you can send paper-based surveys, and you can also use online tools. You do have to control for duplicates, so you might have a question, have you done the survey before? That's the simplest way you could do it, and then if they've done it before, you just exclude them.
You can do the magic of finding all those duplicates later. And maybe sequence them so they don't all get tangled up. It's always harder to deal with nothing; you'd rather have more responses to work through. So Margarita has asked, how secure are these survey platforms, especially if collecting customers' personal information? Oh, well, yeah. Google Forms is rubbish.
SurveyMonkey is a bit rubbish too. With Microsoft Forms, you can specify, but it does distribute the data around the place, which is a bit rubbish.
With Qualtrics, you can specify, and in Australia, that's where their servers are. I think it's an ongoing problem.
I mean, they're building data centres everywhere, so hopefully they'll get better at this stuff. Yeah, I think with data safety there are maybe two elements to consider.
First is about data storage. I think Qualtrics generally is safer. The other thing is about data transfer or data collection.
You need to ask this question: do we need to collect so much personal information from the clients or customers? And if we need to collect it, do we need to report it? There is a difference between an anonymous survey and a survey that protects your personal data. So just some further thoughts on that. Yeah.
Hannah has asked, how can survey questions be updated or refined from year to year while still allowing for comparative analysis? Oh, can I take this one? I've done a lot of longitudinal surveys and that's always a challenge. You might keep a core question for a long time, but at some point it becomes less useful.
So you might have to drop it out. You always have to test whether it's still useful. There is something called a crosswalk.
So there might be questions where you add in a similar question that works better as knowledge changes or people behave differently, and then you can tweak the responses and the scale so they are equivalent. Yeah, I think you can make some minor changes that will not affect the overall comparison in general.
But when we talk about the measurement itself, there can be structural and non-structural change. If there are structural changes to your survey, that will make it a little bit hard to do year-on-year comparisons, because the survey questions are so different. But if it's just minor changes to the wording or the scope, you can add some methodological caveats to your comparison.
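One common crosswalk trick, sketched here under assumed scales rather than as a universal fix, is to put the old and new items on the same 0-100 footing so the trend stays broadly comparable, with the caveats Bo mentions:

```python
def rescale_to_0_100(value, scale_min, scale_max):
    """Linearly map a response onto 0-100 so items asked on different scales
    (for example an old 1-5 item and a new 1-7 item) can sit on one trend line."""
    return (value - scale_min) / (scale_max - scale_min) * 100

old_item = rescale_to_0_100(4, 1, 5)   # 4 on the old 1-5 scale -> 75.0
new_item = rescale_to_0_100(6, 1, 7)   # 6 on the new 1-7 scale -> about 83.3
print(round(old_item, 1), round(new_item, 1))
```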
You always have to be careful. Yeah, you need to be careful. But sometimes you have to do it.
Sometimes people just behave differently, so you're not measuring the same thing anymore. Yeah, yeah.
There are ways to do it, but do it very deliberately and make sure you're building in the proper pathway to get there. Rula has asked, please discuss, in the context of local government commissioning a supplier to run surveys, what you consider in the survey design and in managing the ongoing relationship. I mean, that's something I get involved with quite a bit in the commissioning phase, but really running a workshop to try and understand what you're trying to achieve is that first step.
So that can be done quickly, as Brendan noted. But the more time you spend at that stage, the better information that partner is going to have and the better they're going to be able to design a survey that meets your needs. And the other key consideration is that the earlier you can bring someone in, the better it's going to go.
So even if you talk about, you know, before RFP stage, do an RFI, ask about the types of people that are going to be around in your region to run that survey. Meet with them, find out who you're going to mesh with and not, how they process information, how they partner with you. And you'll just get a better idea about what the options are before you then take the formal approach to market.
So that's always something that we'd recommend. And then for ongoing relationships, there's a range of factors, everything from they'll just do the survey through to they'll work with you within the team. So it really is just horses for courses.
So have a discussion; feeling empowered to do something like an RFI and have a discussion with the potential suppliers is something that we'd always recommend. And Dave has asked, how do we survey people with limited literacy and people with high suspicion of surveys? Does anyone want to grab that one? Yeah, I think that is a typical question related to my earlier point about the audience. You need to make your audience aware of your survey and the importance of taking it.
It's quite like a marketing technique. That's one side of the story, but we might also consider using a mix of online surveys, paper-based surveys with a survey administrator, and maybe translated or accessible versions of the survey, for full coverage of people from different literacy levels and backgrounds. And for the second part, for those who are sceptical about your survey, if your messaging cannot win them over, I guess they will just be in the category of non-respondents.
But please be aware that non-response bias exists across all types of survey, even if your survey is a good-quality one. And then you may need to be ready to go and talk to some people instead. So it's a different kind of insight.
So it won't be through your survey, but you get their voice through other means. Focus groups or interviews, right? Yeah. And there are things like easy-read versions; you can partner with people who can facilitate simplifying your survey into pictographs and those sorts of things.
Or just asking really simple questions is often the other option. We've got a question here from Deepak: how do you balance a survey that has multiple methodologies, such as online, paper and focus group interviews, to restrict someone who wants to overly influence the survey? For instance, online versus paper versus panel.
Yeah, I think that relates to our data cleaning process. If you can recall what we said earlier, one side of the data cleaning is to identify duplicate entries. I know that people may want to increase their weight by voting multiple times, but we have rigorous approaches to identify those, both within a channel and across different channels.
And for those different entries, we may compare their time of completion and their status of completion, whether it's complete or not. You know, somebody may enter once, realise it's not quite complete yet, and then re-enter and resubmit a more thorough, more complete version.
We'll compare those metrics and then delete the redundant ones to leave only one entry. If you were worried about someone or a group trying to influence your process, there are ways to try and preempt that. It's a bit more technical, but again, if you think that's an issue, it's something you might want to have a discussion about.
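A minimal sketch of that deduplication rule, using hypothetical column names and assuming you have an identifier such as an email address plus a completion flag and a timestamp:

```python
import pandas as pd

# Hypothetical combined export from several channels.
df = pd.DataFrame({
    "email": ["a@example.nz", "a@example.nz", "b@example.nz"],
    "is_complete": [False, True, True],
    "submitted_at": ["2024-05-01 09:00", "2024-05-01 10:30", "2024-05-02 14:00"],
})
df["submitted_at"] = pd.to_datetime(df["submitted_at"])

# Keep one entry per respondent: the most complete submission,
# breaking ties by the most recent timestamp.
deduped = (df.sort_values(["is_complete", "submitted_at"], ascending=[False, False])
             .drop_duplicates(subset="email", keep="first"))
print(deduped)
```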
It's indeed a technical issue. I just want to say a bit more about the different data types. If it's categorical data, we normally just keep one set of answers.
But if there are, say, three text responses and the person is quite keen to express their opinions, they may submit multiple times with slightly different wording. Our approach for qualitative data is to merge those under one name. Yeah, and it's more of a problem with smaller surveys; if you've got a big enough survey, that's the kind of noise that washes out in the results.
And it also comes back to the risk of over-emphasising quantitative things, or the tyranny of the majority you talked about. You can often get processes where there's one large group responding, and that doesn't necessarily mean theirs is the most interesting insight. But I think it's just being really clear about what you're doing.
And if you're going to weight different things differently, just be very, very transparent. And there are ways you can show the breadth of opinion without getting too bogged down in the specific numbers, for example. And the focus group would produce different kinds of data anyway.
So you'd sort of complement each other, I suspect. Yeah. Question from Lee.
Are there any Whānau Ora service satisfaction survey templates that are designed to be culturally appropriate and reflect whānau-centred values? Speaking of cultural appropriateness, my recommendation is that you don't seek templates to begin with. You seek opinion from people, from the cohort or the community organisations, or even the public, about cultural appropriateness. You may set up a process of cultural review of the questions.
Because some questions may read normally if you ask me or Brendan to read them. But if, for example, you are sending this to a different group of people, say an Asian community (I am Asian myself), a question may sound normal on the surface, but some of the terms may sound stigmatising in that culture.
So you need to work with the communities rather than work with templates. That's our first suggestion. Second one is the whānau-centred approach.
Maybe Brendan has something to say here. It's difficult, because often people may be responding on behalf of a group or another person. So it's understanding, again, which might go back to that cognitive testing.
So you might develop your own measures. If you are using a templated one, you'd have to test it and see who they are responding on behalf of. Is it them personally? Are they thinking about a group, a whānau? Whānau whānui? What are they thinking about when they answer that question? It's not easy, but you do have to do that work up front. I think all this links back to the first question in our step one: you need to define your problems and your research questions properly, and then link them to a culturally appropriate or whānau-centred design.
Templates can be helpful, but please use them with caution. Sometimes those satisfaction surveys are linked to Health New Zealand funding or something similar, so there might be reasons why someone's trying to use a template, but there are always going to be trade-offs from using one.
Question from Hong, what is the recommended survey completion time? Some surveys require up to 20 minutes to complete, which is too much for participants. Bo touched on this earlier, but 10 minutes is a rule of thumb, especially if it's a member of the public who's not really involved in your process. But there might be times, Bo talked about workforce wellbeing surveys, they might be 45 minutes and really detailed to get really deep analysis.
So it really depends on what you're doing, who your audience is. Yeah, it varies so greatly. And I know that we have come across some technical surveys that take about one or two hours to complete.
So generally, I think this links back to your key research questions and to some of the terms we used earlier: avoid jargon, and make sure the themes you want to measure cover the research areas without overlapping too much, so that you can reduce the number of questions and therefore the completion time. I think a long survey is fine if people are really into it: if there's a population that really wants you to know this stuff, they'll dig in.
But if it's a group that's like, eh, another survey, you can't do 20 minutes. We have done them before; with Qualtrics, for example, you can have a branching option: do you want to do the basic version or the full in-depth one, with the warning that the in-depth one will ask much more detailed questions. One other thing is that with more technical subject matter, there's often a tension between a long technical survey and someone just sending you an email anyway.
And so I think, again, it goes back to understanding who's likely to respond. Like peak bodies don't like doing surveys. They like to have a process where their chair or their board or whoever can sort of review the response and then send it off.
If it's in a survey format, they're not all going to sit around a computer together and click through. And so there's often that tension: surveys can be excellent for gathering really strong technical information, but it can sometimes be a trade-off.
If it's going to be a really long survey, they might not answer. Peak bodies or other groups may not like to do surveys by default. And so then you end up with multiple types of information anyway.
So again, horses for courses: have a really long, hard think about what you're trying to ask and why, and whether there might be multiple approaches. Yeah, you might have a small set of core questions that cuts across all those really key bits.
You get a whole bigger picture, but then each would be more specific. Yeah. I've got a question here from Juta.
Do you recommend asking for name, title, et cetera, at the beginning or end of a survey? And what do you see as the advantages or disadvantages of both options? I think it depends what you're going to do with the name or title. If you have to have it to link data across longitudinal waves or in some other way, or to check that the right people are responding, you might ask at the beginning.
If it's some kind of feedback, or it's not as important, you probably want to put it at the end, because some people might not be happy giving their name; we just leave it so they can skip it if they want to. And as you say, we often get the question: is it a required field, where you have to give us your name, or is it optional? You get quite different responses either way.
We have had surveys where you have to give us your name because there's a requirement for it for some reason. If you have to collect the name, we recommend collecting it at the beginning. If you need demographic information, for example for subgroup analysis, collect it at the beginning too.
I still remember when I was an undergraduate, I read a golden rule in some textbook saying, always collect the demographic information at the end of the survey. But in reality, it doesn't always apply perfectly. Sometimes collecting it early links better to your KEQs and the analysis methods you want to use.
Here in Aotearoa, we often want to do the subgroup analysis. It's important. It's very important.
Yeah. Cohort analysis. And if you have it at the end, you might miss out on some quite rich partial responses, for example, that you don't know where they fit.
If you don't collect it, you don't have it; that becomes non-response bias, and then you have to report how many didn't provide it.
That can cause more problems than any sensitivity around asking those questions at the beginning. And you just have to be aware that you may well miss out on respondents by asking for name and title, because people may not trust you with that information. That just has to factor into your discussions.
And then, I mean, do you have alternative ways as well? But it's always a trade-off. I remember we came across an example in a public consultation where we needed to use the name and email to validate the identity of a submitter, to check whether they were genuine or just a fictitious name or random input.
So that is maybe another point to consider. And that links to the earlier question about people trying to influence surveys. So again, you'll lose some people, sure, but you make sure that your respondents are genuine.
Question here from Claudia. What percentage of time do you spend on survey design versus cleaning versus analysis, on average? Oh, so there's the 20% for the upfront stuff. Yeah, it's a little bit hard to give an accurate estimate.
But I think, yeah, I agree. Maybe 20% to 30% on survey design. The data cleaning actually varies a lot.
Remember, we started with the example of very unclean data that needed lots of recoding work. That takes a lot of time, maybe 20% to 30%, even 50%, of the time. But normally it would take 10% to 20% of the time.
If it's a really, really complicated survey with multiple methods that you've got to join up, there's so much more cleaning to do, so you'd assign a lot more time at that point. And there's a step between survey design and cleaning, the survey data collection and data monitoring, that also takes about 10% to 20% of the time.
Yeah. And when you say data cleaning, it's not just culling out incomplete responses; it's a much more thorough process than that. It's about getting the cream out of your coffee.
Yeah. It's probably a good place to wrap up. So thank you all so much for staying on and for all your thoughtful questions.
This is the kind of discussion that makes these sessions really valuable for everyone. So thanks for joining us. We really appreciate it.
Have a great rest of your day. And we'll see you... Jargon bingo. Oh, hopefully we've got a winner.
I'm informed that we have a winner for jargon bingo. Excellent. Yeah.
Did we even say merging probabilistic and non-probabilistic samples? I don't think we did, yeah. We need to get that in there. Next time, look out for that one.
All right. Thanks very much, everyone. And we will see you again.