Published on 30 Oct 2025

AI Toolkit: Creating 4 Personalised AI Tools

55-minute watch
Adam Emirali, Group Marketing Director
Jeremy Markham, Senior Consultant

You’ve watched our previous webinar on Getting Started with AI and have been practising the core principles, but you’re still wrestling with over-enthusiastic generic responses, time-consuming prompt writing, and concerns about data security. There’s a better way.

This action-packed session shows you how to build four personalised AI tools that will completely transform your AI experience. Just like our Getting Started session, we won’t bore you with theory – we’ll cut straight to showing you exactly how to set these tools up so you can become truly AI-assisted at home or at work.

What you’ll see demonstrated:

  • Personal Prompt Agent: Watch us build a custom AI assistant that understands your context and automatically transforms a simple prompt into a comprehensive one that gets you closer to your desired output in one go.
  • AI Data That Supercharges Your Outputs: See five methods for generating realistic contextual data and witness how it supercharges AI outputs without exposing sensitive information.
  • Virtual Expert Board: Watch the creation of a five-member advisory board featuring global experts and see how they can advise on and critique your work to improve quality dramatically.
  • Voice AI for Strategic Thinking: Learn the setup process and behavioural changes that make AI voice conversations genuinely productive anywhere – from morning walks to airport delays.


You’ll also get the sample data we use so you can create your own versions in minutes. For in-depth tutorials on each tool, click the bullet points above.

Webinar Transcript


Kia ora koutou. My name is Jeremy Markham, Senior Consultant at Allen & Clarke, and I'd like to warmly welcome you today to our webinar, AI Toolkit, Creating Four Personalised Tools. I'm joined by Allen & Clarke's resident AI expert, Adam Emirali, who'll be taking us through some of the tools.

Give us a wave, Adam. Yeah, there he is. Just in case anyone was confused.

Anyway, before I do that though, just a little bit about us, Allen & Clarke. We're an Australasian-based consultancy, really focused on making a positive impact on communities, not only across Aotearoa, but also Australia and the Pacific. We specialise in strategy, change, policy, regulation, research and evaluation, and more.

We really give a damn about empowering our clients to make a big difference and overcome society's biggest challenges, which is why we regularly run these webinars and create desk guides and so forth, and we do that whenever we can. Anyway, on to today's topic, right? So whenever I think about AI, I get a little bit torn because I'm kind of hearing two narratives come through. On the one hand, I'm hearing people say, it's amazing, it's wonderful, it's the future, you better be careful because if you don't get on this train now, you're going to be left behind tomorrow.

It's really important that you do that. And then you've also got this other sort of dissenting voice saying, actually, we think it's been a bit overhyped. We're still worried about some of the sloppy outputs, hallucinations, and actually also a concern about losing, I guess, some of the, you might call it the friction, that happens when people engage people-to-people, right? And so if you are like me, you're having to make a decision about, OK, how am I going to respond to the praise and the criticisms and navigate my way through this? And I know myself, as I've cautiously, tentatively tried to use some of the tools, sometimes I've kind of hit the jackpot.

And sometimes it's sort of, you know, a slot machine: give it a pull and maybe you're lucky. Yes, I've got the slot machine, right? Yeah. Anyway, but with that, even when I get outputs I don't like, I'm going, actually, I must be missing a trick here.

Like, there's got to be a better way to do this so that I get the efficiency improvements, but I don't have any of the sacrifices or the bad stuff we're trying to avoid. So if that is you, if you're at all like that, this webinar is going to be absolutely for you. So we're going to get into it and hopefully going to give you some really practical tips.

With that said, though, I want to give my friend here, Adam, an opportunity. Tell us a little bit about yourself and your journey with AI. Yeah, thanks, Jeremy.

Kia ora and hello to everybody. Look, as Jeremy said, I've been kind of leading AI here at ANC for a little while now. And my experience with AI started way, way back in late 2022, which is actually not that long ago.

With the launch of ChatGPT, I started using it, loved what I was seeing, and since then have kind of gone deeper and deeper into this space, building up a network of people to chat AI with, but also, you know, practising prompts, building workflows and things like that. And really, for me, like, I don't claim to be an AI expert in building models or model training or any of that sort of stuff. For me, it's more about helping humans interact with this technology, and how we as humans can get the most out of it.

And obviously, the most out of it is evolving as the tools evolve as well. So that's kind of my area of interest. So that's what I kind of help ANC do.

Yeah, that's awesome. And yeah, things are moving so fast. But I know I've definitely appreciated having you around.

Really appreciate it and the advice you've given me around it. Anyway, so you might be wondering, four tools, what are they? So we're going to choose four tools that are really focused on what we think are four of the biggest problems that users seem to face. Right.

So the first problem that people talk about is the quality of outputs not being very good, right? This is where that term, you know, AI slop, comes in, right? And to address that, we're going to talk about creating something called a prompt agent. You're going to explain that in a little bit. But fundamentally, it takes something really simple, a simple prompt, and it turns it into something more detailed to get you a better output.

The next common problem we're going to talk about is the lack of accuracy, detail, and nuance that we get in responses, right? So we're going to tell you how you can provide some of that additional context without compromising or doing anything inappropriate with your own sensitive data. So that's going to be good. And the third one, which I'm particularly looking forward to, sometimes people talk about AI being a hype machine that just sort of is, you know, sort of reinforcing your own ideas and telling you you're amazing.

But we're going to address that and tell you how you can build a virtual panel of experts that can really challenge your own thinking and biases and so forth and also support ideation. So that one's going to be really cool, I know that. And finally, many of us, if you're like me, know that some of your best ideas actually come not when you're in the office, but when you're out and about doing other things, and sometimes you have a moment and you sort of forget what that idea was.

One of the cool things you can do is use AI when you're out and about to work things through. So we're going to talk a little bit about using AI voice. Now, as we get started, just want to say that if you're really unfamiliar with AI, we do have another webinar as well called Getting Started with AI, which I really enjoyed and definitely recommend.

So check that out. We've kind of designed this for people that are maybe a little bit further and have tried to dabble with some of the tools, but are like me kind of going, surely we could do a little bit better. Yeah, that's it.

Cool, perfect, perfect. And we do want to engage with you all, we do want to hear what you think. You've heard just now about four problems, so you should see pop up on your screen a little poll that we've got.

So what we want you to do is we want you to select which one of these four would you say is your biggest problem. So number one, poor quality of outputs. Number two, low accuracy, detail and nuance.

Three, AI being overly positive or a hype machine. And four, it just doesn't enhance your productivity. So we're going to give you about 30 seconds to quickly answer that.

And while we do that, I just want to throw to Adam, what do you think? Yeah, so well, we had a little bet, didn't we? And so my view is that poor quality of outputs is probably going to be the top one. My two cents on that is most people are experimenting with AI, they're putting in a simple prompt, and they're quite dissatisfied with the output that they receive. And so I think that's probably that one, or perhaps the hype machine one.

So let's check it out. What was yours? Yeah, I was a bit torn actually. I kind of thought actually that we would have really strong representation across all of them.

Yeah, so you're on the fence. But I can see the results. And actually, and this is brilliant, the top one that came out, at about 47%, is actually number two, the low accuracy, detail and nuance.

So number one was a very strong contender there, but actually the top one was that low accuracy. And I think for you to address that, we've got something really cool today. Yeah, absolutely.

Yeah. So for each one of these tools, there's literally a step by step video that shows you exactly where to click, what to do, where to put the contextual data. Some of them include like templated prompts that you should use.

And so it shows you how to do basically everything. So that will come out afterwards. So you can have two screens up and on one screen, you can kind of pause the video while you do it.

It's going to be going to be great. So don't stress about the pace that we are going at. So let's start with the first one, prompt agent.

That sounds incredibly jargony to me. So in plain language, what is it and how can it benefit us? Yeah. So you'll all be aware that a prompt is effectively the instructions that you give your AI tool.

And a prompt agent is essentially an assistant that takes your simple prompt and turns it into a detailed kind of complex prompt that is structured in a certain way to achieve the best results for the task that you're trying to achieve. So, you know, there's a default kind of prompt style that we prefer that's aimed at writing tasks typically. But there are different styles if you're prompting for image or video or, you know, these sorts of things.

So instead of you as the user having to kind of become a prompt expert yourself, a prompt agent actually does that for you. That's pretty handy. Yeah.

Yeah. So I just want to be clear, right? So what we're saying is that the quality of your output is going to be dependent on the prompt. But there's quite a lot of effort or work that ordinarily needs to be done to create a sophisticated or tailored prompt.

Yes. Which takes time, which obviously isn't great. But you could use this thing.

So you do a bit of setup work and then it means over and over again, you can use this prompt agent and you just need to put in what, like a one or two sentence mini prompt. It's going to then take that, make it into a bigger prompt. And then you take that bigger prompt and then you run that and then get a better response.

Absolutely. Yeah. So a lot of you will have experienced, you know, you drop a simple prompt in and then what ends up happening is you get a basic response and then you end up going kind of backwards and forwards and trying to iterate on it.

The prompt agent kind of shortcuts that. So you'll get a much better output in the first pass and then you still have to, you know, refine it. But it just gives you that much better output in the first pass.

Awesome. Awesome. Let's have a look.

We've actually got a video we're going to play for you. So let's just play that now. Yeah.

OK. This is a quick demo of the prompt agent in action. We're just in Claude here.

It doesn't matter if you're in Copilot, ChatGPT etc. The prompt agent works the same. And we're just going to run this basic prompt, which is summarise the strategic planning process.

So we'll just whack go on that. That's our simple prompt. And what we expect to see here is a relatively simple output from Claude that details the strategic planning process.

So here it goes. And it's detailing the strategic planning process kind of in front of our eyes. So it's not going into a great level of detail, but, you know, it's a decent summary of the strategic planning process.

Now, we're going to jump over to the prompt agent and we're going to chuck in our original simple prompt and we'll press go. So here is a detailed prompt being produced. And, as expected, it's produced a structured prompt.

It's selected a persona. It's detailed the context we're operating in. It's outlined what the task is and what the objective of the output is and also the structure of the output that we're looking for.

So we just take that prompt and we're going to chuck it into this window here. And what we're expecting to see is if we jump back to our original output. So there is our simple prompt.

Here is our simple output. Now that we've put our simple prompt through the prompt agent and got this detailed prompt here, what we're expecting to see is Claude to really start diving into the strategic planning process and give us a lot more information. So you can see here, first of all, Claude's actually opened an artefact, which is this window on the right, because we've specified what the output is.

So the output here is a Word-ready strategic planning process summary. So because this is in an artefact, it's Word-ready. We can literally just download it directly into Microsoft Word and edit it ourselves.

Now you can see Claude's doing quite a bit of work detailing out the strategic planning process. We've even got phases. We won't keep watching it generate, but you can see here the power of and the difference between a simple prompt and a detailed prompt and the impact that that has on your outputs.

That's the power of a prompt agent. Back to you, Adam, in the studio. Nice, yeah, there's a good video.
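The four-part structure the demo walks through (persona, context, task, output format) can be sketched as a simple template. This is a minimal Python illustration only: the section labels come from the demo, but the function, its name, and the wording are our own invention, not the actual prompt agent shown in the video.

```python
def expand_prompt(simple_prompt: str, persona: str, context: str, output_format: str) -> str:
    """Turn a one-line prompt into a structured, detailed prompt.

    Mirrors the four sections the demo shows the prompt agent producing:
    persona, context, task, and the structure of the output.
    """
    return "\n\n".join([
        f"Persona: You are {persona}.",
        f"Context: {context}",
        f"Task: {simple_prompt}",
        f"Output: {output_format}",
    ])

# Example: the simple prompt from the demo, expanded with hypothetical details.
detailed = expand_prompt(
    simple_prompt="Summarise the strategic planning process.",
    persona="an experienced strategy consultant",
    context="Writing for managers who are new to strategic planning.",
    output_format="A Word-ready summary with clear phase headings.",
)
print(detailed)
```

In practice the prompt agent itself chooses the persona, context and output structure for you; the sketch just makes visible what "simple prompt in, detailed prompt out" means.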

Really good introduction. As I said, though, lots more information is going to be accessible. You can see the extended video, I think, and other resources as well.

So yeah, all the resources there for you to set it up for yourself. Quick question though, right? So I'm conscious that you could have a false sense of security, right? Because you've got such a big prompt and it could look really good. I know in the past I manually created a prompt where I tried to be really, really clear and say it's really important that you check the sources and provide links and do all this kind of stuff, thinking that that was going to make sure the system avoided hallucinations and so forth.

But when I ran it, at first read, I thought, dang, this is awesome. It's exactly what I need. But then when I did the fact checking, it all kind of fell to bits as some of the stuff was made up, unfortunately.

So no matter how good your prompt is, you've always got to do QA, quality assurance. Do you have any tips about doing QA? Absolutely. So the first thing to say is that there's quite a lot of research out there that shows that you can reduce hallucinations by increasing the level of context that you provide.

So that doesn't mean that it removes hallucinations, but increasing the context does help reduce hallucinations. Also, as the models are getting better, the hallucination rate is also coming down. So back in the ChatGPT-3 days, it was really, really bad.

A couple of tips for you just quickly. So try adjusting your prompt to require more detail on the sources. So instead of just asking for the sources, ask for direct links.

Ask for the page number and maybe even the paragraph number on that page of those quotes or whatever it might be. And what you'll find is that A, that makes your QA way faster. But also B, the models are getting better at being like, oh, hang on, I can't find a page number for this.

Oh, that's because it's not actually real. So that's one quick tip. A couple of other super quick ones for our listeners.
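The source-detail tip above (ask for direct links, page numbers, even paragraph numbers) can be baked into a reusable snippet so you don't retype it. A hedged Python sketch; the suffix wording is our own illustration, not phrasing from the webinar:

```python
# An illustrative QA suffix you might append to any research prompt.
QA_SUFFIX = (
    "For every factual claim, cite the source as a direct link, "
    "with the page number and, where possible, the paragraph number. "
    "If you cannot find a page number, say so explicitly rather than guessing."
)

def with_qa_checks(prompt: str) -> str:
    """Append source-detail requirements to a prompt to speed up QA."""
    return f"{prompt}\n\n{QA_SUFFIX}"

print(with_qa_checks("Summarise recent research on synthetic data."))
```

The same idea works as a saved text snippet outside of code; the point is that demanding verifiable source detail both speeds up your fact-checking and nudges the model to flag sources it cannot actually locate.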

So for summarisation, you should use tools or features that are specifically designed for context grounding, which basically means the model focusses on the context that you give it. An example of context grounding is NotebookLM. It's a Google product.

Again, that's purely designed to focus on the context you provide. You can get similar results though in Copilot or ChatGPT. So in Copilot, I'd use the Notebook feature if it's related to a project.

We've got a little bit more information on the difference between Notebook and Agents in Copilot because there is a difference. And it's the projects feature in ChatGPT. And then lastly, the right tool for the task.

So there are tools that are specifically trained for research-based tasks. So if you're doing web-based research, I'd really recommend using Perplexity. The hallucination rate is way lower on Perplexity for web-based search.

So I definitely recommend that. And then for academic research, we use Elicit. Again, you can Google these things.

Those are my main tips. Yeah, no, that's awesome. And we will share a list.

So you've done some thinking in your brain to figure out what the right thing is, the right tool for the job. And we've got a resource on that. So people can have a look at that.

That's in the slide pack. So people can download that. It's my top eight tools, what to use them for, that sort of stuff.

Yeah. And is there anything specific people need to know about the setup? Well, definitely. So today, what you'll see, the tools that we're using, they all use quite a lot of context.

And so what I really recommend people do is use the agents feature or notebooks in Copilot, but also projects in ChatGPT and Claude. Now, people are probably wondering, what on earth are you talking about? These are specific little features within these various tools. Again, in the stuff that we'll send out, there's a little video on how to set up an agent in Copilot.

And it just helps you with that context grounding. Yeah, OK. And so people will be able to see in the video where in the user interface, you know, agent project.

And it's the same thing in all the how to set up videos. There's kind of a Copilot version and a ChatGPT version. So you should be covered.

Awesome. OK, cool. So that's PromptAgent.

Do check it out. There is more to get your head around, but digest it. It's great.

To the second thing, which was our number one thing. And this takes me a little bit more time to get my head around, I know. But a lot of knowledge work, as you would know, revolves around analysing specific issues which have a lot of really contextual background.

So unless you are able to provide that deep context, it feels like it's likely that whatever outputs I get are going to miss some really important nuance. It feels like a missed opportunity, but then I'm really cautious about my data and making sure I'm not putting things in the wrong place. But you believe that there's actually something we can do to help get around that problem.

Yeah, definitely. So if your organisation has an enterprise-level product, for example Copilot, then you can use your context within SharePoint or OneDrive. There's heaps of demonstrations of how to do that.

But often the main challenge that people have at the moment, right now, is that they're not able to use their kind of real data for privacy reasons or whatever it might be. And that's where synthetic data comes in. So I define synthetic data as kind of any data that mirrors or is similar to your real stuff.

So that could be AI generated, which is quite meta. Or it could be content that is kind of found on the internet that is similar to your context or might apply to your context. Or it might be like templated data that you have within your system.

So you can kind of set up dummy data yourself, you know, based on the real information that you have. So these are all kind of examples of synthetic data. Cool.
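As a tiny illustration of the "dummy data you set up yourself" option, here is a hedged Python sketch that builds fictional stakeholder records whose shape mirrors real data while the content stays invented. All field names and values here are our own examples, loosely echoing the stakeholder types mentioned later in the demo:

```python
import csv
import io
import random

# Invented categories: the point is that the *structure* mirrors your
# real data while every individual value remains fictional.
random.seed(42)  # reproducible output for the example
GROUPS = ["iwi/hapu", "regional council", "industry", "community group"]
CONCERNS = ["pollution legacy", "consultation timing", "funding"]

def make_synthetic_stakeholders(n: int) -> str:
    """Return CSV text containing n fictional stakeholder records."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["name", "group", "interest_level", "key_concern"])
    for i in range(1, n + 1):
        writer.writerow([
            f"Stakeholder {i}",              # placeholder name, never a real one
            random.choice(GROUPS),
            random.choice(["high", "medium", "low"]),
            random.choice(CONCERNS),
        ])
    return buf.getvalue()

print(make_synthetic_stakeholders(3))
```

You would then upload the generated file as context instead of the real stakeholder register, getting the depth benefits shown in the demo without exposing sensitive information.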

Okay. I think I need to look at this demo video. Yeah.

So I think the main thing, so we'll play the demo video in a second. But the main thing here is there are ways to create synthetic data. So we'll come to that in a second.

But what the video is going to show you, Jeremy, is the impact of using context versus no context. Brilliant. Okay.

Let's play it. Hi, everyone. This is just a quick demo to show you the power of context and why even synthetic data is really useful when you're working with your AI assistant.

So I've run this prompt here and it's generated this report here for a fake stakeholder engagement plan. We're just going to jump down to the project background. As you look at this, you do get the sense that it is quite generic and quite high level, sort of the level of content you would expect from a more junior analyst.

We'll just jump to the other version. So this version is the one where there was context. So we loaded some synthetic contextual data into this version.

And let's just scroll down to the project background area. You can see there's a lot more detail in this project background. It's got things about the pollution legacy.

It talks about nine power stations. Again, we're just getting much richer information. I'm just going to quickly highlight one other example.

So here's the second example, and it's the stakeholder mapping and prioritisation section. So again, you can see here in the version without that contextual data provided, we are getting some stakeholder mapping and prioritisation and some useful content. Again, it's quite light touch.

It's quite high level. If we jump over to the version where we did provide that synthetic data, you can immediately see the difference. It's listed out specific iwi or hapu that could be affected.

You know, we've got councils split into the various different types. We've got industry and economic stakeholders listed out. So that's the difference.

In terms of totality, the version with the contextual data is actually twice as long, so kind of twice as detailed, as the version without. So another example of the impact of that context. OK, that was cool.

I definitely want to look at the resources and see the longer version and the detail, because it will take me more to get my little brain around it and actually think about the devil's in the detail. How can I personally get this to work for me? In principle, I kind of get it. I've just got to work it through.

But it does sound like it could have a really big impact. So tell us a bit about how to create this synthetic data. Yeah, so there's five kind of main ways to create this synthetic data.

Obviously, there's a lot of content on the slide, so you can download this afterwards and kind of go through it. I'd really focus in on those top two that are on there. So there's a deep research function within most AI tools.

And when you combine that with Prompt Agent, you can actually get synthetic contextual data pretty fast. And then there's a kind of my preferred method is the second one down, because that gives you much more control over the actual context, the synthetic context that you're using. Again, the step-by-step video goes into way more detail, including the benefits that literally show you how to do the first two.

But really, what you're thinking about here when it comes to contextual data is there's a three-tier kind of structure to it. So for a low-complexity task like a simple question, then you really don't need context. The model has enough context kind of built within it these days that it will do a pretty good job of answering your question.

It's when you get into a more complex task, like the one that we just showed, drafting a stakeholder engagement plan, that's where the context really matters. And in that example, that was all synthetic contextual data. But you could really see the difference in depth of quality between those two drafts.

So I would encourage you to check out the content. But yeah, context is super important, and we use it all the time. OK.

OK. Yep. OK.

So the cogs are whirring. This is good. This is good.

Just on that, before we move on, and it feels like we've moved on too fast, but we do need to move. In the registrations, someone, Elaine, captured something which was a bit of a theme of what came through: how do I create context that will make the tool create an output that writes in the way that I personally write? Yeah.

So instead of like US English and kind of robot voice. Yeah. So I love it.

Great question, Elaine. And you're thinking about it exactly the right way, which is around that kind of giving the model context to help it write or do the thing that I want it to do. So a quick tip for you there is to use a writing style guide.

So most organisations have one. Your marketing team will definitely have one. We have one.

You can download it. It's in the documents you can download after today. And that is our style guide.

Think of it as instructions for the tool on how to write its responses. And it's amazingly effective. So, yeah, as I say, check out ours.

Feel free to download ours. But there's tonnes of them online. Cool.

Yeah. And I think you can do some things as well. Upload some content about like my own writing style, like a little sample or something.

Exactly. Yeah. So if you want it to write kind of more like you individually, then what I would recommend is if you kind of take a writing style guide that you're happy with and it might be, you know, the NZ government one or whatever.

Use that as a base. Obviously, pay attention.

You can't, or shouldn't, upload personal data or things like that, you know, per your own personal AI policy. But upload some samples of your writing and then ask your AI tool to create a writing style guide based on the format and structure of the existing guide, rewritten based on your personal style.

And it will literally do that for you. And then obviously you can review it. Yeah.

Cool. Cool. OK.

That's great. OK. We're going to jump straight to number three.

So I'm really excited about this. I got so excited about it. I went away the other day and tried to set it up myself and had quite a lot of fun actually exploring it.

So it was really cool. Has to do with the virtual panel of experts. So tell us a bit about that.

Yeah. So this is a tool that I use every day to challenge my thinking and get different perspectives. I'm sure some of my colleagues are listening in.

And yes, I do actually have personas of some of my internal stakeholders built into this. And look, the point here is that you put in a prompt, and the board of experts will choose a mode. There are four modes built into it: challenger, consensus, foresight and rapid fire.

Don't need to worry too much about that. That just helps direct the sort of responses that you want. So it chooses a mode based on your initial prompt.

It then feeds that into your advisors and your advisors are obviously AI, but it's AI that is channelling a certain persona. And so it will be like a finance director or operations director or something like that. And what that means is that you're getting this kind of range of perspectives back at you to kind of consider.

And that really helps me kind of again, challenge my ideas and broaden my perspectives. Yeah, that's awesome. I care a lot about, you know, things like group think and unconscious bias and things like that.

So I see some real good potential in this. We've got another video that will give you a little bit of a demo on that one as well. So what can a board of AI advisors really do? So what I've done here is I've given it a document.

This is a fake document. It's not an actual guide of ours, but I've asked it to critically review this document and feed back anything that I may have missed. So I've asked it to critically review; now, that engages challenger mode in the background, so you can see here it has engaged challenger mode. And what it's done in challenger mode is it's gone through every persona that we have given it in the background. So we'll just look at one quickly. So the transformation lead, you know, it notes that there's narrative scenarios in our document, but that those aren't repeatable frameworks.

So obviously it tells you what you should do to make those frameworks more repeatable. So that's one example. I'm just going to jump over and show you another example of the board in action. So this is a fake consulting firm's P&L. What I've got here is the P&L, and I've attached a little write-up summary as though I was a finance person reporting to the board, and I've asked it, in quickfire mode, to give me a critique of my summary. You can see here it has engaged quickfire mode, and you can see the style of feedback is very different to our previous style; in quickfire mode it's much more succinct. And you can see here I've asked it to expand on one of the points. It doesn't really matter, you get the idea; it's now expanded on that point.

The great thing with your board is you can also do things like this: as a financial analyst, create an interactive dashboard to visualise my data. So if we click this preview button, you can see here it's given us this visualisation of our P&L data. So, you know, you can hover over it, and we can switch the view to the costs and operating expenses. So this is pretty cool stuff, considering all I did was ask it to do this as the financial analyst. See you soon.

So that was the board in action. Now, just a super quick hack for you: if you are using Copilot or Gemini, there is a prompt in our downloads that means you can actually have Copilot generate those personas based on real people in your organisation, as obviously it works provided it has access to your emails and your Teams and that sort of stuff. It will be imperfect, but it's a really cool little starting point. Yeah, cool, cool. Yeah, no, I mean, I love it.

Just one question though, right? So in terms of the hype thing and bias: if there's someone on my virtual board whose position actually is pretty similar to my own, and I respond, you know, in the interactions when I get the outputs, surely the system's just going to work out, oh, this is what Jeremy wants to hear, I'm just going to give him that. So any thoughts on how to get around that? Yeah, so my main tip is think of your board as a board that reports to you. So you're the chair, right? You don't want a board of people that think exactly the same as you. You're the chairperson, so what you want is people on the board that don't have the same views as you; that's the whole purpose of a board. So make sure your AI board does not include people, you know, personas, like you. That's tip number one. Tip number two: just be careful about showing preference, because, yeah, the tools do pick up on preference, and so you want to make sure that you are kind of going back to those board personas that don't align with you. Okay, yeah, okay, that's really helpful. Thank you for those tips. We've got to move really quickly because of time. Final tool: voice AI. I have used this and found it really valuable. Often I'm out and about, I have a spark of inspiration, and I pull up my phone and begin to use it. Tell us more about it.

Yeah, so advanced voice mode is not just dictation like the robot Microsoft voice that we all know and love so much – it's actual dialogue and conversation with AI, so you can interrupt it. My favourite thing to do is actually use it in conjunction with your board of advisors, because that gets you that challenger mode and pulls in those different perspectives. Nice, really cool tip. Again, let's watch another video that we've got about voice. So here we are in the park, out for a nice walk, and we're about to start using advanced voice mode for some ideation on the go. I've obviously done the setup in the office, which I showed you earlier, and now we're going to turn on advanced voice mode. We just push the button in the bottom right-hand corner – the one with the little squiggly sound bars. As voice mode boots up you'll get a blue swirly thing in the white dot, and now we're good to go.

"So, hi ChatGPT, thanks for reviewing my stakeholder engagement plan. What is the top risk that you see?" "Hey there, absolutely happy to help. So the top risk I'm seeing – and this is me putting on my challenger hat – is really around that treaty relationship. If the plan doesn't fully honour the treaty principles, or doesn't give the hapu the resources and voice they need, that's a high-likelihood, high-impact scenario. Basically, it could cause a serious fracture in those relationships, which could derail the whole plan. So that's the number one risk. Good flag – let me know if you want to dig into that or anything else a bit more." "Okay, and how would our personas tackle or mitigate that risk?" "Absolutely, let's bring in those personas." So you'll notice as I'm talking to ChatGPT I'm keeping my questions quite short and quite direct. That really helps it stay focused on the specific task. If you start having a bit of a stream of consciousness, it can struggle to maintain the context, and you'll get drift in the conversation. So you'll notice I'm quite direct – probably far more direct with ChatGPT than I would be with the rest of the team – but that is on purpose.

The other thing you'll notice is that I use mute quite a lot. The good thing about using mute is that otherwise the AI will pick up on any surrounding noise, and that can disrupt the conversation and make it quite hard to go backwards and forwards. So once I've asked a question, I pop it onto mute, and that way the conversation isn't interrupted accidentally. That's particularly important if you're in a city environment where there's lots of noise, because even loud cars and things like that can interrupt it. The other thing is you can also interrupt it, so we'll just give you a quick demonstration of that.

"Okay Chat, thank you for that. What is the second risk?" "Of course. So with the first risk covered, the second major risk we identified is really that potential for litigation or appeals. In other words, if environmental groups or iwi feel that the engagement..." "Okay, I'll just interrupt you there. We've actually had our legal team look into it, and they've decided that there is no risk of that, so let's move on to the third risk." "Oh, that's a great update. All right, so if the legal team has given..." So you'll notice it's no harm, no foul to interrupt it abruptly and move on to the next part of the conversation. That helps you keep the conversation flowing, and you can do that at any time.

Again, I'm muting things on and off quite a lot and keeping my questions short and sharp. "Okay, that was a really good discussion. I think we came up with some pretty good ideas and solutions there to fill the holes in our stakeholder engagement plan. Now what I want you to do is write up a summary of our discussion and the key actions, so that I can share it with the team."

"Absolutely, here's a nice concise summary for you to share with the team: summary of discussion and key actions." Okay, so that was voice mode in action, but how do people access it, Adam? Yeah, so there are two simple ways to access it: the ChatGPT and Copilot mobile apps.

You saw it on ChatGPT, and the Copilot access is literally the exact same thing – in the app, it's just that little squiggly line that was in the video. There are other ways to access it, but they're painful and require a lot more setup, so this is just easy. Okay, that's cool, and I know that people will get access to some tips as well.

Yeah, so there are five behavioural shifts that I recommend people have in mind when they start using voice mode. Unfortunately we're really tight on time, so those are in the slide pack – take a look – but they're also covered in a longer-form version of the video you just saw, which actually goes through the tips and what to do. Yeah, that's cool, and definitely check it out, because those tips are good – things about being concise, the mute button, etc. I found them really useful. Anyway, those are the four tools and how they can help solve some of our common challenges with AI.

We've covered a lot in a short period of time, but as I said, there are lots of resources. What I would encourage you to do, once you receive the email, is have a really good look, think about these tools, and decide which one you want to make a start on. Then go to the videos and the other instructions and give it a go – and do reach out to us if you want, because we'd love to have that conversation with you as well. Anyway, we've got so many questions and limited time, so let's move straight to them. Adam can talk for a long time about AI, so we're going to try to be really succinct because we know there are so many. Let's just start with this one.

So Laura, Stephen and Sarah – and many more, actually – asked about automating admin tasks. Any particular advice on that? Yeah, so I would definitely start with the prompt agent in that instance. It means you can develop a quality prompt that you can reuse over and over, so that's the very starting point.

Once you've got that starting point, think about the tasks that have high repetition and do not require a lot of context switching – easy for me to say. Just for clarity, that means switching between systems; those tasks are the best ones to start automating with AI. Once you've identified those tasks, I'd really recommend setting up agents or projects for each of them, because that means you can add in default instructions, templates, writing guides – all those sorts of things – which means your outputs will be much more consistent and consistently high quality. And you can provide feedback to the agent or the project – "this one wasn't good, this one is good" – and it will actually improve over time.

Okay, that sounds really good. Sorry, one last tip: if you're a more advanced user, you can actually automate entire workflows. I recommend using a product called n8n – just Google it.

Okay, that sounds great. There was a question – Gemma, I think – about summarising submissions. This is actually a real topic; I'm sure I saw something in the media about the potential for AI here. But she is really worried about quality: she wants to know whether she can trust it, and she doesn't want to have to redo things. What advice do you have for someone like that? Yeah, unfortunately my advice is not going to be good news.

Look, this is a problem. On the surface it seems pretty obvious that AI should be good at coding submissions, or open-text survey responses, those sorts of things. In reality, what we've found is that it's very difficult to get consistency across all of your submissions. If you've got, say, a thousand submissions, the AI will do a really good job on the obvious ones where someone says "I agree" – okay, cool, happy days. But where there's more nuance – and as we know from submissions, there's a lot of nuance in the vast bulk of the submissions you receive – the AI will actually change its mind. The challenge with that, of course, is that you'll get different numbers in your coding framework. This is a challenge we've been grappling with ourselves; we do a lot of submissions summaries, and to meet our internal quality standards we are not confident using Copilot or ChatGPT or any of the existing models – we just found the results too inconsistent. If you are using it, I recommend you try it for yourself: put the same submission in a couple of times, and you'll be surprised at how often the AI changes its mind.
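Adam's "try it for yourself" test can be scripted. Here's a minimal sketch in Python: `code_submission` is a hypothetical stand-in for whatever function calls your AI tool and returns a single code (it's stubbed with a deliberately inconsistent fake below, so the consistency check itself is what's on show).

```python
from collections import Counter
import itertools


def consistency(labels):
    """Share of runs that agree with the most common code.
    1.0 means the AI returned the same code every time."""
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)


def check_submission(submission_text, code_submission, runs=5):
    """Code the same submission several times and measure agreement.
    `code_submission` is your (hypothetical) wrapper around the AI tool,
    returning one code per call, e.g. "support" / "oppose" / "neutral"."""
    labels = [code_submission(submission_text) for _ in range(runs)]
    return labels, consistency(labels)


# Stubbed "AI" that changes its mind, to demonstrate the check:
fake_ai = itertools.cycle(["support", "support", "neutral", "support", "oppose"])
labels, score = check_submission("I broadly agree, but...", lambda s: next(fake_ai))
print(labels, score)  # 3 of 5 runs agree -> 0.6
```

A score well below 1.0 across a sample of real submissions is exactly the inconsistency Adam describes, and tells you the counts in your coding framework can't be trusted without a human check.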

The challenge here is that there's no audit trail, so you can't go back and ask why it got it wrong this time around. So what we've been doing is working with an R&D organisation in the background for the last six months or so to try and come up with a solution, and only now, after six months of really intensive work – mainly by them, to be clear – are we getting close to human-level coding accuracy. So I know it's probably not the answer people want – I wish I had a simpler one – but from experience I can tell you it gets complicated really fast. Happy to chat more about that if people want to. Yeah, but at least it's encouraging that someone – us – is doing some real work in that space, right? Yeah.

Another question from – please forgive me if I pronounce this wrong – Jota, or Jata, sorry about that: do you recommend using the same query across different AI tools for comparison? Yeah, absolutely. That's one of the methods for creating synthetic data that I recommend, because the models are trained on different data sets, so you will get different answers from different models. And then what you can do is use that varying context to triangulate the best answer for your situation – so yes, absolutely, is my response. I don't know if I'm doing something wrong here, but I've sometimes taken outputs from one model, put them into another one, and given it a prompt saying, "Can you critically review this and tell me how to make it better?" – and almost every time I get fantastic results. Yeah, so that's a really cool thing to do as well, and I'd strongly recommend it.

Cool, feels good. Kara asks: have you tried getting AI to QA AI outputs – so this is using AI for the QA – and if so, what approach have you taken, and how did it work?

It's still unreliable, is my response to that. It's not too dissimilar to the question we literally just had, and to what you just said, Jeremy: taking an output from one tool, putting it into the next tool, and asking it to check and QA it is a good way of doing it, and that will reduce the hallucinations and the errors.

Certainly, the challenge you'll find is that it's still imperfect, and for our organisation – obviously we're doing evaluations, literature reviews, policy writing, those sorts of things.

Accuracy for us just has to be perfect, so at the moment we always have a human in the loop verifying those pieces of information, because even on super-low-hallucination tools like Perplexity or Elicit, the hallucination rate isn't zero. For our own comfort, we are putting the Allen & Clarke quality stamp, if you like, on our deliverables.

And the only way for us to do that at the moment is to verify these things ourselves.

We also had a question from Gary: "I would like to know how to input those style guides and templates which are used in the output."

Yeah, so that's a good question. You'll see in the step-by-step instructions how to do that. Again, the best way, if you're using Copilot, is to use the agents feature or the notebooks feature. The good thing about those features is that once your context is in there, the tool will reuse that same context over and over – that's different from adding it just to a chat.

Yes, it's an important distinction. Yeah, I think that's really good – thank you.

Jeanette mentioned that you spoke about notebooks versus agents in Copilot. They seem really similar – any other guidance about which one you should use when?

Yeah, I myself found it quite confusing, so I've looked into it to try and understand it a bit more. From my understanding, the main difference between the two is that a notebook is more designed for a project. Think about it like this: your context in a project tends to build over time and it tends to remain grounded in a specific task.

So if you're doing stakeholder interviews or something like that, then obviously you want to be grounded in best practice stakeholder interviews. You're probably collating your stakeholder interview transcripts in there – those sorts of things. So your context is going to build over time, but in that instance it's all around stakeholder interviews specific to that exact project that you're doing. That's kind of how I think about notebooks.

Agents is a little bit different. Agents is more designed for repeatable tasks where the context varies. An example of that is the prompt agent, right? So the task of the prompt agent is to generate a prompt, but the context for that prompt will vary depending on what you're trying to achieve. So if you're trying to generate an image, write a document, or whatever, that's where you would use an agent. Because the agent has more memory of your chat history, so it learns in a different way, basically. That's how Microsoft have built it, to my understanding, and that's how we're using it.

Cool, okay. So this is one of those bigger kind of questions, but maybe we'll hit it real quick. The ethics of using AI – obviously people are concerned about energy use, water, etc. So yeah, the environmental impact. Is it ethical for us to be using AI so much?

Yeah, I mean I can't really comment on the ethics side of things because I think that's a personal or an organisational call, but I can comment on the environmental piece. So there was a study done a little while ago – I forget if it was MIT or one of the other big US institutions – and they did it on ChatGPT-4 at the time. It's probably about a year old now. And what they found was that the electricity and water usage of 60 prompts a day was the equivalent of about a one-hour Zoom or Teams meeting, or driving like an extra kilometre in your petrol car.

So what I would say to people who are concerned about the environmental impact is that it's about balance. If you are a heavy user and you are doing 60 prompts a day and you find that really valuable, then there's probably things that you can do elsewhere that will balance the environmental impact out.

 


Download our free webinar resources

AI Toolkit Slides
AI Toolkit Resources (Guides, templates, prompts - everything you need to get started)