Published on 5 Jan 2026

Stop Learning AI Prompting (And Do This Instead)

1 hour watch
Adam Emirali Group Marketing Director Contact me
Julian Alban Senior Consultant Contact me

You've been told you need to master AI prompting. Learn the frameworks. Study the techniques. Keep up with the latest methods.

Here's the truth: by the time you've mastered today's prompting advice, it'll be outdated. Besides, why spend hours writing prompts when that time could be invested in actually doing the work, with much the same result?

There's a better way. Instead of becoming a prompting expert, build a prompt agent that does the prompting work for you. Think of it as your personal prompting specialist that knows your style, understands your context, and transforms simple requests into sophisticated prompts automatically.

In this session, we'll show you how prompting advice has evolved (and why that matters), the few universal principles worth knowing, and most importantly, how to create a personalised prompt agent that delivers much better outputs from day one. You'll see a live demonstration and leave with everything you need to build your own.

Stop spending time learning prompts. Start getting better results immediately.

Webinar Transcript


[Julian]
Welcome to this Allen and Clarke webinar, Stop Learning AI Prompting. We're filming in Australia today, so as is customary here, I'd like to acknowledge the traditional owners of the lands we're meeting on today here in Melbourne, the Wurundjeri, Woiwurrung and Bunurong peoples of the Kulin Nation, and pay my respects to their elders past, present and emerging. I pay my respects as well to the Tangata Whenua and Tangata Tiriti of those joining us from Aotearoa New Zealand.

For those of you here for the first time and wondering who we are, Allen and Clarke is a consultancy that ensures complex, high-stakes decisions are made with evidence, defended with confidence and built to work for the people and communities affected. We specialise in strategy, change management, evaluation, policy and more. And we give a damn about impact, not just deliverables.

So my name is Julian Alban. I'm a senior consultant here at Allen and Clarke and I'm joined today by Adam. Say hi and introduce yourself, Adam.


[Adam]
Hi everyone. I'm Adam. You might recognise me from other AI webinars like Getting Started, The Four Tools and AI Risk Management. I'm really looking forward to our chat today and I'm hoping people can chuck us some difficult, challenging questions.

I'll do my best to answer them. I don't have all the answers, but definitely welcome some challenging questions.

[Julian]
Absolutely. Thanks, Adam. So before we do get into the session, just a little bit of housekeeping.

We love to take questions as we go. So pop them in the chat as they come to you. What's really great about these sessions is how you all share ideas and resources in the chat.

So we really encourage you to take part to get the most out of your time today in that way. From today as well, you'll be getting the recording, the slides, the full step-by-step instructions and all the content to set up. Right.

So that's the housekeeping. So let's get into it. Adam, I have to say the title Stop Learning AI Prompting was interesting for me.

Usually every other AI training I've seen is teaching people prompting frameworks and techniques. So what's different here?

[Adam]
Yeah, look, that's exactly the point. I talk to a lot of people about AI and a lot of people say, can you please stop talking to me? It seems like it's all you talk about.

But for those that I do talk to, they're finding AI useful, but they're kind of like, I'm not getting the results that I want. I'm not sure that it's worth my time. And the issue here is that we've all been told that prompting is one of the key ingredients to getting good outputs from AI.

And so we've all been trying to learn the techniques, study the frameworks and kind of keep up with the evolution of prompting methods. The trouble is that the prompting advice seems to change as the tools improve and your prompt style needs to change depending on the tool that you're actually using. So this means, completely understandably, most people feel like they're investing tonnes of time learning how to use a thing and practising their prompting, etc.

Instead of doing the actual work. And so there's kind of this balancing act where people are like, well, I'm spending all this time learning to prompt, etc. Why wouldn't I just do the work and the output is the same?

And I have complete sympathy for that point of view.

[Julian]
I see. So are you saying then that learning to prompt is a waste of time? Oh, yeah.

Yep. Okay, so that's the end of the thing. We'll just finish there.

[Adam]
Got more to say, I think. Yeah. So of course, I'm not saying that learning to prompt is a waste of time.

What I am saying, though, is that learning to prompt is optional. With the right setup, you can actually have your AI assistant create your prompts for you. And I know that sounds like hype, or like a weird meta thing.

But stick around and we'll show you exactly what we mean and exactly how useful it can be.

[Julian]
Absolutely. Thanks, Adam. Okay, so thanks to those of you online who emailed in your examples.

Today, we'll discuss how prompting advice has evolved and why that matters, the few universal principles worth knowing, and most importantly, how a prompt agent can help and how you can create one for yourself. Yeah. So Adam, to give us some context, let's start with a quick recap of how prompting advice has actually changed over the past few years.

[Adam]
Yeah. So, you know, I think the best place to start is November 2022, which seems like a million years ago in the AI world. But essentially, that's the launch of ChatGPT, the kind of launch of generative AI as we all know it.

And really, for the following 12 months after that, prompting was all about kind of magic keywords, loading in tonnes of context. And specific phrases like let's think through this step by step and that sort of thing. And it kind of boiled down to clever tricks, if you like, to get better outputs.

Then, in the second bucket, by 2023 we had the launch of GPT-4 and Claude and some other models that were dramatically improved from the original ChatGPT. And these ones got much better at following direct instructions. So that meant that new techniques emerged, like chain-of-thought prompting (Google it), and you could give it specific formatting requirements, that sort of stuff.

So it was kind of the end of the magic keywords era. And I, for one, was like, OK, the things that I just spent 12 months learning are not really that valuable anymore. And now, in the last bucket, the models themselves can handle huge amounts of context data all at one time.

They also kind of learn from your use. So what that means is that those specific prompting techniques are evolving into more of good communication and clarity than anything else.

[Julian]
So interesting, that idea of just continuous, always evolving. So given that, where do you think we're heading now? What are your predictions about what's next?

[Adam]
Yeah, well, you know, I say this to everybody, right now is the worst that AI is going to be. So it's only going to get better from here. And so I believe at some point in the future, prompting AI will be much more like, you know, when you go to your most experienced, most capable colleague and ask them to do a task, you know, they inherently know the context, they have the experience and the expertise.

And so they kind of just get the job done. And that's where I see AI heading at some point, is that kind of, you know, that level of capability.

[Julian]
I see. So that's what you mean then, when you say that learning to prompt is not necessarily the best use of someone's time.

[Adam]
Yeah, exactly. You know, you can see it's evolved. And, you know, there's no reason not to believe that it's going to keep improving over time.

So there are kind of six main reasons, I guess, that I believe learning to prompt is not a great use of people's time. And I totally appreciate that that sounds like, oh, I'm so smart, you couldn't possibly learn to do it, or anything like that. I'm definitely not saying that people can't learn how to prompt.

It's just a matter of where do you want to spend your time. And so that's why I say I don't think it's worth most users' time. So just I'll touch on the first three.

I think they're the most important reasons. So you're kind of, at the moment, you're spending time being an intermediary between your expertise and AI's capability. Not a great use of time.

Best practice expires frustratingly fast. We've all felt this, including in that timeline that I showed you. And the consistent principles are disappointingly basic.

And kind of boil down to good communication. So it's essentially how you're briefing your team at the moment.

[Julian]
Really interesting, Adam. I love those principles. So you said the consistent principles are disappointingly basic.

Can you tell us a little bit more about that and what that means? Yeah.

[Adam]
So I guess prepare to be underwhelmed. As I said, it kind of comes down to good communication techniques. So for those who are interested in learning more about prompting, I've summarised those principles on a slide that we're not showing.

It's in the pack. So feel free to download it. But essentially for today, just very briefly, I encourage you to think of prompting more like this.

So give your AI assistant a persona. Provide as much context as you can. State the task clearly, including steps if there are specific steps.

Explain your objective and why it matters. And show or describe what format you want your output in. So we kind of summarise this as persona, context, task, objective, output.

But really, it's just kind of basic communication.
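The persona, context, task, objective, output structure Adam describes can be sketched as a simple template. A minimal illustration in Python, with all the example wording being hypothetical rather than taken from the webinar materials:

```python
# A minimal sketch of the persona/context/task/objective/output
# structure. All example wording below is illustrative only.

def build_prompt(persona, context, task, objective, output_format):
    """Assemble the five parts into one structured prompt string."""
    return "\n\n".join([
        f"Persona: {persona}",
        f"Context: {context}",
        f"Task: {task}",
        f"Objective: {objective}",
        f"Output format: {output_format}",
    ])

prompt = build_prompt(
    persona="You are an experienced public-sector policy analyst.",
    context="Our agency is reviewing its five-year strategic plan.",
    task="Summarise the strategic planning process in plain language.",
    objective="Give executives a shared understanding before a workshop.",
    output_format="A one-page summary with headed sections and bullets.",
)
print(prompt)
```

The point of the template is simply that each of the five parts is stated explicitly rather than left for the model to guess.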

[Julian]
Thanks, Adam. So that's it. After all that evolution that you've described to us, it really just comes down to be clear and give context.

[Adam]
Yeah, I mean, there is nuance between the models. So if you're using Claude, for example, Claude likes these things called XML tags. Again, Google it.

But essentially, good communication is useful across all the models and will help you improve the outputs. And again, I'm focussing on the principles that have stayed consistent over time. Because I think if you are learning how to prompt, taking that option, then I think I'd start by focussing on those key principles.

And I think the challenge here is not that the principles are hard. It's that there's the nuance between the models, plus there's the amount of time it takes to kind of apply those principles, you know, persona, context, task, etc. And I, for one, don't want to spend 20 minutes drafting a prompt just so I can get some AI outputs.

[Julian]
No, of course not. And so that's where this idea of an AI tool or a prompt agent came from.

[Adam]
Yeah, absolutely. So this isn't my idea. I've just taken it and kind of adapted it for our context.

And of course, ANC's context is very similar to our audience's context, which is why we're here today. But a prompt agent is fairly simple to understand. It's essentially a tool that is designed to supercharge your prompts.

So it works very simply by taking a simple, unstructured prompt overlaying your context and the internal knowledge within the agent about, you know, what a good prompt looks like, etc. And it turns your simple prompt into a detailed prompt you can either edit, refine, or use immediately.

[Julian]
Excellent. So that sounds easy enough to use. So let's see it in action.

But before we do, I just want to mention that we've pre-recorded all of today's demos. That's not because we're using trick photography. It's because we have a short amount of time together, and we don't want to waste your time watching AI loading or with Adam and I fumbling around on the spot with the technology.

That's right. So with that in mind, we'll move on. Let's see this prompt agent in action.

[Adam]
Okay, this is a quick demo of the prompt agent in action. We're just in Claude here. It doesn't matter if you're in Copilot, ChatGPT, etc.

The prompt agent works the same. And we're just going to run this basic prompt, which is summarise the strategic planning process. So we'll just whack go on that.

That's our simple prompt. And what we expect to see here is a relatively simple output from Claude that details the strategic planning process. So here it goes.

And it's detailing the strategic planning process kind of in front of our eyes. So it's not going into a great level of detail. But it's a decent summary of the strategic planning process.

Now, we're going to jump over to the prompt agent. And we're going to chuck in our original simple prompt. And we'll press go.

So here is a detailed prompt being produced. And as expected, it's produced. It's selected a persona.

It's detailed the context we're operating in. It's outlined what the task is. And what the objective of the output is.

And also the structure of the output that we're looking for. So we just take that prompt. And we're going to chuck it into this window here.

And what we're expecting to see is if we jump back to our original output. So there is our simple prompt. Here is our simple output.

Now that we've put our simple prompt through the prompt agent and got this detailed prompt here. What we're expecting to see is Claude to really start diving into the strategic planning process. And give us a lot more information.

So you can see here, first of all, Claude's actually opened an artefact. Which is this window on the right. Because we've specified what the output is.

So the output here is a Word-ready strategic planning process summary. So because this is in an artefact, it's Word-ready. We can literally just download it directly into Microsoft Word.

And edit it ourselves. Now you can see Claude's doing quite a bit of work. Detailing out the strategic planning process.

We've even got phases. We won't keep watching it generate. But you can see here the power of and the difference between a simple prompt.

And a detailed prompt. And the impact that that has on your outputs. That's the power of a prompt agent.

Back to you in the studio.

[Julian]
Okay, so what we saw was the same basic input going into regular Claude. Versus a very detailed prompt created by prompt agent. And the dramatic difference in output quality.

The regular Claude output looked like what I think I've seen myself. When I've attempted to use this sort of tool. Simple and relatively generic.

A useful starting point. But needing quite a lot of work. Whereas the other output gets you far closer to the finished product.

It's got a lot more strength to it. I mean, I don't think I've ever bothered to write a prompt that detailed myself. So I'll be using this going forward.

[Adam]
And that just comes down to that time it takes to write a detailed prompt, right? And so that's the beauty of prompt agent. Takes something very simple.

Turns it into a detailed prompt. That you can then drop into another chat. And that means that your outputs are much closer to final draft quality.

It doesn't mean that it is final draft quality. But if we think about it in terms of kind of percentages. You know, the simple prompt in that example.

Probably got you to 10 or 20% of what you needed. Whereas the detailed prompt probably got you closer to kind of 40% of what you needed. And just by adding like a small step.

That only takes, as you saw in the video, a matter of seconds.

[Julian]
Absolutely, really helpful. Okay, so we've got some more demos too. On things like drafting ministerials.

Implementation plans. And tricks to generate detailed prompts from your existing exemplars. But first we should show everyone how to actually set up the prompt agent for themselves.

Quite important, I think, yeah.

[Adam]
So looking at the registrations, I've assumed the majority of you are using Copilot at your workplace. So let's briefly run through how to set up Prompt Agent in Copilot.

Okay everyone, here we are. The moment you've been waiting for. Setting up your very own prompt agent.

So we're just setting up the generic agent today. There are complete instructions on how to do it. So what you want to do is you want to go over here to new agent.

Now you might see that there's an existing Microsoft agent. The Microsoft agent is fine. The difference though between these two is that you can customise your own agent.

And the generic one that we're setting up now is actually tuned to the Australia/New Zealand context. And tuned to the sort of work that Allen and Clarke does. Which is also very similar to the sorts of work that you, our audience, do.

So you'll find that you get much better prompts from this version. So this is how you set it up. Just click new agent.

And you wait patiently. And just give our agent a name. And it takes a simple prompt.

And turns it into a detailed one. Cool. And now we're going to add the instructions.

Think of this as like the master prompt for the agent. It tells the agent how to behave. And how to interact with the knowledge that we're going to add to it.

The good news is that we already have created instructions for you. So you can literally just copy these and drop it straight in. So that's our instructions added.

Now we need to add the knowledge. So this is things like examples. Those sorts of things for our agent to use.

So you can load these into OneDrive and just share the folder with your agent. For today we're just going to upload them. It's just easier.

So these are all the files that you'll get to download after the session. There's a step-by-step creation process. In there are the instructions on how to personalise the agent even more.

But today we're just showing you how easy it is to set this up. So we don't need the creation process. That's not going to be very helpful.

But we'll load those other documents in. Just while those are uploading, we are going to turn on search websites. And the reason we're doing that is I sometimes like to ask the agent to search the internet to help improve the prompt.

We do want to only use the specified sources. Because that grounds the agent in the context documents that we've provided. And we do want it to reference my profile information.

Again, that helps with the customisation of the agent. We don't need it to create documents. It's just creating prompts.

We don't need it to create images. Again, it's just creating prompts. And here we just want to chuck in a few example prompts for our agent.

So here is in this document here, which we've just loaded into the agent, is a bunch of example prompts. And what we're going to do is we're just going to take these and we're going to drop them into the agent. We're just going to take a few at random.

You can, of course, check these prompts out if you so wish. Some of them are probably reasonably useful for people, given what was sent through when people registered.

So you can see we're going pretty quick here. But the point is that we're literally just trying to get the agent set up for you. So you can see how it's done.

And this will be the last one that we do. That's it. And now we're just going to, once it stops auto saving, we're going to click create on our agent over here.

So we just click create. And as that's creating, once it's created, it will then appear in Word and PowerPoint, etc. And you can share your agent with your colleagues and your team so that everybody doesn't have to set it up kind of individually.

You can just have one team member set it up and then share it with everybody. Great. So that's our agent set up.

So we'll just head over to it. And now it is ready for us to use. Back to you in the studio.

[Julian]
Okay, so hopefully people found that useful. And just a reminder that these materials and that video will be made available after the session. So you showed us, Adam, how to do it in Copilot.

What if you're using ChatGPT?

[Adam]
Yeah, if you're using ChatGPT, Claude or Gemini, the setup is essentially the same. They all have the same sort of feature. In ChatGPT and in Claude, you'll be using the Projects feature.

It's on your left hand control panel.

[Julian]
So that means it's applicable no matter what AI tool you're using? Yeah, exactly. So before we move to the detailed demos, we had over 400 questions and challenges submitted at registration.

So let's tackle one of the main themes now. There were a lot of comments about how hard it is to know what tool people should be using for what task. Do you have any advice there, Adam?

[Adam]
Yeah, so first of all, I have a slide that has my favourite tools and what I use them for. We're not showing it, but it's in the slide pack from today. So download that.

And I completely understand the frustration. I've spent ages trying to figure out which tools work best for me. And what I would say just quickly is that I use Claude for, I prefer Claude, for coding and for writing tasks.

I prefer Perplexity for searching the internet. And I prefer Notebook LM for understanding long reports or detailed documents. Just super quick, a couple more tips.

As I say, in the slide pack is that list of tools. I think there are 10 or something on there. And then the next tip that I would say is, you can actually put your list of tools into Prompt Agent.

And have Prompt Agent recommend the tool based on the prompt that it generates. So if you were putting in a simple prompt that involved, you know, environmental scan or something like that, Prompt Agent would generate the detailed prompt that we've all seen. And it would also say, oh, by the way, Julian, you should probably use Perplexity for this task.

[Julian]
Great, really useful. So I'll certainly be sure to download that slide pack myself so I can see the tools and all of that information too. So let's see some more examples, though, of using Prompt Agent for different tasks.

So thanks to those who wrote into us with real tasks and prompts that you're using. Let's take a look at whether Prompt Agent can help out.

[Adam]
Okay, here we are. We're going to do a quick demo of Prompt Agent in action. So this is example one.

So first example is reverse engineering. So you can have Prompt Agent reverse engineer a detailed prompt from an exemplar that you have. So here is a ministerial briefing that one of our team drafted a while back.

This one's been anonymised, et cetera, just for today's session. So we've attached that and I've just got some basic prompts just to help speed up the demos. But this one in particular just says, please take the attached document and create a detailed prompt that will enable me to create a similar document on another unrelated topic.

So we'll push go on that. And what we're expecting to see here is that the Prompt Agent scans the exemplar and then drafts a detailed prompt for us to then use. So here we are.

Here it is. It's drafting the prompt for us based on that document. Now, obviously, best practice would be to review that, test it, update it, improve it.

Obviously, for today's demo, we're not going to do that. But that is the first example. Pop your exemplar in, put a simple prompt in like this, and then Prompt Agent will convert it into a detailed prompt that we can then use to generate another ministerial, which we'll do right now.

So we'll copy this prompt and we'll move over to example two, which is creating a ministerial from a report. So here's example two. And we're just going to drop our prompt straight into a copilot chat.

And we're going to have it draft based on a 300 page evaluation report that our Australian team did for the family relationship services. And so we'll just press go on that. And while that's processing, we'll just jump over to this window.

So what I would recommend is for something like ministerials, where it's likely that your team is doing this over and over again, I would really recommend setting up an agent for this task. Setting up an agent means that you can load in the cabinet recommendations on the different types that your team need to be able to draft. So you can build that nuance in.

It also means that you can add your exemplars directly into the agent. So it can draw from the best examples that are most relevant to your organisation. And it also means, as I've done here, you can load in your prompts into the agent so that your team don't need to remember or don't need to generate new prompts each time.

And that will significantly improve the consistency across the whole team and the whole organisation. So we're just going to say that this one's policy advice decision brief. So we're going to click that.

And we're going to drop in that very same evaluation report. And we'll have it draft that one. And you can see just how easy, from the user perspective, that is because the prompt's already built in and it's doing its thing.

So here's our ministerial briefing that was purely based on our prompt, our reverse-engineered prompt. And you can see the structure's not too bad.
And perhaps the, you know, the content's probably a bit light for what we're after. But again, I'm not an expert in these things. So what would typically happen is I would sit down with one of our experts and we would develop the ministerial drafter together so that the outputs were consistently of a quality they would expect.

But anyway, here is the output from our ministerial drafter. You know, again, it's probably a bit too succinct, but that's something that we can develop over time. So that's the second example for you.

We'll move on to the third one. Okay, now this is example three, and we're going to put prompt agent to the test. So one of our audience members sent through this really great prompt about reviewing cabinet papers.

So you can see this here. They've included examples and sources from the New Zealand government. So, you know, this is a strong prompt, and we can tell that the person who sent this through has spent a decent amount of time, you know, considering what they're asking AI for.

So we're going to put prompt agent to the test here, and what we're going to do is we're going to take this simple prompt, and we're going to see if prompt agent can create a prompt that is similar in quality to this one here. Obviously, what we're looking for here is similar quality, but also speed, because it would have taken our audience member a little bit of time to draft this prompt, and here is prompt agent here drafting the prompt. It's gone and found DPMC exemplars, also some from Treasury and the Ministry for Environment.

So that's pretty good. So we'll just copy this prompt, and we'll put it side by side with the one that our audience member submitted, and you can see here that prompt agent has picked up many of the similar sources, but actually it's built out the prompt a little bit more detail. Now, again, obviously, you could just use this prompt agent prompt straight away, but, you know, a quick review and a little bit of editing would mean that you end up with a prompt that's at least as detailed as this one here that's been submitted.

Obviously, we can generate that using prompt agent in an absolute fraction of the time using a very simple initial prompt. So that was example three. Right, example four for you.

So this time, we're creating an implementation plan from the evaluation findings. It's the same evaluation we've been using in the other examples. So here's our simple prompt.

This time, I've also added in this exemplar, which is a sample implementation plan, and prompt agent has worked its magic, generated a detailed prompt for us, and you can see here it's referencing that implementation plan, the example that we gave it. This really does help the outputs. So then what I've done is I've given Copilot that sample implementation plan and the full evaluation report.

It's then drafted this implementation plan for us. I then had Copilot put it into Word. Here it is here.

This is the Copilot version in Word. Now, at the same time, I gave Claude the exact same prompt and the exact same contextual data, and this is what Claude generated. So I'll just draw your attention.

Claude generated 24 pages and nearly 4,000 words, and this is just one example of a section. So you can see here key dependencies. I mean, we've got things like budget numbers in here, you know, and another kind of specific detail that obviously these acronyms would mean stuff to the stakeholders, and if we compare that to the Copilot version, this is the exact same section, just a Copilot-generated version.

You can see here that the content is much more generic, and this is just a simple example of the difference between the tools that you can use. Obviously, the Copilot version is still not useless, you know, but maybe it's only 20 or 30% of what we need versus the Claude version, maybe that's closer to 50% of what we need. So that's just a quick example.

I've got one last example for you, which is how to solve the annoying formatting problems that you see within these Word documents. We'll jump to that next. Okay, I've got one last tip for you, everyone.

So if, like me, you're using AI to generate starters for your drafting or anything like that, kind of like the examples we've been showing earlier today, you will end up with Word documents that look something like this. So they're using the Microsoft default styles, and I'm sure, like us, many of you will have organisation-level templates that you should be using, that have your colours and branding and that sort of stuff on them. So there is a trick.

Instead of copying all this over into one of your templates or going through and manually applying all the styles, there is a bit of a trick. So first, you need to add developer to your ribbon, just Google it, it's super easy. Click on the template, tick that button there, and go OK.

Now what you'll notice over here on the right is my styles have switched into A and C styles, and we've got green headings and that sort of stuff. So the good news is that if we then go over to Design, and we go Themes, and we reset to themes from template, it will automatically update all of the content in the entire document to match all the Allen and Clarke themes. So now you can see there are a few errors because, in this case, Claude has hard-typed in the section numbers, but obviously that's a quick fix.

You can also ask Copilot to fix that for you, but you can see now I've got all of the Allen and Clarke styles to choose from. They've already been applied to the document, so I'm good to go. And that's the last tip for our session.

Hope you enjoyed that, and back to you in the studio.

[Julian]
But I have to say that as someone who's still learning, Adam, I was a bit surprised and a bit disappointed when we were using these tools, particularly around the ministerials and the family and relationship services evaluation that I worked on and testing these tools around that. But I guess that goes to what you were saying earlier about the principles that you outlined and the need for a human overlay and that additional work and lens as well.

[Adam]
Yeah, that's right. I mean, the outputs that you saw obviously aren't publication-ready, but the point is that they get you from zero to maybe 50% or even higher of what you would need.

[Julian]
Yeah, absolutely. Very helpful. We need to move to questions now, but first, if you're wondering how to get these kinds of results in your own work or how to personalise your prompt agent further, you can book time directly with Adam or with any of our experts as well.

These sessions are not a sales call. They're a working session where we take the time to understand what you're trying to achieve and give you practical ideas and advice on how to move forward. Just like these webinars, we share our frameworks, walk through your use cases and give you the tools to implement this properly.

It's how we build capability, not dependency. So if you're keen, click the button on your screen and one of the team will be in touch. Yeah, I love chatting to people about AI stuff and it's just great if we can lift everyone's capability.

Absolutely. Thanks, Adam. Okay, now let's get into your questions.

We had several about getting accurate usable outputs. So Adam, walk us through how you achieve what we saw in those demonstrations.

[Adam]
Yeah, so we'll touch on this very briefly. If you want more information, jump onto the Getting Started with AI webinar. It's a bit more detail in there.

There are four key principles I follow to get the best possible outputs. They should be on your screen now. For me, the areas that most of you out there will be able to control are prompting and context.

Often the model or tool has been selected for you by your organisation, and the review and refine stages are just that expert lens over the top. So prompt agent is obviously massively useful for the prompting part of that. And there are a bunch of hacks for the context piece; again, in our back catalogue is a webinar on four AI tools that I'd encourage people to check out.

In there is a section dedicated to context: using context and also creating contextual data. And I think about context in three tiers.

A good prompt with no context or limited context will get you one level of output. A good prompt with AI-generated or generic context will get you better outputs again. And then on top of that is your own context.

But there can be a bunch of reasons why you can't use your own data. In that case I wouldn't immediately jump to no data; I'd use what I call synthetic data.

[Julian]
I see. Thanks, Adam. That's great.

I've heard you mention as well that it's important to choose the right tasks to focus on when you're using AI. Can you tell me a bit more about what that looks like?

[Adam]
Yeah. Yeah. So, you know, AI as it stands is super useful for a lot of tasks.

But really, you should be prioritising the tasks and workflows where you look to either get AI assistance or automate with AI. And it feels like I'm shilling all our previous stuff; I'm not trying to do that.

It's just that there's a lot more information in those other sessions than what we're going through here. In the AI risks webinar there's a framework for how you choose the right workflows and tasks for AI assistance. Very briefly, there are five factors that I personally look at, the first of which is impact.

So how much will it help? Risk, what happens if it goes wrong? Change, how hard is it to adapt from where we are to the new AI assisted world?

Data sensitivity, you know, can we use the data and what data does it touch? And judgment, how much human judgment is involved in this workflow or task? So obviously, what you're looking for is high-impact, low-risk, easy-adoption tasks to start on first, and then you go down the list.

We've got a ranking tool that you can use, and that sort of thing.
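Allen and Clarke's actual ranking tool isn't shown in the session, but the five factors Adam lists could be sketched as a simple scoring function. The 1–5 scales, the weighting, and the example tasks below are illustrative assumptions, not the firm's real criteria:

```python
# Illustrative sketch of a task-prioritisation score based on the five
# factors mentioned in the session: impact, risk, change, data
# sensitivity, and judgment. The scales and weights are assumptions.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    impact: int            # 1-5: how much will AI assistance help?
    risk: int              # 1-5: how bad is it if it goes wrong?
    change: int            # 1-5: how hard is it to adapt the workflow?
    data_sensitivity: int  # 1-5: how sensitive is the data it touches?
    judgment: int          # 1-5: how much human judgment is involved?

    def score(self) -> int:
        # High impact raises the score; the other four factors lower it,
        # so high-impact, low-risk, easy-adoption tasks rank first.
        return (2 * self.impact) - (self.risk + self.change
                                    + self.data_sensitivity + self.judgment)

def rank(tasks: list[Task]) -> list[Task]:
    """Return tasks ordered from best to worst AI-assistance candidates."""
    return sorted(tasks, key=lambda t: t.score(), reverse=True)

tasks = [
    Task("Meeting minutes summary", impact=4, risk=1, change=1,
         data_sensitivity=2, judgment=1),
    Task("Final policy recommendation", impact=5, risk=5, change=4,
         data_sensitivity=4, judgment=5),
]
ranked = rank(tasks)
print([t.name for t in ranked])
```

However you weight the factors, the point is the same as in the session: score every candidate workflow against all five dimensions and start with the ones that float to the top.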

[Julian]
Great. Okay, so if people want more information, they can get in touch or check out that risks webinar that Adam's reminded us of a few times. We also had a question about running environmental or jurisdictional scans using AI.

Have you got any tips on that?

[Adam]
Yes, somebody did write in about this. I'll message you back with a bit more information, or if others are keen, let us know in the chat and maybe we'll run a session on this specifically. Very briefly, I'm assuming the jurisdictional scan is repeatable: either you're doing the same scan often, or you're doing scans for multiple departments fairly frequently.

Assuming that's true, what I would do is create a research brief for my AI assistant and a detailed prompt to go with it. Then I would use Perplexity Pro and Gemini to do the initial scan. With the initial scan, you're focusing on breadth of sources rather than getting too caught up in whether each source is relevant for your context.

The reason for that is you're using an AI-assisted workflow, so having a huge number of potential sources is fine, because the next step is to use your organisation's AI tool, with your strategy or really relevant context as input, to determine which bucket, if you like, each source fits into: highly relevant, not so relevant, and not relevant at all.

The reason I do it that way is that's where the human expert can look at the output and go, hang on, the AI has said this source isn't relevant, but I know this nuance, so it actually should be in this other bucket. That's why breadth is really important to begin with, and then you use AI to narrow it down.
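The two-step scan Adam describes, broad collection first and then AI-assisted bucketing with a human override, could be sketched roughly like this. The `ai_relevance` function is a hypothetical stand-in for a call to your organisation's AI tool (here faked with a keyword check); the source records and override are illustrative:

```python
# Sketch of the breadth-first scan workflow: collect widely, let AI bucket
# each source against your strategy, then let a human expert override the
# bucket where they know nuance the AI doesn't. ai_relevance() is a
# hypothetical placeholder for a real call to your organisation's AI tool.

BUCKETS = ("highly relevant", "not so relevant", "not relevant")

def ai_relevance(source: dict, strategy: str) -> str:
    # Placeholder: a real implementation would prompt an AI tool with the
    # source plus your strategy/context. Here we fake it with a keyword check.
    text = (source["title"] + " " + source["summary"]).lower()
    return "highly relevant" if strategy.lower() in text else "not relevant"

def bucket_sources(sources, strategy, human_overrides=None):
    """Bucket every source, applying human expert overrides last."""
    human_overrides = human_overrides or {}
    buckets = {b: [] for b in BUCKETS}
    for src in sources:
        bucket = human_overrides.get(src["title"], ai_relevance(src, strategy))
        buckets[bucket].append(src["title"])
    return buckets

sources = [
    {"title": "Housing policy review 2025", "summary": "housing supply analysis"},
    {"title": "Transport white paper", "summary": "rail investment"},
]
# The expert knows the transport paper matters despite the AI's call.
result = bucket_sources(sources, "housing",
                        human_overrides={"Transport white paper": "not so relevant"})
print(result)
```

The design point is that the AI only proposes a bucket; the `human_overrides` step is where the expert lens Adam keeps coming back to actually sits.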

[Julian]
Happy to chat more about that; that was a very whistle-stop tour. Thanks, Adam. A great summary, really helpful and practical, I think, for quite a complex task as well.

All right. So that brings us to the end of our scheduled time. Although we're keen to keep answering questions for people.

So our next one is how can we make AI writing sound more human? Have you got any tips on this?

[Adam]
So that's the next question? Okay. So yeah, absolutely.

My number one tip here is to create a writing style guide. Your marketing team probably has one for your organization, or should have. But you can also create one for yourself personally.

I have my own. What I've done is set it up in Copilot as an agent, so it's like an Adam writing agent, and I can call on the agent to update draft text to make it sound more like me.

This just dramatically improves the output's writing style. So yeah, that's pretty much what I do.

[Julian]
Excellent. Thanks, Adam. Really helpful.

And if any of you would like to see Adam's style guide, let us know in the chat, and I'm sure he'll share it. So moving on now to some questions we've received from viewers.

Okay. We have a question that's come in from Grant. And it is, if we're thinking about prompting AI as briefing a colleague, a good leader spends 15 to 20 minutes briefing a person.

So why should we expect prompting AI to be any faster?

[Adam]
Yeah, that's a good question. Thank you. I did invite challenging questions.

So thank you, Grant. Appreciate that one. I guess it depends on the task that you're briefing somebody on.

So if I think about myself, would I expect to brief someone for 15 to 20 minutes if they were going to complete a task that should take an hour? Probably not. Whereas would I invest that 15 to 20 minutes if it was a day's work or more?

Then probably yes. And so the way I think about it is I use AI in a similar fashion. So if it's the one hour task, then I kind of use prompt agent, generate that detailed prompt, maybe a couple of quick edits, and then off you go.

If it's the more long-term task or workflow, then absolutely, I'd invest way more time, not just in the prompt but also in the contextual data, and in thinking about whether I would set up an agent specifically for that task, and therefore what instructions the agent needs. So I guess, Grant, I'm agreeing with you.

But for me, it comes down to how much time I'm spending briefing versus how long I expect the task to take. The last thing I would say is that your prompt agent is a shortcut.

That doesn't mean that the outputs of prompt agent are the final prompt. So to your point, Grant, maybe the prompt agent prompt is 70% of the way there. A couple of edits, now you're at 80 to 100%.

And you've got to that 100% mark way faster than without prompt agent.

[Julian]
Great. Thanks, Adam. And thanks, Grant.

And we encourage people to get in touch as well, those who've asked questions too, if they wanted more information on their question outside of the session too. We have another question that's come in from Denise, and that is, as a not-for-profit, I'm also wondering about cost. Are prompt agents free?

If not, what would you recommend from a cost-effectiveness perspective?

[Adam]
Yeah, good question, Denise. Prompt agent is 100% free. If you have an organizational-level AI tool, you can use this guide and all the documentation to literally build prompt agent for yourself.

Again, if you've got Copilot, use that. Some people I've met have just set up a prompt agent in the free version of ChatGPT, so you can set it up there as well.

It's free. All the resources are there for you to use, including the video and the step-by-step instructions.

[Julian]
Wonderful. Thanks, Adam. And thanks, Denise, for your question.

And another question we have is from Lorna, and that is, once you've created a prompt agent for your organization, can you share with other team members so each person doesn't need to create it themselves?

[Adam]
Love this question, Lorna. I love it. Short answer is yes, 100%.

So what we do is we set it up at the admin level, which means that it instantly flows down to the team members in the organization. The great thing with that is, as I touched on earlier, you can load your organization level prompts into prompt agent as well for people to use as exemplars. You can load specific context documents in for specific tasks.

And what that means is for the average user, they know I need to do something in AI. First, I go to prompt agent. I generate my detailed prompt.

It recommends a tool for me to use if there is a different tool I should be using. And you can add all this kind of nuance into your prompt agent. So from a user's perspective, it just makes it so easy.

So instead of having a prompt library saved over here and your context saved over here and you've got to remember where everything is, prompt agent can bring it all together at an org level. It can also do it at a team level as well. So it depends on how big your organization is.

[Julian]
Yeah, fantastic. Thanks, Adam. And thanks for the question as well, Lorna.

A question from Miriam. Is it possible that our system administrator has disabled this function? I have a completely different copilot menu that does not include agent as an option.

[Adam]
Oh, then the answer, I'm afraid, is yes. Your admin has almost certainly disabled that function, so I would have a chat to them about it. Off the top of my head, I can't think of why they would do that.

It might be that you're in the early stages of your Copilot implementation, and so they're rolling out the features gradually. But I'm literally guessing without speaking to your team.

Sorry, I'm not sure that was all that helpful.

[Julian]
It sounds like a confusing moment, and I'm sure that comes up when implementing some of this. Yeah.

Um, thanks, Adam. A question now from Svetlana, which is: creating a new agent is not enabled for us. Okay, another one of those.

Yeah, similar. Any tips for using other tools that can enhance commissioning in the PCT double O style?

[Adam]
Yeah, well, like I just said, you can use the free version of ChatGPT. Again, the setup is identical; the only difference is you're going to use the Projects feature.

That's just a different label for the same thing. I know lots of people that use that to generate a prompt and then obviously use their approved tool.

[Julian]
Great. Thanks, Adam. And a question from Jata.

Does ANC assess the return on investment of using AI? If yes, how do you do that? Which criteria and components are included or excluded?

This is a fabulous, fabulous question.

[Adam]
So we are in the process of doing this. There's a bit of a transition period, I guess, with some of our clients.

We always disclose AI use, I should say, and where we would use client data, we obviously seek express permission to do so. Because we're in this transition phase, different organizations have different expectations and rules, which is totally fine.

Totally fine. So we have a number of projects where clients are happy for AI use and a number where clients aren't. One of our advantages as a services organization, Jata, is that our team complete time sheets.

And that's how we know where time is being allocated across projects. So we have this significantly rich level of data that we can compare and contrast: OK, this was an AI-assisted project doing an environmental scan, and last month, or at the same time, we did a similar environmental scan for somebody else who preferred us not to use AI. We can directly contrast those two things.

So that probably doesn't massively help if you don't have time sheets. I haven't really considered it outside of our context, so I'll have to give it a bit more thought.
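The timesheet comparison Adam describes reduces to simple arithmetic once you have two comparable projects. A minimal sketch, with the hour figures made up purely for illustration:

```python
# Sketch of the timesheet-based ROI comparison: contrast hours logged on
# an AI-assisted project with a comparable project delivered without AI.
# The figures below are invented for illustration, not real A&C data.

def roi_comparison(hours_with_ai: float, hours_without_ai: float) -> dict:
    """Hours saved and percentage reduction between comparable projects."""
    saved = hours_without_ai - hours_with_ai
    return {
        "hours_saved": saved,
        "pct_reduction": round(100 * saved / hours_without_ai, 1),
    }

# Two comparable environmental scans pulled from timesheet data (illustrative).
result = roi_comparison(hours_with_ai=22.0, hours_without_ai=40.0)
print(result)
```

The hard part, as the answer notes, isn't the calculation; it's having time-allocation data granular enough to find genuinely comparable project pairs.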

[Julian]
But I really like that question. I do too. And thanks for the answer.

And really helpful linking it back as well to the jurisdictional review that we reflected on before and a comparison of how that might be done. All right. We have a question from Amanda.

And that is, is the style template option available in Copilot?

[Adam]
Yes. I'm guessing a little as to what Amanda means. In the example, what we showed is a Word output that Copilot or an AI tool has generated; they generally always generate it in the standard Microsoft formatting.

You can use Copilot within Word and that will automatically apply your styles. What I've personally found, though, is that Copilot in Word doesn't do as good a job on the first draft. So what I'll often do is what I showed in the demo: generate the document, then use those couple of tricks to convert the styles over.

Once I've got that first version, I use Copilot within Word, and it automatically applies the styles from there. And another quick trick for you, Amanda: if it's a small amount of text, you can literally just copy and paste it into Word, highlight it, and ask Copilot, please do not edit this text, only format it using headings, and so on, and it will actually do a decent job of it.

Please only format it using headings, blah, blah, blah, blah, and it will actually do a decent job of it.

[Julian]
Great. Thanks, Adam. And I think this will be our last question.

And it's a question from Sarah, which is: I understand that we should always be reviewing what comes from AI prompts and the prompt agents. How much of an issue is "hallucination", in inverted commas? And is there anything we can do to mitigate it other than just checking the output?

[Adam]
Yeah, fabulous question, Sarah.

In one of our previous sessions, I can't remember which one, unfortunately, we talk a bit more about hallucination. And we actually show a chart that shows that hallucination rates are astonishingly low. That doesn't mean that the accuracy is good.

There's just a little bit of nuance around the word hallucination. Essentially, to stop AI making stuff up, you provide it with more context. That's the best solution, because AI tends to fill in the blanks.

And that's where it ends up making stuff up, which is why I touched earlier on the importance of context. But ultimately, individuals are responsible for the outputs they sign off, just as if they were doing the work themselves.

Without AI assistance, the expectation is essentially the same. But you will find, Sarah, that the more context you use, the fewer inaccuracies there will be in the outputs.

[Julian]
Thanks, Adam. I love the question and the answer as well. Really interesting.

Thank you. All right. So that's probably a good place for us to wrap up.

And thanks so much for those questions that came through. Thanks very much for staying on for this length of time as well; we hope you found it useful.

This is the kind of discussion that makes these sessions really valuable for everyone. So we've enjoyed it. I know I've enjoyed talking with you today, Adam.

[Adam]
Yeah, it's been great. And so remember, if any of today's discussion has sparked ideas for your own work, we're always happy to continue the conversation. Just click the button on the screen.

Thanks again for joining us. We really appreciate it. And have a great rest of your day, everyone.

16 Topics · +130 Resources · +50 Webinars

Book a time to chat with our experts

Book time

Resources

Download webinar resources

Slides: Stop Learning AI Prompting
How To Create a Prompt Agent
Prompt Style Guide
Prompt Agent Example Prompts
Prompt Agent Instructions
Prompt Agent Generic Personas

Sign up to our Resource hub & Unlock all content

Recommended

You might also like