Published on 3 Nov 2025

AI Risk Management: How to Avoid Things Going Wrong

45 Minute Watch
Adam Emirali, Group Marketing Director
Sean Stack, Senior Consultant

Your teams are using AI daily, and your governance framework may not be keeping up with their use or with the tools' capabilities.

One confidential document pasted into ChatGPT. One fabricated statistic in a report. One biased recommendation published to the public. Any of these could trigger an OIA request, a privacy complaint, or a media story — and many organisations have no protocol for preventing and responding to these risks.

AI incidents are happening now — don’t wait for yours. Join us for the practical playbook everyone needs. You’ll learn:

· The five critical AI risks (with real examples)
· Our 5-step incident response protocol
· The SAFE checklist to stop errors before they happen
· How to build a no-blame culture where staff feel safe raising concerns

Who should watch? This is suitable for anyone using AI, especially if you’re a leader.

Webinar Transcript


Kia ora koutou, I'm Sean, a senior consultant at Allen and Clarke and one of our in-house AI sceptics. Welcome to this session on AI risk management: how to avoid things going wrong. So I'm contractually obliged to give a bit of a spiel about Allen and Clarke.

So we're an Australasian-based consultancy dedicated to making a positive impact on communities throughout Aotearoa, Australia and the Pacific. We specialise in strategy, change, policy and regulation, research and evaluation, and more. And as an organisation, we give a damn about empowering our clients to overcome society's biggest challenges, which is why we regularly run these webinars, create desk guides and provide expert advice whenever we can.

So today I'm joined by my colleague Adam, who is doing his best to ensure that when the singularity arrives, our new AI overlords will look favourably upon him. In his wisdom, he's decided to have me join him to host this discussion on how to mitigate key risks organisations face when utilising AI. So Adam, over to you to tell us why you've decided to run this session.

Yeah, well, kia ora koutou and hello everyone, including my AI overlords, apparently. Yeah, if you're watching this in the future, you know, I'm your guy, bring me back. So yeah, you probably have seen me talk about AI quite a bit recently.

So yeah, I'm really keen to cover some of the risks, because I think AI is great, but it's not risk-free. So yeah, cool.

So for a bit of context for today's session, we see AI risks as symptoms of underlying organisational issues. So for today's session, we're going to identify what the key risks are, then we're going to have a look at the underlying issues that give rise to those risks, so that we can explain how your organisation can address those underlying issues and mitigate the risks. So what that means is that we're going to have probably around 10 to 15 minutes of theory, which will be interesting before we get to the practical steps that you can take to mitigate AI risk.

And if you haven't seen our Getting Started with AI or AI Toolkit sessions, I strongly recommend watching those. So Adam, let's get started with a brief overview of the risks associated with utilising AI. What are the symptoms that people should be thinking about mitigating in their organisations? Yeah, so the slide will pop up soon, that shows you the seven biggest risks when teams of people are using AI today.

And look, based on the questions and challenges submitted when people registered, it feels like most people are familiar with these. So what I'll say is, you know, if you're unfamiliar, then recommend you kind of download the slide afterwards. I think what's probably more interesting is where we've ranked the risks.

So some of them will obviously be higher or lower depending on your organisation right now. But I think, in general, that's how I would rank them. Yeah, cool.

And so I find the fact that you've put de-skilling at the top really surprising. And that's because I don't think that AI is anywhere near good enough to be replacing the skills that are valuable for most organisations. So what do you say to that, Adam? What do I say to that? Yeah, so look, what I would say is that de-skilling is really impactful.

And the challenge with de-skilling is that it compounds over time as both your team get more capable in using the tool, but also as the tool itself gets more capable. And so there's kind of that compounding nature of that risk. Plus, there is a real challenge around detecting and also mitigating that de-skilling risk.

So to give you kind of just a brief example, you know, 12 months ago, if you were to look at an AI-generated image, you know, you would spot it pretty much straight away in a kind of lineup, because there'd be blobs for hands, weird shadows, and kind of the vibe would be off. Now, if you're, you know, generating images using NanoBanana, if you don't know what NanoBanana is, I'd strongly suggest chucking it into Google. It's amazing.

But if you're generating images using NanoBanana, I mean, they're really, really realistic. And so that's just an example of where, you know, AI as a tool has got way better. And so it's much easier for kind of people to take the easy path.

Yeah, right. And also the other thing, the way that you've ranked those risks, that took into account a time horizon. Yeah, that's right.

Yeah. So the time horizon is kind of three years that I've looked at, which is where that kind of compounding nature comes into effect. Yeah, right.

Cool. And then another one on your list, which I think some people might find a bit surprising as well, is that you've put accuracy and hallucination at number four. So in our preparation, Adam and I discussed how these are actually two distinct things that are often confused.

So what's going on here? Yeah. So first of all, yes, it is ranked fourth. But, you know, these rankings are more like a general guide as opposed to kind of, you know, steps.

But ultimately, what's happening with hallucination and accuracy, and there should be a slide on your screen now, is that hallucination rates are actually already really low. Hallucination being, you know, the model literally making things up. And those rates are getting lower.

So the chart shows you how the models perform on benchmarks. Just a quick example, because there's obviously lots of dots there. GPT-4 had a 50% drop in its hallucination rate in just 12 months.

So it started from, amazingly, around 3.5 percent and came down to about 1.7 percent. So again, I mean, we're talking small rates here. Yeah.
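To make the size of that drop concrete, here's the rough arithmetic using the approximate figures mentioned above (illustrative numbers only, not exact benchmark data):

```python
# Illustrative arithmetic only: approximate figures quoted in the discussion, not exact benchmark data.
rate_then = 0.035   # roughly 3.5% twelve months earlier
rate_now = 0.017    # roughly 1.7% now

relative_drop = (rate_then - rate_now) / rate_then
print(f"Relative drop in hallucination rate: {relative_drop:.0%}")  # prints ~51%, i.e. about a 50% reduction
```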

It might be helpful for people to know: do you know why models hallucinate? Like, what is it that leads it to do that? It's hard for people to research this to know exactly, but it's believed to be part of the functionality of the models themselves. It's kind of been likened to the creativity element. So if you train all the creativity out, then the model won't be creative enough to know how to lay out meeting minutes, for example.

So you still need that creative element. And so, yeah, that's why it still exists. Cool.

And so that hallucination rate, like we sort of showed, I think it's much lower than what I would expect and probably what a lot of other people would expect, which indicates there's something else going on that people are confusing with hallucination, which I guess is the accuracy part, isn't it? So can you speak to that a wee bit? Yeah. So that's exactly it. I mean, we have all experienced AI tools making stuff up, right? And obviously, and sorry, the rate at which you will experience that comes down to how we use them.

So choosing the right tool for the job, prompting it effectively and providing it enough context. I know that makes AI sound more complex than we've all been led to believe. We're all kind of led to believe that AI is this like magic box.

You chuck something simple in, you chuck your sentence in and out comes this perfect 40 page report or something like that. I think of it more like Excel. So if you think back to using Excel for the first time ever, you didn't start writing really complex formulas straight away, right? You kind of built that skill up over time and now you can use Excel really efficiently and it's a really powerful tool.

So AI is kind of the same as that. Yeah, cool. And so that was sort of a brief look at the risks because people can download that slide that had them all on if they're interested in more.

Adam loves to talk about AI risks, so you can hit him up if you want to hear any more. He genuinely does. I just like talking about AI really.

He does, yeah. Anyway, so now we're going to talk about the underlying causes that we need to treat to stop those risks popping up. So the risks that Adam discussed actually map to the four key elements of any organisation's operating model.

The reason that happens is because all organisations develop their operating models in a pre-AI world. For most organisations, their pre-AI operating models do a great job at mitigating pre-AI risks. However, AI is here now and that means that suddenly there are widening gaps in organisations' governance, culture, capability and value creation processes, all of which increase the likelihood of the risks occurring and the severity if they do occur.

So on the slide now, you can see those four model elements along with brief descriptions of what they mean. You'll also see how the AI risks map to those elements. So mitigating the risks requires addressing those gaps.

And something I will just say is that even if you're a team lead or someone working within a team, these things apply equally within your team as they do if you're the chief executive of a large organisation. So Adam, I'm curious about the order that you've ranked these elements in because it's, yeah, anyway, so can you give a brief overview of your rationale before we move on to what people actually need to do to mitigate the risks? Yeah, so first I need to apologise to our strategy and optimisation team. I guess I was going to apologise for you.

Oh, were you? You'll let me apologise? Because I bet if they're watching, they're looking at that slide going, what have you done? So I've reordered their slide, so I'm sure I'll get told off later, just to make it kind of applicable to this conversation. So governance is at the top, and the reason I put governance at the top is that this is where your AI policy, incident management framework, and enterprise level tools sit. So if you don't have those things kind of set up at the moment, then your AI confident staff are making it up as they go along, and you kind of have the highest level of risk you can imagine.

And the more cautious team members, without those governance things in place, are using that uncertainty as a reason to moderate their use of these tools. Yeah, cool. And then you've got, so after governance, you've got culture.

So why have you put that second? Yeah, so culture is the most important thing to get right. So it's also the hardest, unfortunately. And getting the culture right is the thing that will deliver you the long-term success for your people, and also for your organisation.

But you can't start working on your AI culture until you've got that governance piece in place. Yeah, right, cool. And then you've got capability and finally value, which I guess some people might think is a wee bit counterintuitive, but can you explain those rankings? Yeah, so capability comes after governance and culture, because it depends on having both of those things in place to have real impact.

So obviously people need to know how to use the tools, but they should only be doing that once you've determined what tools they should be using, how they should be using it, and starting to set that culture up. Otherwise, you're not really going to set yourself up for success and you're going to invest in this capability stuff that's going to go nowhere, if you like. And then I've put value last, because when it comes to AI, the value piece comes more naturally once you've got those other elements in place.

There are things you can do to speed up the value creation, but they need those other elements. Yeah, awesome. That makes sense.

Cool. So our audience will be pleased to hear that that's the end of our theory section. So now we're going to be getting into the practical steps that people can take to close those gaps and mitigate risks.

And again, these steps don't have to be done on an organisational level. They can be done within teams, they can be done within smaller groups of people. So what are we starting with, Adam? Yeah, so obviously, we'll just go through them in order.

So we'll kick off with governance. So as I said, that's the most important thing to get under control first. So on the slide, there's a continuum there to help you understand which bits to do first.

So the first thing is: set up your governance group and pick a technical lead. Your governance group needn't be a big group of people. Just some people with privacy and IT experience; and having someone close to the actual doing of the work is really important to have in that governance group.

And obviously, you need AI skills, generally from a technical lead. But when you are picking that technical lead, again, you need a technical lead that understands the core work that your organisation or departments are delivering. Don't assume that person is coming from IT.

Because going back to what we said before, culture is the hardest and most important thing to get right. And so you might actually be better off choosing someone who isn't from IT to get that right. Yeah, like a marketing director like we did with my friend Adam here.

So once you've got your governance group and your technical lead sorted, what's next? Yeah, so the next thing is to create an AI policy. Now, I'm not a policy expert, Sean. But my two cents is that you can move really quickly on getting a base level AI policy set up for your organisation.

And when I say really quickly, I mean, literally, you know, there's six kind of core elements, which I think should be on your screen now. And if you get those six core elements in your policy, kind of quick, smart, then your policy, you know, it might only be like four pages to start with. The couple of quick things that I will mention from that slide is that you really want to have a clear decision tree in your policy.

So this makes it really easy for your team to know what to do. Things like: am I loading data into the tool, i.e. what context am I using? And if yes, then what checks and balances do I need to have in place? Is my final product going outside the organisation? You might have a different level of requirements around things like that. So a decision tree massively helps with that, as the sketch below illustrates.
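To give a feel for what that decision tree can look like in practice, here is a minimal sketch; the questions, checks and wording are hypothetical, not a prescribed policy:

```python
# Hypothetical sketch of an AI-policy decision tree; adapt the questions and checks to your own policy.

def ai_use_checks(loading_data: bool, data_is_sensitive: bool, output_goes_external: bool) -> list[str]:
    """Return the checks to complete before using AI for a task."""
    checks = []
    if loading_data:
        checks.append("Pause: confirm this data is allowed to go into the tool.")
        if data_is_sensitive:
            checks.append("Use the approved enterprise tool only, never a public tool.")
    if output_goes_external:
        checks.append("Peer review plus an AI-use disclosure before it leaves the organisation.")
    if not checks:
        checks.append("Low-risk use: run the standard SAFE check before relying on the output.")
    return checks

# Example: drafting a public report that draws on client data.
for step in ai_use_checks(loading_data=True, data_is_sensitive=True, output_goes_external=True):
    print("-", step)
```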

And then the last thing is that in your policy, I like to have what I call kind of caution triggers. So these are little catchy phrases that leaders in your team can repeat over and over until they stick. And the point here is it creates moments for the team to be kind of extra cautious.

So if you think of your decision tree, you might have one caution trigger that's like: hey, if you're loading stuff in, have a think. So it's like, stop and think before you load that document in, and then obviously you can have some sort of checklist or something like that. Yeah.

Another example could be, you know, pause before you publish. Yeah. Yeah.

Right. Yeah. And I think if your initial AI policy has these six things in it, you've got a really effective safety net that you can iterate on over time in a safe way, which is really important given the pace of change in this area.

Absolutely. Yeah. And then so what after you've got your AI policy sorted, what should people be thinking about next? Yeah.

So a few people dropped questions in the comments about risk registers and that sort of stuff. So I actually wouldn't advise having a separate AI risk register. I would add AI to your existing risk register, because realistically, a privacy breach because of AI use is still a privacy breach.

The implications and the follow-up process for a privacy breach are essentially the same. Yeah. You know, obviously there's a couple of additional steps when it comes to AI, but that doesn't require an entirely separate risk register.

Yeah. So, and the final thing that I will say that I can talk about endlessly, don't laugh at me, is that you need to roll out an enterprise AI solution as quickly as you can. So to do that, you should ideally pick kind of your main service provider.

So if you're with Microsoft, that is Copilot. If you're with Google, that is Gemini. Ultimately, you can choose a least privilege approach and that enables you to kind of roll out that enterprise tool really fast.

And that is the thing that really mitigates that privacy and confidentiality risk. It really drops the risk from like danger, red flashing light to like, OK, you know, we now have a more manageable level of risk. Yeah, great.

Oh, shut up. All right. So that is the governance section done.

So once all those bits are in place, everything else, so closing the culture gap, the capability gap and the value gap becomes much easier and safer. And to be honest, even once you've just got that governance group and technical lead in place, everything else becomes a bit easier. So if you do need help getting started on moving faster, feel free to email Adam.

But what we'll do now is we'll move on to culture. So culture, it's like Adam said earlier, it's the hardest gap to close. And also, as I'm sure anyone listening is aware, culture is the most important thing for long term success.

So you can have perfect policies, but culture determines whether people actually follow them. So the question then is: how do you build a culture of enthusiastic scepticism, where people use AI confidently but verify everything? How many steps have we got here? So we've got five steps for people. And I think that framing of the enthusiastic sceptic is, I think, the best way to think about it.

And I would say that that's what you are, Sean. Yes, we've come to that conclusion over the last two days. Reluctantly.

Yeah. And so to kind of build that culture, the first place to start is that leaders must model the behaviour. And by leaders, I don't necessarily mean the CE, although they should be doing it too.

But you know, if you're a team leader or something like that, you really have to be using AI visibly. Your culture is not going to budge unless you are doing that, because people copy what they see, not what they're told. Yeah.

And so what does visible use actually look like? Can you give anyone practical examples? Yeah. So look, the main way I think about visible use, and kind of the easiest and safest option, is to use it in internal meetings. So you can have your AI assistant help you with the meeting prep.

You can have it generate images or graphics for use in your slides or kind of as conversation points. Use it to brainstorm ideas during the meeting. Use it to transcribe the session, create the minutes.

And so what you're doing there is you're kind of showing the team that, hey, there's a lot of different uses for this tool. You're doing it on kind of internal, in a much more safe environment, because it's kind of internally focused work. And what you'll see is that because you are actively and visibly using it, you'll see that the team actually start doing the same thing.

So if you're a kind of more advanced leader, then what you can do is you can set up some of our simple kind of tools that we've provided. So one example is the Board of Strategic Advisors that I recommend everyone set up. So you could say, oh, I've chucked that thing into my Board of Strategic Advisors, and they've suggested these changes or improvements.

And again, that's like just little moments like that that kind of hammer home that AI use. And importantly, share your mistakes. And you're smiling because I had a shocker the other day.

So I had my AI assistant quickly look at some data for me and come up with some percentages. And I was chatting to the team about it in a team meeting. And one of the team members rightly pointed out that the percentages added to 108.

And so the great thing about that, well, first of all, I should object. But the great thing about that is that it's a learning moment. And the data that I used was kind of inconsequential.

And the learning moment out of that was that I could be like, yeah, I didn't verify the information. I now need to put that in the risk register, because it was risky AI use; I didn't verify the outputs. And that was a real example of me being too enthusiastic, in too much of a rush, not enough of that sceptic.
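For what it's worth, the kind of check that catches that slip can be very small; this is an illustrative sketch only, not part of any actual workflow described in the session:

```python
# Illustrative sanity check: shares of a whole should sum to roughly 100 percent.
def check_percentages(percentages: list[float], tolerance: float = 0.5) -> None:
    total = sum(percentages)
    if abs(total - 100.0) > tolerance:
        raise ValueError(f"Percentages sum to {total:.1f}, not 100; verify the AI-generated figures.")

try:
    check_percentages([42.0, 35.0, 31.0])  # hypothetical figures that sum to 108
except ValueError as err:
    print(err)
```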

And so it was a great moment for everyone to go, OK, this happens to everybody. Yeah, cool. And it also happens to be a great segue to our next step.

Did we plan that? I don't think we planned much of this. OK. Yes.

So the next thing is make speaking up about AI concerns completely normal. So where we want to get to is that we want to have the team, when the team say something like, I think there's an AI issue here, we want to make that feel as natural as me saying to you, Sean, there's typos in your document. If I say there's typos in your document, you don't get angry or annoyed at me.

You're like, oh, great. I need to get either someone else or I need to do a more thorough check. So that's where we want to get to.

And why we're doing that is we're trying to bring shadow AI into the light without punishment. So if someone says, I've been using this tool for this task, I'm not quite sure if that was right, that's actually the start of a really good risk management conversation and opportunity, and shouldn't go into your kind of HR processes. Yeah.

And it doesn't just relate to culture. It's also a capability-building opportunity at the same time. That's right.

Exactly. Yeah. And so, well, very early in the session, we discussed how you'd rank de-skilling as the number one risk.

And I suppose the thing that is potentially leading to de-skilling is over-reliance on AI. And that is becoming, well, for some organisations, that's perceived to be becoming a bit of a big issue. And that's really what this next step is about in terms of culture, isn't it? Yeah, that's right.

So we need to create deliberate AI habits that include that critical human-in-the-loop process or step. So the best way to do that is actually by developing AI-assisted workflows, because they kind of force those moments. We'll come to that in a little bit.

But there's a couple of quick tips here that I'd give people. So build a peer review into your processes. So for us, as an example, peer review is just a completely standard thing that we do.

The point here is that when you are choosing who's doing the peer review, you make them aware that AI was used in the document. Yeah. And just on that, speaking from experience, make them aware of exactly where AI was used in the document and what you want them to actually review.

Because otherwise, if you've just given someone a 60-page report and said AI was used in this, that's not particularly helpful. But if you say, I used it to generate this bibliography from this unstructured list of sources, then they know: right, I actually need to spend extra time checking that it's transferred across, that it's the right style and everything like that. 100% correct.

And as you touched on, it is way better to get somebody else to do that, because again, that somebody else might have a bit more of that sceptic and less of that enthusiasm kind of lens. So that's a really good point. The other couple of things, just quickly on that.

So I like to suggest that people run a kind of spot-the-inaccuracies exercise. It's kind of what you touched on. So you can do that.

And then my favourite, and the most impactful thing that I've seen, is that most organisations now require disclosure on any outputs that involve AI. My recommendation is that your disclosure has to include the names and roles of the people responsible for the drafting. And what that really changes is the ownership of that document: because their name is literally written on it, it really changes how thoroughly they check things.

And would you add to that disclosing how AI was used in a more granular way? So and so used AI for this task? Yeah, exactly. So we, I mean, that's, you've asked so now I'm going to go into it. So we kind of have a two tier disclosure set up here.

And the reason for that is, to your point, you know, if we've got a large project, and it's the end deliverable, then in that end deliverable, you know, it might be a 40 page report, but we may have used AI on a bunch of tasks to contribute to that final deliverable. So that's where we kind of have this, we think, requirement to be very explicit about what AI tools were used, what they were used for kind of in the process to get to that final deliverable. We, of course, still include the people's names and roles.

That's kind of the big disclosure. If we're doing something like a progress report, say, so not a final deliverable, but still something that we're providing to a client, then it's still highly important to us.

You know, if AI was in fact used, we'll still have a disclosure on there. And as you said, it will be, you know: Sean, senior consultant, used AI to summarise the project progress from our system. Yeah, great.
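As a rough illustration of that two-tier approach, the sketch below shows one way the two disclosure levels could be generated; the fields, names and wording are hypothetical, not the actual template discussed:

```python
# Hypothetical sketch of a two-tier AI-use disclosure; adapt the wording and fields to your own policy.

def disclosure(names_and_roles: list[str], tool: str, tasks: list[str], final_deliverable: bool) -> str:
    who = "; ".join(names_and_roles)
    what = "; ".join(tasks)
    if final_deliverable:
        # Full disclosure for an end deliverable: tools, tasks, and the people who own the drafting.
        return f"AI tool {tool} was used for: {what}. Responsible for drafting and verification: {who}."
    # Lighter disclosure for interim outputs such as progress reports.
    return f"{who} used {tool} to {what}."

print(disclosure(["Sean, Senior Consultant"], "the enterprise AI assistant",
                 ["generate the bibliography from an unstructured source list"],
                 final_deliverable=True))
print(disclosure(["Sean, Senior Consultant"], "the enterprise AI assistant",
                 ["summarise project progress from our system"],
                 final_deliverable=False))
```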

Thank you. Cool. And then so the next step is to make successes and learning moments visible.

So that's basically just rewarding good practice and normalising near misses, or actual misses, as the case may be. Yeah, thanks for bringing that up again. Yeah, so what we're doing is, again, bringing forward: I used AI for X, it surfaced Y, and here's how I verified the information. And again, yeah, share those near or real misses with the team. Yeah, cool.

And then finally, you've got embed fairness and equity checks. And this one's really important for organisations working with diverse communities, right? Yeah, that's right. So, you know, the way the AI tools are made and built is obviously on huge amounts of training data.

That training data has biases in it and, you know, one of the main biases is actually the volume of data that's available for these communities. And that trains bias into the models. And so you just need to be really aware of that.

And so what I recommend people do in that instance is have someone with cultural competency review that AI-assisted work, because, you know, if you've tried to use, or have used, AI to summarise, say, te ao Māori or Aboriginal principles or ideas, it will confidently give you answers. And if you don't have that high level of competency, you could very well be misled. Yeah, great.

Cool. And then so I think to sum this bit up, if people stay focused on the five actions that we've discussed just now, the cultural gap will start to close slowly but steadily. Yeah.

And you'll build the kind of environment, whether it's within your organisation or within an individual team, where people can succeed at using AI. Yeah. And then so now we're on to capability.

So capability only matters once governance and culture are sort of heading in the right direction. But assuming they are, there's one core skill that I know, Adam, you recommend that everyone masters before anything else. And that's the ability to apply the SAFE framework every time they use AI.

Yeah. So do you want to talk a bit about what the SAFE framework is? Yeah. So we'll just quickly touch on that, the stuff on the slide.

But the SAFE framework is not a policy requirement. It's, as you said, it's a core competency kind of for the team. And the point here is it's kind of like that extra reminder to do your spell check.

So you can see on the screen: S stands for Source, so where did the output come from? A is Accuracy: does it stand up to verification and factual accuracy? F is Fairness: who could this harm or exclude? And E is Exposure: what data did I use, and should I have used it? Yeah. Great.

And so basically, it's just teaching people to apply a new quality filter that before AI, they never had to apply. But now it's just like proofreading, basically. Exactly.
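The four questions come straight from the framework as described above; the checklist structure around them is just an illustrative sketch of how a team might record working through it:

```python
# The SAFE questions as described above, wrapped in an illustrative checklist structure.
SAFE_CHECKLIST = {
    "Source":   "Where did the output come from?",
    "Accuracy": "Does it stand up to verification and fact-checking?",
    "Fairness": "Who could this harm or exclude?",
    "Exposure": "What data did I load in, and should I have used it?",
}

def run_safe_check(answers: dict[str, str]) -> None:
    """Print each SAFE question alongside the answer recorded for this piece of work."""
    for heading, question in SAFE_CHECKLIST.items():
        print(f"{heading}: {question}\n  -> {answers.get(heading, 'NOT YET ANSWERED')}")

run_safe_check({"Source": "Drafted with the enterprise AI assistant from our own briefing notes."})
```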

Yeah. Cool. And then our next step in building capability is to roll out AI training, which is obviously a very broad step.

But Adam, how can people think about doing that? Yeah. So it is very broad, which is why we've kind of popped up a slide to kind of show three kind of sections of your training, if you like. So we'll just talk very briefly about equip.

So at the equip stage, you're talking about what are your policy settings, what tools are available, how do you access the tools, how do you use those tools. And just one key tip here on training for people, once you do get to the using the tool stage, it's very tempting to try and teach everybody how to be a prompt engineer or a context use expert. I strongly suggest that you actually start with a prompt agent that generates detailed prompts and some AI assisted workflows.

It just gets your team going way faster. Yeah. Cool.

And then so the final step for closing the capability gap is building specialist capability. We're not going to go into this too much here because the type of specialist capability and how an organisation builds it is very dependent on the type of work that people are doing. That's right.

Yeah. So Adam, but I know you did have an example of what we mean by specialist capability that you wanted to share. Yeah.

So I know we're starting to get a bit tight on time. So look, specialist capability could be, you know, anonymisation or using synthetic data. We have separate sessions on what those are, and we have a session that also goes into five ways to create synthetic data.

So really encourage you to check those out if you're up to that level. Yeah. Cool.

And so we're almost through into the Q&A section. But before that, we'll briefly, very briefly discuss how to close the value gap. And we've put that last for a reason.

If governance isn't in place, you can't use AI safely. If culture isn't right, people aren't using it consistently. If the capability isn't built, they won't use it well.

But once you've started to close those gaps, the value almost generates itself. But Adam, I know that you've got three steps, because you do love steps, which people can see on their screens, to sort of hasten that process. Yes.

Steps just makes it easy to follow, mate. Yeah. So the first one is figuring out which workflows to automate, not automate, to AI assist.

Yeah, exactly. And so on your screen, you can see a kind of pentagon. First thing is you need to determine which tasks are right for those AI-assisted workflows.

So the main thing there is that you want the tasks that are repetitive, kind of slow for humans to do, or cognitively light, and that don't involve a lot of context switching. So kind of create that list first, is what I'd say. Yeah, cool.

And then once you've got that initial list of those tasks, how can people start to figure out which are best to start, not automating, but getting assistance for? Yeah. So that's the other points of that pentagon. So then as you kind of rank yourself against those other four points, you kind of end up in the centre of that pentagon.

And the ones that are in the centre are the best workflows to start focussing on. The key thing here is that it's not about job titles. So it's not really roles.

It's those workflows and tasks that you're focussing on. Yeah, great. Thank you.
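As an illustration of that ranking exercise, the sketch below scores a few made-up tasks against criteria of the kind described; the tasks, criteria names and numbers are hypothetical:

```python
# Hypothetical scoring of candidate workflows; score your own tasks from 1 (low) to 5 (high) on each criterion.
CRITERIA = ["repetitive", "slow_for_humans", "cognitively_light", "low_context_switching"]

candidate_tasks = {
    "Formatting meeting minutes": [5, 4, 5, 5],
    "First-cut literature summary": [3, 5, 2, 3],
    "Client negotiation": [1, 2, 1, 2],
}

# Rank by total score; the highest-scoring tasks are the best candidates for AI-assisted workflows.
for task, scores in sorted(candidate_tasks.items(), key=lambda item: sum(item[1]), reverse=True):
    print(f"{sum(scores):>2}  {task}")
```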

So that brings us to the end of our main session. But on our audience's screens, they can see all of the different steps across all of the different operating model elements that we've discussed today. Before we get to the Q&A, Adam, I do know, like steps, you love a final tip.

So what's your final tip? I just want the last word. That's what it is. Yeah.

So look, obviously, we've covered a huge amount. So what I'd suggest is you're better off picking the main ones for you, your team or your organisation based on this kind of summary slide that you can see. And start from there.

You're better off making incremental progress than trying to tackle it all in one go. So of course, if you have that really high governance risk, because you don't have the policy and some of those other bits, then I'd pick ones along that continuum rather than down it. Yeah.

And I think it also makes sense to think about it that way. You can start with just picking the first one of each, for example. If you're looking for something super simple, pick the first one of each and build on that throughout the process.

All right. So if you could use a hand with any of this, we're here to help. And you can connect with Adam or if you want someone more sceptical, me, directly by clicking the button below, apparently.

And now we're going to get on to questions because there were quite a lot. So we have a question from Svetlana. Adam, you have privacy and confidentiality as a component of governance and culture, but not for capability.

Why? That is a good question. Look, I think the reason that I've put it in governance to begin with is because we're tackling that governance step first. And so if you can get those governance pieces set up, your AI policy, your lead and your governance group, and your enterprise-level tool, then your confidentiality and privacy risks go down a lot.

So especially that enterprise-level tool. So I guess I'm saying you do still need some capability around those things. But actually, it wouldn't be a massive focus of mine once I've got that governance piece set up, because, think of the governance as the safety net, right? It gets you safer very quickly.

And I think we probably didn't make it as explicit as we could, but applying the SAFE framework, that capability of everyone applying the SAFE framework, is a really important capability for mitigating confidentiality and privacy risk. Absolutely. Yeah, that's a good point.

Yeah, no, great question. Thank you, Svetlana. You made us think.

You did. And then so we've got one from Joel, which is, oh, this is also one to make us think. So I'll hit you with this, Adam.

How can an organisation mitigate ethical or moral risks for different use cases of AI? Yeah. So he's gone on to say, for example, that generative imagery and LLMs impact creative communities, and some people will negatively view organisations that use AI. Yeah.

Yeah. So look, that's a really difficult one to answer, obviously, because every organisation and individual have kind of a different moral compass, if you like. So what I would say, though, is that there is a bit of a changing tide on this.

So I'll just quickly touch on that. So literally in the last week, I think, some of the AI music generators have actually started paying money to artists whose music they've used on training data. So for example, they just did a deal with Warner Brothers.

So kind of the tide is changing, I would say, on that, because I do agree with you that it is an issue. So hopefully, we can continue to see positive strides in that direction. And just what was the other thing? Oh, imagery was another example.

Yeah. Yeah. So with things like imagery, we don't tend to actually use the AI images.

So what we do, again in our kind of AI-assisted workflows, is we'll generate the first iteration of the graphic or whatever it might be. And then we'll still have our design team do the final version; we use that as a design brief for our design team. Yeah, yeah, cool.

So yeah, look, it is a real ethical challenge. Yeah, yeah. And there's no simple answer.

Here's another one from Svetlana, which is one that I think about a lot as well, to be honest. Regarding disclosures, I've heard people saying AI will become BAU. We don't disclose use of Google for every step, or Microsoft Word, or Excel, or a calculator.

Why should we do that for AI? Yeah. And look, Svetlana, it's a fabulous question. And the way I see it is that you're touching on exactly the thing, and I think that will change.

The difference is that with those tools, they didn't have the ability to kind of replace, not replace, I'm trying to find the right framing, to kind of seemingly replace human thought. And I think that's the nuance here.

So because of that, there is going to be a bit of a continuum. So at the moment, we're in this stage where I believe disclosure is the right thing to do, because it means that the responsibility is on the people who are publishing the documents to make sure that it's accurate. There will come a time, I do believe, that disclosure kind of becomes a bit unnecessary, because to Svetlana's point, it's like, well, I googled something.

I don't put Google on there. It's just we're not there yet. And in all the conversations that I have with people about AI, which is like all the time, my view would be that most people aren't ready for even the AI capability we have today.

Yeah, cool. And then we've got another question from Ashley, which I think is an interesting one as well. So is there an option to just avoid AI completely and stay old school, even if your workplace is actively adopting and encouraging it? And then what are the risks of this approach in this fast-moving AI era? Yeah, that's a really great question, Ashley.

And it's one that I think a lot of people are grappling with. The way I think about it is: imagine, you know, the late 90s, early aughts, when everyone's being issued a computer, right? So the most desirable people in the workforce were the people who could use a computer. You didn't have to be a computer scientist or anything like that.

But you know, if you could use a computer, then great, you know, you were kind of ahead, I guess, of your peers. And so I kind of think about AI in that way. So if you can build up these kind of base level skills, then you will be kind of preparing yourself for that future.

Is there an opportunity to stay old school, as you put it, Ashley? I think yes, there is. The challenge is, you know, again, there's kind of a scale where, over time, the opportunities in the workforce for that kind of no-AI skill set will decline. And look, I can't give you a time frame.

You know, is it 10 years? Is it 20 years? I don't know. But that will decline over time. And on the other hand, again, it's like computers, right? You could not imagine going to work now without a computer. Yeah, yeah.

I don't know what people did before computers. It's just a phone on a desk and some paper. What happened? Exactly.

So that's why I think that computer metaphor is a really useful way to think about it. So we're probably coming to the end of our time shortly. But I know a lot of people are talking about AI slop at the moment.

And I believe that includes us and our marketing for this session. So to me, AI slop means something that looks good but doesn't actually have any substance. Adam, what do you think AI slop is? And how can people avoid producing it? Yeah, look, for me, AI slop is AI output where there's a lack of substance to the content that the AI has produced.

And generally, there's kind of a lack of context within the content. The accuracy might be totally poor. And also, it might have like the kind of AI writing structure built within it.

So look, there are a bunch of things that you can do there to avoid becoming an AI slop person. And just very briefly, if you're using... So we actually have a session coming up, next week I think it is, team? Yeah. Next week, which is all about AI writing.

So be sure to kind of come along to that. We've got our friends from the Write Group coming along to host that session. So that'd be a really good one.

But I think beyond that, there are things that you can do to avoid it. I like to use a writing style guide. So that will really help show the model what not to do in terms of how to write content.

And then you need to get that context bit right. So if you're providing it sufficient context, then you'll kind of avoid the slop. Yeah, great.

And then, so I'm assuming someone's going to like wave at me when I'm meant to do the closing bit. Oh, I've just been waved at. However, one final point.

There are a lot of questions from people wanting to know how to work with large volumes of data, and that's a big, big topic. So if you are keen on hearing a session specifically on this, please let us know in the chat.

And then we can, if there's enough interest in that, we can run a special session on how to work with large volumes of data. Cool. But I have been waved at.

And so that brings us to the end of our scheduled time. And so for our next webinar: from 400-plus questions on our previous AI webinars, one theme dominated, which is how to improve the quality and style of AI-generated communication. So like Adam said, we'll be joined by the Write Group, who will share their methodology to solve your AI woes of endless revisions and ineffective documents.

So to register, there's apparently another button on your screen. But that brings us to... actually, I think we're going to try some more questions. Oh, cool.

So if you want to stay on, we'll tackle some more questions. Excellent. Yes.

Please keep them coming. This is great fun. Don't let me leave without talking more about AI.

So we had a comment from Liam, who is wary about how involved AI is becoming in the workplace. So what do you think about that concern, Adam? Yeah, look Liam, I think you're right to be wary about how involved it is. You know, as I said to one of the people who submitted a question before, the use is going to increase over time.

So, you know, again, perhaps Liam, you're an enthusiastic sceptic. And so you might actually have the exact right culture kind of mindset to really succeed with the use of AI. The couple of points that I'd just make on, you know, being wary about it and perhaps being a little bit cautious about using it.

I'm kind of reading into that maybe a bit more than Liam meant, but let's go with that framing. This is the worst that AI is ever going to be. In terms of performance.

In terms of performance, right? So it's only going to get better. So if we think about it in that framing, it's pretty useful today. In 5-10 years time, you know, maybe it's twice, 10x more useful, like somewhere in there, right? If that is the case, then AI skill development obviously becomes more and more important.

So, you know, again, going back to that kind of computer skills example. Yeah, cool. And then we had a question from Piper, which is: any advice for starry-eyed managers who want us to use AI for everything, without killing the dream? And I believe this is speaking from the perspective of someone who is being managed by someone telling them to use AI for everything. So how do we keep expectations realistic, with a balance of enthusiasm and scepticism? Yeah, that's a good question.

Really good questions today. Yeah, god, you're killing me now. Okay, let's try and give Piper some useful advice.

Okay, so how do you manage starry-eyed managers? For me, I'd probably start at that piece that we were talking about around culture. So, you know, is this manager one of the people who is leading and visibly using AI? Because I've experienced myself, you know, people who are like, yeah, we can use AI for everything. And then once they actually start experiencing or using it for themselves, what they quickly realise is that actually AI isn't great at everything.

And so that's probably what I would start with. And then from there, I would start demonstrating to Piper some of those moments where AI kind of did a sub-optimal job. That's quite risky though.

You can imagine how I would approach it, as someone who is not a manager. So, speaking truth to power here. I don't know about that, but yeah, go on.

I think a really good way to approach it could be being upfront about that list of risks that we spoke about at the start, before the risks occur, and having that conversation with the manager. You can even just show them that slide and say: these are all the different things that can occur, and these are much less likely to occur if I'm using AI in these ways, the ways that you feel comfortable and capable with.

And then you can say, if you think it can be better, then there's all these different things that we as a team need to put in place before we can start doing this in a risk-free way. That was a much better answer than my crappy one. Thanks.

You spent too long at the top, Adam. So we have another comment from Sol, who wants to know how to balance speed and accuracy with AI. So what do you reckon about that, Adam? Yeah, so again, we've kind of read into your thing, Sol, a little bit.

And I think one of the areas that we often get feedback around is that everybody's like, oh, I know that I need my prompts to be really good. But to write a really good prompt is actually quite time consuming. And so many people getting started, or earlier on, find that with writing the prompt: one, I've got to learn how to write a good prompt.

Two, I then need to actually write it. And so those kind of barriers, if you like, mean that actually it's a bit faster for me to just get on and do it. And so what I would say is that actually that's where things like a prompt agent, which again, we've got a session on, prompt agent is super useful because you can write a rubbish thing in there and it spits out a really detailed prompt that's structured correctly.

So you don't actually need to be a prompt engineer. It will show you the format, et cetera, and you can just review it and edit it.

Like it's so, so fast. And then the other thing is if you're kind of working in a team and you've got some kind of common workflows, that's where things like a prompt and context library comes in handy. I'm sure lots of people have heard about prompt libraries.

My view is that your prompt library should have a context library along with it. So, you know, here's your prompt and here's the contextual information that you're adding to it. Just to touch on what that is.

So contextual information might be things like AI writing style guide for this task. It might be a template. Here's how I want the information kind of disseminated.

You know, it might be examples of previous good work. Yeah. Cool.
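A minimal sketch of what one entry in a shared prompt-and-context library could look like; the fields and example content are illustrative, not a prescribed format:

```python
# Hypothetical structure for one entry in a shared prompt + context library.
from dataclasses import dataclass, field

@dataclass
class PromptLibraryEntry:
    name: str
    prompt: str                                          # the reusable, well-structured prompt
    context: list[str] = field(default_factory=list)     # style guides, templates, exemplars to attach

entry = PromptLibraryEntry(
    name="Client progress report summary",
    prompt=("Summarise the attached project updates into a one-page progress report, "
            "using the headings in the attached template."),
    context=["AI writing style guide", "Progress report template", "Example of a previous good report"],
)

print(f"{entry.name}: {len(entry.context)} context items attached")
```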

All right. Cool. I think, I suspect we're running out of time now.

I have, however, dropped my clicker. So to check the time, I'm going to have to briefly disappear from frame. And I'm back.

Right. So, as it turns out, that probably is a good place to wrap up. So thank you so much for watching me embarrass myself.

Yeah. So thank you so much for staying on and for all your really thought provoking questions, really great questions. And that's the kind of discussion that really makes these sessions valuable for everyone, I think.

For those of you who are joining the team for our next webinar, I hope you enjoyed that session. And remember, if any of today's discussion has sparked ideas for your own work, we're always happy to continue the conversation. Just click the button on the screen.

So thanks again for joining us. We really do appreciate it. I hope you enjoyed this, found it vaguely entertaining and useful.

Have a great rest of your day. Thanks, everyone. Ka kite.

Allen + Clarke: AI Risk Management Slides