Toby Black and Stuart Beresford share some of the knowledge they have gained through developing AI policy.
Learn from their experience and get a better understanding of why you should be looking into your own AI policy if you haven't already, and what you should be taking into consideration when you do.
“Informative, and a nice, well-paced delivery and well answered questions.” – Leonie Walker, Senior Policy Adviser, ANZCA
“I liked it that you chose not to go for the usual blue skies topics around AI and went with a more practical, next-step topic like developing an AI policy.” – Mathew Chacko, Policy, LINZ
Welcome and thanks for taking the time to join us today. My name is Stuart Beresford and I am a Senior Consultant here at our firm Allen & Clark and I'm also the Privacy Officer for the firm. Kia ora everyone, I'm Toby Black.
I am also a Consultant here at Allen & Clark and I have an academic history of studying AI. I did my Masters at Otago University on the rights and responsibilities of AI use in New Zealand's public sector. Allen & Clark works across New Zealand and Australia for a range of government and non-government agencies, private sector organisations and international organisations.
We have several specialities including evaluation and research, evidence-based policy analysis, regulatory design and implementation, business change and international development assistance. A lot of this work involves synthesising and analysing large amounts of data and information. Over the past year, some of our staff have started to explore whether artificial intelligence could be used to make their work easier, particularly in relation to repetitive tasks that are time-consuming and not intellectually stimulating.
The leaders within our firm were not opposed to using AI but wanted to ensure that it was used safely and did not pose a risk to the integrity of the projects that we are working on. So they asked Toby and me to produce a policy that guides the use of AI within our firm. This webinar will explore the benefits of using AI.
It will look at different ways of defining AI and how those definitions affect the levels of responsibility and risk for each type of AI. In this discussion, we propose distinguishing between types of AI and algorithms based on their outputs, rather than on the technology that produces them. We'll explain more about that later.
The webinar will then provide a brief overview of our journey with developing an AI policy and how this was informed, developed and ultimately delivered. As part of the discussion, we will provide our top tips that people should follow when developing an AI policy for their business and agency. We have a quick poll for you to complete which will help today's discussion.
This poll asks, does your business utilise AI and does it have an AI policy? And we would welcome everyone to complete that poll. So Toby, can you give some more detail about the background to the policy? Sure thing. So at Allen & Clark, we started having discussions about the need for investment in developing and implementing an AI policy for our company.
This conversation happened at the start of this year, particularly in the shadow of the rise of ChatGPT and other tools that have become very noteworthy in the news. The process started with an acknowledgement that AI technologies present unique strategic requirements that do not fit under the existing procedural and technological policies that we had here at Allen & Clark. So once we identified the need for an AI policy, we leveraged the expertise we had within the firm.
So obviously, my academic background in AI responsibilities and obligations in the public sector aligned really strongly with the issue, particularly given the sphere in which we do a lot of our work. And obviously, this was a perfect complement to your expertise in privacy law and privacy obligations as the Privacy Officer here at Allen & Clark. And this is particularly relevant given we identified the bespoke needs around the collection, storage and security of personal information.
Following discussions with leadership here at our company, we agreed on a position on AI use and introduction that we were comfortable with. We'll come back to the importance of these initial conversations when we go through the top 10 tips for developing an AI policy. We recognised the competitive advantage that comes with leveraging certain AI technologies and how they can provide that advantage at a commercial level.
We also identified the benefits for our consultants and our people that are doing the work and a shift in the work that they could be doing. When you talk about AI offering firms a competitive advantage, what do you mean by that? So we have established that AI is here and it's here to stay. Other companies and other organisations are going to be introducing more and more artificial intelligence in their work.
In our initial discussions, we recognised that in the competitive industry we work in, consulting, being able to gain a point of difference when pitching and trying to win work is really important, both for keeping the lights on and for picking up the work that we really want to be doing. And with new technologies seeming to pop up daily, it's important to create a framework which the company can stick to and align with when we want to use some of these new AI tools to gain that advantage. Having said this, we also identified the need to create clear boundaries for our staff, to ensure that any new technologies that are introduced won't negatively impact the reliability and quality of the work that we produce, and that any data our clients have provided to us stays as secure as if we weren't using these technologies.
On that, do you have anything to add here about the privacy implications? Well, I think the only thing I'd say at this point is that the Privacy Act 2020 applies to everyone using AI tools in New Zealand. That Act contains information privacy principles around how to collect, use and share personal information. People using AI tools need to be confident that they are upholding these principles.
Otherwise, they could find themselves in breach of the Act. I will go into a little more about this later, I think. So, how useful do you think AI is for our consultants? How can it help them do their work better? So, Stu, you and I are both consultants here at Allen & Clark and I think I can speak for both of us in that we like to think that the thing that makes us good at our jobs is the ability to problem solve, it's the ability to think critically and it's the ability to deal with complex analysis of problems that are very multi-layered.
We like to be able to think outside the box in the work that we do. Unfortunately, the reality of our work is that we don't get to spend every moment of every day doing this kind of thinking and this kind of work. There's a fair bit of grunt work associated with most of the projects we do.
So this can include significant data entry, coding large numbers of submissions from public consultations, or extensive national and international jurisdiction scans. As a company, we've identified that the value our clients get from working with us is the value that you and I offer as consultants in terms of this analysis and problem solving. But what we can do is automate some of the data entry, which isn't where our skills lie or where clients get the value-add from us.
If we can implement technologies that automate at least some of our processes, then we can be more efficient in our work, we can spend more time doing the things that we want to be doing and that we're really good at, and all of this is possible without compromising the quality of the product that we provide to our clients. Yeah, I definitely take your point on that. I mean, as you know, a couple of years ago we were asked by the Ministry of Justice to support them with the development of the conversion therapy legislation.
We were asked to review the submissions that were received on that bill. 106,000 submissions were received. Most of them were form or minor submissions, but we were sent through around 50,000 submissions.
The first stage of that work was to filter them, to take out all the minor one-liners and the form submissions that were identical, and that took hours. As you know, I calculated when we were preparing for this webinar that we spent 95 hours on that stage of the process alone. We recently used an AI tool to run through those submissions; it took a couple of hours to set the tool up, but it then spat out the results in about five minutes.
95 hours compared to three, I think it was. So it saves time, it saves our client money, and it also helps keep our staff motivated, because they're not doing, as you say, the grunt work, that tedious aspect of the job that gets in the way of us doing the cool stuff. Yeah, you could say it saves our employees' sanity.
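For illustration, here is a minimal sketch of the kind of first-pass filtering described above: dropping one-line submissions and collapsing identical form submissions. The normalisation rules and the 20-word threshold are illustrative assumptions, not the tool or process actually used on the project.

```python
import re
from collections import OrderedDict

def filter_submissions(submissions, min_words=20):
    """Drop very short submissions and collapse identical form submissions.

    `submissions` is a list of plain-text strings. Texts are normalised
    (lower-cased, whitespace collapsed) before comparison so trivially
    identical copies are counted once. Returns the unique substantive
    submissions along with how many copies of each were received.
    """
    unique = OrderedDict()  # normalised text -> (original text, count)
    for text in submissions:
        normalised = re.sub(r"\s+", " ", text.strip().lower())
        if len(normalised.split()) < min_words:
            continue  # treat as a minor / one-line submission
        if normalised in unique:
            original, count = unique[normalised]
            unique[normalised] = (original, count + 1)
        else:
            unique[normalised] = (text, 1)
    return list(unique.values())

# Example: two identical form submissions collapse into one entry.
subs = [
    "I support the bill.",  # too short, filtered out
    "As a concerned member of the public, I believe this legislation is "
    "necessary because it protects vulnerable people from real harm.",
    "As a concerned member of the public, I believe this legislation is "
    "necessary because it protects vulnerable people from real harm.",
]
for text, count in filter_submissions(subs):
    print(count, text[:60])
```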
Yeah, well, I think it does in a way, but that's just one use of AI and it's one example. And in terms of the cost, I think 95 hours times whatever hourly fee applies is quite considerable. So, before we discuss the AI policy itself, I do have one question, and that is: what do we mean when we talk about artificial intelligence? Because for me it means one thing, but do we have a common understanding? So, there are a lot of different distinctions and definitions of AI in terms of how it's categorised, because it's commonly understood that there are many different types of AI, and these different types will have different implications and consequences when they're implemented.
Recently, some of you may have seen the guidance developed by the New Zealand Government and the Office of the Privacy Commissioner that provides a summary of information about generative artificial intelligence and its use. If you haven't seen it, it's a great one-stop shop to get an overview of the government's position on AI use, but also to get an understanding of what AI is and what concerns apply to different types of AI. And you can download it off the website.
Yes, absolutely. In this resource, AI is defined in three different ways and they're used as categorisations. So, there's an overarching definition, within that there's a distinction and within that there's another.
So, the overarching definition is engineered systems that can generate outputs for key objectives without explicit programming. That's a really good general definition for AI, and it's simple enough that I think it's a good basis to work from when you're defining AI for your own policy. Within this, the Office of the Privacy Commissioner categorises machine and deep learning as the next step down.
And machine learning trains machines to make decisions. Deep learning is specialised and typically involves more complex data and decisions. So, this is a much more specific distinction of a certain type of AI.
So, an example of that would be, for instance, cancer screening. Instead of having a radiologist look at different X-rays of patients, those X-rays would be fed into a machine, and the machine would learn to identify the cancer. Yeah, and the decision-making aspect is what really defines this type of technology.
And within that, there's generative AI, which the Office of the Privacy Commissioner describes as using prompts to generate outputs closely resembling human-created content. And that's like ChatGPT? Yes, I was about to say, ChatGPT is probably the most commonly known example of this currently around. But any large language model, anything producing that sort of output, fits into generative AI.
And these are really useful, overarching definitions. But we think that it's quite a broad brush approach to defining AI. And this means that you don't necessarily catch the nuance of how these different AIs have different implications for use and ultimately the policy that they're going to be used under.
We have made the decision in our policy to define AI based on the outcomes and outputs that it produces, and to use this as the key distinguisher for how it impacts other decisions related to the technology. We acknowledge that the idea of a truly conscious, deep AI, or strong AI as it's known in academia, is quite far off currently.
So we haven't considered this in our policy. But we have considered three main algorithm-based programmes based on their outputs. So the first is operational algorithms.
And these are algorithmic processes that interpret or evaluate information in ways that result in, or materially inform, decisions that impact individuals or groups. So what would be an example of the use of such an operational algorithm? So operational algorithms could be used to make decisions on waitlists in the medical sector, or even in the justice sector. They're making decisions, based on inputs, about how to rank the people who are going to be seen by a health professional.
So that's based on urgency, their needs and everything else. What is the next one? So the next type is policy development and research algorithms. And these are analytical tools used to analyse large and varied data sets to identify patterns and trends.
And these can be used to support policy development, to forecast costs, or to model potential interventions. A key distinction straight off the bat for this type of algorithm is who is subject to the decisions and interpretations made. So an example being? Any sort of predictive forecasting that a business may use.
It may extrapolate data to forecast how the revenue stream will be affected by certain decisions they make, say, for instance, in the sales area. Yes, and the main difference here is that these algorithms don't necessarily take an action affecting the individual who is using them.
So operational algorithms provide a decision that will impact people, whereas these policy development algorithms just give information for people to use how they wish. And what's the third type of algorithm that we identified? So the third is business rules algorithms, which are relatively simple algorithms created by people that use rules to define or constrain business activities.
So these are relatively straightforward systems where groups create the rules of engagement for agencies. A key example is websites that provide recommendations based on what you do. That is a business rules algorithm, because it's just saying that if these choices are made, then this outcome follows.
So when we see pop-ups on Facebook with advertisements, what type of algorithm are those? That's a business rules algorithm, and it's based on your behaviour: it's saying that if this person likes these things, they might like this thing. So the different outcomes of these algorithms will lead to different concerns, implications and requirements, in order to ensure that each type can be implemented in ways that are safe, secure and effective. And I think on that, it'd be a good time to go to you, Stu.
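For illustration, a minimal sketch of the if-this-then-that character of a business rules algorithm, along the lines of the recommendation example just described. The rules and product categories are hypothetical and not drawn from any real system; the point is simply that the logic is hand-written rather than learned.

```python
# A business rules algorithm: hand-written if-then rules, no learning involved.
# The rules and categories below are hypothetical illustrations.
RULES = [
    # (set of things the person already likes, thing to recommend)
    ({"running shoes", "fitness tracker"}, "sports headphones"),
    ({"camping stove"}, "sleeping bag"),
    ({"cookbook", "chef's knife"}, "cast-iron pan"),
]

def recommend(liked_items: set[str]) -> list[str]:
    """Return recommendations whose rule conditions are all satisfied."""
    recommendations = []
    for condition, suggestion in RULES:
        if condition <= liked_items and suggestion not in liked_items:
            recommendations.append(suggestion)
    return recommendations

# "If this person likes these things, they might like this thing."
print(recommend({"running shoes", "fitness tracker", "camping stove"}))
# -> ['sports headphones', 'sleeping bag']
```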
What privacy issues should people consider when they're using AI tools? I think the key thing is, and as I explained before, people using AI tools need to take responsibility for their actions. AI technology creates risks for privacy, trust and accountability, mainly concerning the use of personal information.
This is because AI systems often rely on collecting vast amounts of data to train the algorithms and improve performance. This data can include personal information such as names, addresses, financial information, and sensitive information such as the health of the individual, medical records, etc. The collection and processing of this data can raise concerns about how it's being used and how it's being accessed.
The main privacy concern surrounding AI, as you've alluded to, is the potential for data breaches and unauthorised access to personal information. With so much data being collected and processed, there's a risk that it will fall into the wrong hands, that it could be hacked, or that when we release a report it contains information that should otherwise be protected. So, when AI collects personal information, it is essential to ensure that the collection, use and processing of that data is done in compliance with New Zealand's privacy legislation, as I've explained.
And AI algorithms, the algorithms that you've referred to, should be designed to minimise the collection and processing of personal information to ensure that that data is being kept secure and confidential. In other words, it needs to be sanitised. The Privacy Commissioner has recently released some useful guidance to help businesses and organisations engage with AI in a way that respects personal privacy.
These include being clear and upfront with clients about using AI, developing procedures to ensure information is accurate before using it, which is really important, and having human oversight over the use of AI. I think that last one is quite crucial. When you were developing those tips, it was really pleasing to see that they actually aligned with the guidance that the Privacy Commissioner issued.
That guidance is available on the Privacy Commissioner's website, and I would definitely encourage people to take a look at it, because it is extremely useful information and it will help people who are engaging with AI. So, look, Tobes, we've developed some top tips. You've spent a lot of time thinking about them and putting them together for the company.
Is it possible for you to walk through all of those tips, and also identify what good looks like and what it doesn't when engaging with them? Yeah, sure thing, and I think this will be really useful. We've just got the poll results from the start of this webinar, and this will be particularly relevant for the 35% of the audience who currently use AI but don't explicitly have an AI policy. So if you're going to continue using AI, it'll be really good to think about implementing a policy that includes these things. And even if you don't use AI and don't have a policy, some of these tips will still be relevant in terms of thinking about the future state of your organisation and how you might want to get ahead of AI use by having a policy in place.
So the first is understand what the AI that you're going to use can and can't do, and how it's going to use whatever inputs you provide it to come to a conclusion. So this will look like asking about new AI technology and the implications of its use before engaging in external or client work, or even internal work. If you're using any sort of data, you should understand how this technology is coming to these conclusions, and also understand the limitations of AI before using it to carry out tasks.
So you shouldn't use AI if its outcomes don't have a definable explanation. Yeah, and these explanations aren't necessarily always going to be easy to understand for people who aren't engaged with the technology, but there should be an explanation available that at least someone, a representative in your company, understands, has been over, and deems acceptable. So your organisation should always maintain oversight across all AI use.
Yes, so there should be awareness and knowledge about any use of AI within your organisation, but employees should also be able to bring forward possible AI tools to use and implement. If they identify AI that could be really useful for your organisation and it's not currently implemented or approved, then there should be the opportunity for those employees to bring that technology forward as a possible option to be considered.
And, in terms of what this doesn't look like, it's setting up any sort of long-term automated AI process that has no oversight from management or anyone else internally. Yeah, and I think that is a good point, because it leads to the next tip, which is around privacy and ensuring that you take the necessary steps to protect it, such as checking that all the information used is clear of personal data. Setting up a long-term, unattended AI tool doesn't allow for that.
Yeah, and in terms of protecting and ensuring data privacy, which is our third top tip, what would that not look like for you as our privacy officer? I think it wouldn't look like entering personal information willy-nilly, or not checking the information that we've collected from clients, which may contain, for instance, passing comments such as 'this created stress for me'. That sort of comment is a form of medical information, and AI could pick it up and see that individuals who had engaged in a programme were under stress, and that isn't what we'd want; we'd want to sanitise that sort of information. So I guess it's really important, as well as being aware that private information shouldn't go into AI, to also have an awareness of what private information is and what can be included in that.
And I think that that's absolutely right, but it also leads to the next tip, which is around clearly communicating to clients and to people that you're working with around when you're using AI. Yeah, so at Allen & Clark we have a lot of client-facing work and our bread and butter is working with external clients to provide solutions. When we are having these initial conversations with clients and discussing our methodology and how we're going to do this work, we just need to be really clear and upfront if any AI is going to be used and it's the same for any organisation.
Whether the work is external or internal, you're getting it because of your reputation and the people involved in it. You can use AI to automate the parts of this work that we've spoken about earlier, the stuff that employees don't necessarily need to be doing all the time, but it needs to be clearly set out why these decisions are being made, what the implications are in terms of issues such as privacy, and what assurance the client has that this won't affect the quality or security of the final product. And I think the next tip is that you need to create a policy that can be applied to technologies that may arise in the future, not just those that you're currently working with.
And I think the issue around this, and I thought when you developed your tips this was quite a useful one, is that it recognises that using new pieces of AI without any conversations with the relevant people about how they may interact with the policy can be problematic. But the flip side is that if you try to regulate for every single eventuality, your policy is going to be extremely detailed and overly prescriptive, and therefore unwieldy, creating difficulties for users.
The next tip relates to interacting with government guidance on this. Recently, I was at a talk where senior politicians were saying that AI is here and it's here to stay, but it's about having guardrails around its use. Have you got any views on this? Yeah, so over the last five or six years, probably longer than it's been in the news, there have been quite a few published charters and tools provided by the New Zealand government to support safe implementation of AI.
It's really important that your organisation's policy or approach to AI use fits within these overarching ideas that the New Zealand government have supported and promoted. So there's tools such as the AI Charter, which is a document that was published in 2018 that agencies and different organisations signed on to, which essentially promised that when they're using AI, they'd maintain oversight, they'd keep transparency of algorithms and things like that. There's also the Algorithm Assessment Report, which similarly outlines how algorithms are going to be used and agencies can actually enter in the algorithms that they use in their work and disclose how this is affecting what they do, what the inputs are and what are the outputs.
So I think it's really important that you consult these tools before you push out your AI policy, to make sure you're doing right by both your organisation and the people who are going to be affected by AI use. And don't create policies that are in conflict with those rules. Yes, exactly.
So you would agree that agencies and organisations need to be proactive, that they shouldn't wait for the technology to pass them by. You talk about the fact that these policies have been around since 2018, yet as we saw from the poll, very few companies have actually set up AI policies themselves. Yes, and I think the findings from that poll are a really good example of how just because you don't use AI doesn't mean you will never use AI.
Or you might not think you're using AI. Exactly. That's the better way to put it, because Grammarly is a form of AI.
It's predictive, in that it runs an algorithm over how people write and what is appropriate grammar, and then it corrects the user on what it thinks they should be saying. Yes, and there's a general understanding that things like Grammarly are quite low-stakes AI. It's quite difficult to imagine the misuse of tools like that, but it is really important to recognise that it is AI.
So that's a really common tool that lots of different people use, and more and more common tools are going to be using AI, and as these algorithms get more developed, it's going to be more and more important to have a policy. So don't just base it on whether you're using ChatGPT, because that just happens to be in the news at the moment, but actually think more widely about what sort of tools and programmes your company is using. Yes, and don't wait for everyone else in your industry to start using AI before your organisation realises that it needs to start thinking about it too.
And I suppose that this brings us to the next step, and that is that it's really important that when you're developing the policy that you engage with your workforce, that you learn about their technologies that they're excited about. As we've indicated, that's what triggered our policy, because our staff started to explore and experiment with AI tools. Yes, and as you said, Stu, that we were fortunate enough that at Allen & Clark we were encouraged to have these discussions, to have an opportunity to, in a safe way, play around with some of these technologies and identify where we could be getting an advantage.
So allowing your employees to be able to say to people in leadership positions, these technologies, they're exciting, they could really help whatever our industry is. It's really important, and it's also just as important to make sure that the strategic position, the approach of your business isn't decided without checking in on the people that are doing the work in your organisation. The second last one is remember what makes your organisation unique.
When developing a policy, it's important that it fits within your current policy suite. It needs to be consistent in tone and consistent in strategic vision. It's not a set-and-forget exercise where you implement a standard policy that you never look at and never consider in terms of the implications for your organisation.
It's really important that your organisation is represented and heard throughout this policy. So an AI policy for Fonterra will be different to our own AI policy, for instance. Yes, exactly, and I think the last tip that we have is don't be afraid to look for help from outside of your business.
At Allen & Clark, for example, we have yourself and me who had the expertise within Allen & Clark that we could put this AI policy together with support from other colleagues such as our IT and Processes Manager. Your organisation will have expertise within different sectors, but it may not necessarily have the expertise to develop this AI policy. So it's really important to be open to talking to other people in other organisations, particularly within your industry.
If you know that someone else in your industry has developed an AI policy, it's really important not to be afraid to ask how that policy was implemented and what the steps were to make it happen. And I think you're absolutely right, because I discussed a couple of aspects of our policy with the Office of the Privacy Commissioner. Look, in our opinion, proactively thinking about these tips will help people use artificial intelligence tools better and manage the risks.
We encourage everyone who may be considering the use of AI tools to assist them in their work to develop an AI policy before doing so. This actually brings us to the end of the formal part of this webinar, but we do have time for some questions. We have one here, and that is: if you have a large data set, how do you sanitise it? And it would be great to know what type of software you use. Yeah, sure.
So I think probably the best example is the conversion therapy submissions analysis that we did. Do you want to walk through that process, in terms of what we did without AI maybe to start with, so we can talk about that distinction? So obviously there are some keyword searches that you can run. You can particularly be looking for medical or other personal information.
There are tools for identifying names, and they're helpful for checking and making sure those are deleted. That's the kind of information we checked. The other thing is that when you're loading information from external sources, you should check that those reports and data sets also don't contain any personal information.
That does require people to actually spend some time, before they load anything into the AI system, making sure that it is sanitised, so to speak. And as I said, it's just doing some quick-fire checks. For instance, if you've got reports that you've collected from interviews, say the interview with Toby, you would just do a quick word search to see whether the name Toby appears in the interview.
And if it does, then you delete the name, and then it's ready to be loaded. Once we've done that sanitising, we upload the data. At Allen & Clark, we use NVivo, which is a data management, storage and organisation tool. It handles very large amounts of data, and we can then group it by key themes and key terms.
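For illustration, a minimal sketch of the kind of quick-fire name and keyword check described above, run before anything is loaded into an AI tool. The name list, sensitive keywords and redaction placeholder are hypothetical; a real sanitisation process would use fuller lists and manual review.

```python
import re

# Hypothetical illustrations; a real process would use a fuller list
# of interviewee names and sensitive terms, plus a human check.
KNOWN_NAMES = ["Toby", "Stuart"]
SENSITIVE_KEYWORDS = ["stress", "diagnosis", "medication"]

def find_personal_info(text: str) -> list[str]:
    """Return the names and sensitive keywords found in the text."""
    findings = []
    for term in KNOWN_NAMES + SENSITIVE_KEYWORDS:
        if re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
            findings.append(term)
    return findings

def redact(text: str, terms: list[str]) -> str:
    """Replace each flagged term with a placeholder before upload."""
    for term in terms:
        text = re.sub(rf"\b{re.escape(term)}\b", "[REDACTED]", text,
                      flags=re.IGNORECASE)
    return text

notes = "Toby said the consultation process created stress for him."
flagged = find_personal_info(notes)
print(flagged)                 # ['Toby', 'stress']
print(redact(notes, flagged))  # placeholders in place of the flagged terms
```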
One of our listeners, Gareth, has mentioned that we need to consider the policy distinctions between AI use in a public space versus a contained space.
Tobes, do you have any thoughts on this? Yeah, so I think it relates to what we've already spoken about today in terms of any external facing AI use. It's much more high stakes in terms of misuse of a technology or misuse of data. If we're using AI in a public space, there needs to be complete confidence in the reliability of whatever the decision and data is being produced.
There also needs to be confidence that the data is protected. So, for example, ChatGPT is a public domain tool, which means if you're putting data into that public space, that data might not necessarily stay within your own private use. So public tools means that the data you're putting in, even though you might get really good data out, could be going anywhere.
If you have a private, internally hosted software, then that data is contained within. I didn't know that about ChatGPT. So from what you're saying is that you can use it, but it stores your search in the cloud, and therefore somebody could technically access that information.
Yeah, so things like ChatGPT, they are secure in terms of their own use, but they're storing that data that you're putting in somewhere. So there could be external forces that can access that data maliciously. So whenever ChatGPT is being used, it needs to be thought of as a transaction.
You're giving data in and getting data out; you're not just getting something out for nothing. So it's another good reason that you need to be very, very careful and spend the time sanitising that data.
So what responsibilities do users have when using tools developed through potentially dubious means? I think it's about creating a culture where people can be upfront about where data has come from. For example, at Allen & Clark, even if good information or a good product is produced, we can't afford, in terms of our reputation and the trust that we have with clients, to be producing things that come from dubious or untrusted sources. So it's about making sure that everyone knows that using a dubious AI tool can have really serious implications for your reputation as an organisation, because it's a really dangerous game to hope that no one figures out where the information has come from.
And that's why we really recommend disclosing every piece of AI that you use if you're working externally with clients. And also, from the ChatGPT example, another consequence is that if the information you're loading into ChatGPT does contain personal information, that would then expose you to a potential breach under the Privacy Act, wouldn't it? Yeah. And also, just on that, within our policy template that we will be sharing, we talk about the need, depending on your position on AI, for it to interact with existing policies, such as the code of conduct within an organisation.
So it's about making sure that this AI policy, whatever the limitations on the accepted use of AI, they will be backed up by tools such as your internal code of conduct. So there are real consequences if people flout these rules. So there's a couple more questions that we've got.
And the first one is one that I was actually going to ask myself. And that is: what considerations should AI policies have about entering Māori intellectual property and information into an AI tool? Yeah, so there's really interesting work going on at the moment about the importance of data sovereignty for Māori and the fact that Māori data is taonga; it can't be treated as open access, because, much like other data, but particularly for Māori, Māori data should be managed by Māori, for Māori. So there's currently work taking place in government to ensure that any AI policy or direction has direct alignment with Te Tiriti o Waitangi.
So it's about allowing that autonomy for Māori to deal with Māori data. If your organisation is dealing with any sort of personal data about Māori or for Māori, then it's really important that you have very deliberate conversations and considerations around Te Tiriti o Waitangi and how you're going to manage that data in a safe way. That's a good point, and definitely one that I think many people will be discussing further.
Yes, it's very much a developing conversation about how this is going to be introduced. So the next question that we've got is: how do we take the misinformation and alarmism out of public discourse on AI? The person who's asked this question has commented that AI is ubiquitous in transportation and other technologies that are low-risk from a privacy perspective. AI is not inherently bad.
It's how we use it that could be bad. Yeah, and I think I completely agree with this point. And it's actually about having more discussions like this one.
It's about the public discourse about AI being more informed. Because as this commenter said, AI is a power multiplier. It doesn't mean it's a good thing or a bad thing.
It just means that whether people want to use it for good or bad, that can be multiplied. So it's actually just about having these open discussions, creating a platform for people to talk and discuss the use of AI in a really safe place, and not being shouted down by that alarmism. And the last question that we've got is: how do we have quality assurance over the output? In terms of AI-produced outputs, I presume.
It's about not relying solely on AI. There's this idea of automation bias. If you or I make an honest mistake in our work, whatever the scale, it's generally understood that human error exists. It's not malicious, and it doesn't generally reflect on our standard of work.
It might just mean that we made an error. But we don't hold AI to the same standard. If AI makes a mistake, we kind of want to throw the baby out with the bathwater and get rid of it.
So it's about being able to work through and understand what the AI did. If the AI has made a mistake that you don't understand, then there's probably an issue with how the AI has been coded. And that may have been coded by you, but it may have been coded externally.
But it's about understanding that you need to have that oversight, which we've talked about already, and have that knowledge of how this AI is working to come to these outputs. Thank you for taking the time to listen to us today. I hope you've found the information that we've provided valuable.
If you have any further questions, please feel free to contact us and we will do our best to answer the questions that you may have.