At Allen + Clarke, we use AI technology to enhance our work while maintaining the human expertise and critical thinking our clients expect and value. We believe in transparent, ethical AI use that respects privacy and delivers better outcomes for clients and communities.
How we use AI:
- We employ AI tools to improve efficiency and provide richer insights across our service areas.
- Our team maintains full oversight of all AI-supported work, ensuring quality and accuracy.
- We assess all AI tools against privacy, security and ethical standards before use.
- We remove personal information from data before processing with AI tools.
- We disclose AI use in our client proposals and throughout project delivery.
Our safeguards:
- An AI Steering Group governs our AI use, maintaining a register of approved tools and practices.
- We follow a robust decision framework for determining appropriate AI use.
- All team members receive training on responsible AI use, its limitations and risks.
- We align our practices with Australian and New Zealand public sector AI ethical principles.
- We conduct regular reviews of our AI approach as technology evolves.
- We will not input your personal, confidential or sensitive data into AI tools without your express permission. Should permission be granted, the data will be anonymised before processing.
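As an illustration only, the kind of anonymisation step described above can be sketched in a few lines of Python. This is a hypothetical, minimal example (the patterns and placeholder labels are ours, not part of this policy); in practice de-identification covers far more than email addresses and phone numbers and would use a vetted de-identification tool.

```python
import re

# Illustrative sketch only: strip two common personal identifiers
# (email addresses, phone numbers) from free text before it is sent
# to any external AI tool. Real anonymisation is broader (names,
# addresses, client identifiers) and uses vetted tooling; these
# regex patterns are hypothetical examples, not our actual process.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "[PHONE]": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each match of each pattern with its placeholder label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(label, text)
    return text

sample = "Contact Jane at jane.doe@example.org or +64 21 555 0100."
print(redact(sample))  # personal identifiers replaced with labels
```

The point of the sketch is the ordering: redaction happens before any data leaves our systems, so the AI tool only ever sees the placeholder labels.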
We're committed to using AI as a tool that enhances, never replaces, our professional judgment. This means you benefit from innovative solutions while receiving the thoughtful, human-centred service that defines Allen + Clarke.
Principles of use
We embrace the following principles as the basis of our AI Policy for Allen + Clarke:
- Ethical Use - fair, responsible and ethical AI use, ensuring that AI systems are aligned with societal values. This means fairness and non-discrimination in AI applications; accountability for AI decisions, with agencies taking responsibility for AI-driven outcomes; and public benefit, ensuring AI is deployed in ways that serve communities and individuals rather than creating harm.
- Transparency and Accountability - AI use must be transparent, explainable and accountable to our clients. This includes clear documentation of how AI makes decisions; disclosure of AI usage to affected parties where it affects services or decision-making; and mechanisms for review and redress in case of errors or unfair impacts.
- Human Oversight - AI should support, not replace, human decision-making. To prevent unintended consequences, this requires human-in-the-loop approaches that ensure people retain ultimate control over AI-driven decisions, and means that AI should not be used to make high-risk or sensitive decisions without human review.
- Risk Management - it is important that we identify, assess and mitigate risks associated with AI. The NSW Government provides a comprehensive risk assessment tool for departments to use during procurement. It is important to clearly manage:
- bias and discrimination, ensuring AI does not reinforce inequalities
- unintended consequences, requiring agencies to evaluate potential harms
- security risks, ensuring AI is protected against misuse or cyber threats.
- Data Privacy and Security - privacy and responsible data use are central to this policy. The AI systems we use must comply with privacy laws (Australia's Privacy Act 1988 and New Zealand's Privacy Act 2020), ensure data minimisation (collecting and processing only what is necessary) and be secure, protecting personal and sensitive information from misuse.
- Fairness and Human Rights - ensuring AI does not exacerbate inequalities or harm vulnerable communities is a priority. AI should be designed and deployed without discrimination. Indigenous rights and cultural considerations should be front of mind. We must take active steps to mitigate biases in AI decision-making.