AI Use Policy
Armour Risk Consulting believes that technology should be used in a legal, responsible and ethical way. This includes how we use Artificial Intelligence (AI) tools in our business operations. This policy sets out how and where AI is used, what it will not be used for, and how we ensure compliance with data protection laws such as the UK GDPR. It is primarily concerned with the use of generative AI systems, but for the avoidance of doubt, the same principles apply to all AI systems, including generative AI, machine learning systems, large language models, and general AI.
Who this policy applies to
This policy applies to all individuals working at or on behalf of Armour Risk Consulting, including:
- Permanent staff
- Company directors
- Temporary or contract workers
- Freelancers and consultants
- Any third parties acting under our instruction
Everyone covered by this policy is expected to follow the principles outlined here when using AI tools in the course of their work with, or for, the company. Breaches of this policy may result in appropriate action being taken, depending on the nature and severity of the issue.
Where we may use AI
We may use AI tools to support the following activities:
- Marketing & Communications:
AI may be used to help structure and draft social media posts, website content, newsletters, and general marketing copy. This allows us to maintain consistent messaging and save time on repetitive content tasks.
- Internal Efficiency:
Occasionally, AI may assist with things like formatting internal notes, summarising non-sensitive documents, or generating draft ideas for presentations or non-client-facing materials.
AI may be used to support staff with day-to-day tasks such as drafting ideas, summarising general content, or improving structure and tone. It is not a replacement for staff expertise or professional judgement. All AI outputs are reviewed by an appropriately qualified team member before use to ensure accuracy, quality, and alignment with our standards.
Where we do not use AI
We do not use AI in any of the following areas:
- Client Documentation:
We do not use AI tools to issue client-facing documentation or any compliance-related materials. All client documentation is created and reviewed by qualified professionals.
- Handling Personal Data:
AI tools are not used to process, analyse, or interpret any personal or sensitive data, including data we may handle on behalf of our clients. This ensures we remain compliant with UK GDPR and maintain the trust of our clients.
- Key Decision Making:
AI is not used to make decisions that affect clients, employees, or stakeholders. All decision-making remains fully human-led.
Data Protection & Privacy
Any use of AI is carried out with full respect for data privacy laws and principles. We do not upload or input client data into AI tools, especially where it contains personal or confidential information.
Our team is trained to recognise what can and cannot be processed through such tools.
Bias and Fairness
We recognise that AI tools can reflect biases present in the data they are trained on. This can influence the tone, assumptions, or accuracy of the content they generate. To reduce the risk of bias:
- All AI-generated content is reviewed by staff before use
- We avoid using AI to produce content where neutrality, fairness, or legal accuracy is essential
- We encourage staff to flag anything that seems biased, misleading, or inappropriate in AI outputs
By treating AI as a support tool only, we aim to ensure our work remains balanced, inclusive, and aligned with our values.
Review and Ownership
This policy will be reviewed regularly and updated if our use of AI changes. Responsibility for compliance with this policy lies with our management team, and any questions or concerns can be raised by contacting enquiries@armour-risk.co.uk.