ethicAil AI Policy

ethicAil is committed to genuinely engaging with our stakeholders to ensure that our AI is aligned with their needs and values. This AI Policy sets out our responsibilities, processes, and commitments towards ensuring Responsible AI usage across our organisation.

1. Purpose

This AI policy aims to establish guidelines and best practices for the responsible and ethical use of Artificial Intelligence (AI) within ethicAil. It ensures that our employees are using AI systems and platforms in a manner that aligns with the company’s values, adheres to legal and regulatory standards, and promotes the safety and well-being of our stakeholders.

2. Scope of this AI Policy

This policy applies to all employees, contractors, suppliers, and partners of ethicAil who use or interact with AI systems, including but not limited to large language models (LLMs), plugins and data-enabled AI tools. Its requirements should be reflected in other policies and procedures, agreements and contracts, as necessary.

3. AI Policy Definition

We define Artificial Intelligence (AI) as the ability of machines or software to perform tasks that would normally require human intelligence. AI systems can process data, learn from it, and make decisions or predictions based on that data. AI is a broad field that encompasses many different types of systems and approaches to machine intelligence, including rule-based AI, machine learning, neural networks, natural language processing and robotics.

4. Policy

4.1. Responsible AI Use

Employees must use AI systems responsibly and ethically, avoiding any actions that could harm others, violate privacy, or facilitate malicious activities.

4.2. Compliance with Laws and Regulations

AI systems must be used in compliance with all applicable laws and regulations, including data protection, privacy, and intellectual property laws.

4.3. Transparency and Accountability

Employees must be transparent about the use of AI in their work, ensuring that stakeholders are aware of the technology’s involvement in decision-making processes. Employees must utilise ethicAil’s centralised system for AI governance and compliance efforts (‘AI System of Record’) to ensure transparency of proposed and active AI activities. Employees are responsible for the outcomes generated by AI systems and should be prepared to explain and justify those outcomes.
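
As an illustration only, a register entry in the AI System of Record for a proposed or active AI activity might capture fields such as those sketched below in Python. The field names and example values are hypothetical assumptions for clarity, not a specification of ethicAil’s actual system.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIRegisterEntry:
        """Illustrative record for a proposed or active AI activity (hypothetical schema)."""
        system_name: str               # e.g. "Invoice-classification model"
        business_owner: str            # employee or team accountable for outcomes
        purpose: str                   # what the AI system is used for
        status: str                    # "proposed", "approved", "active" or "retired"
        processes_personal_data: bool  # triggers the data privacy requirements in 4.4
        dpia_completed: bool           # Data Protection Impact Assessment carried out
        risk_level: str                # e.g. "low", "medium", "high" (see section 7)
        date_registered: date = field(default_factory=date.today)

    # Example entry an employee might add before deploying a new AI tool
    entry = AIRegisterEntry(
        system_name="Invoice-classification model",
        business_owner="Finance Operations",
        purpose="Route incoming invoices to the correct approval queue",
        status="proposed",
        processes_personal_data=False,
        dpia_completed=True,
        risk_level="low",
    )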

4.4. AI Data Privacy and Security

Employees must adhere to the company’s data privacy and security policies when using AI systems. They must ensure that any personal or sensitive data used by AI systems is anonymised and stored securely.

We have carried out a Data Protection Impact Assessment (DPIA) for AI and made any necessary changes to our policies and procedures. As part of that, insofar as reasonably possible, we will:

  • Use accurate, fair, and representative data sets to ensure these are inclusive.
  • Not include personal data in data sets, or at least pseudonymise or de-identify it (a minimal illustrative sketch follows this list).
  • Ensure our data consent procedures are always simple and clear and obtain user consent when using AI systems that process personal data.
  • Reflect our use of AI in our privacy statement to ensure users know when their data is being used by AI, whether AI is making decisions about them and, if so, what these decisions are.
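
The pseudonymisation bullet above is a policy commitment rather than a technical recipe. Purely as a minimal, hedged sketch, direct identifiers might be replaced with keyed pseudonyms and unneeded fields dropped before a data set is passed to an AI system; the field names and key handling below are illustrative assumptions, not ethicAil’s prescribed method.

    import hashlib
    import hmac
    import os

    # Secret key used to derive pseudonyms; in practice this would live in a
    # secrets manager rather than in code or a default value (illustrative assumption).
    PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "change-me").encode()

    def pseudonymise(value: str) -> str:
        """Replace a direct identifier with a stable, keyed pseudonym."""
        return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

    def prepare_record(record: dict) -> dict:
        """Pseudonymise direct identifiers and drop fields the AI system does not need."""
        cleaned = dict(record)
        for field_name in ("name", "email"):      # illustrative identifier fields
            if field_name in cleaned:
                cleaned[field_name] = pseudonymise(cleaned[field_name])
        cleaned.pop("home_address", None)         # data minimisation: remove unneeded data
        return cleaned

    print(prepare_record({"name": "A. Person", "email": "a@example.com",
                          "home_address": "1 High St", "invoice_total": 120.50}))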

We are aware of the ICO guidance and other regulation on AI and data protection and have reflected any additional requirements in our policies and procedures.

We have robust cyber security procedures that everyone is aware of and complies with consistently to minimise the risk of AI scams and disinformation.

4.5. Risk Assessment of AI Systems

We will undertake risk management processes (including, but not limited to, risk assessments) to ensure that our organisation’s AI systems, along with those of our suppliers, do not present risks to any of our stakeholders, as further outlined in Section 7 (AI Risk Management) of this AI Policy.

4.6. Bias and Fairness

Employees must actively work to identify and mitigate biases in AI systems. They should ensure that these systems are fair, inclusive, and do not discriminate against any individuals or groups, as set out in the AI Ethics section of this Policy.

4.7. Human-AI Collaboration

Employees should recognise the limitations of AI and always use their judgment when interpreting and acting on AI-generated recommendations. AI systems should be used as a tool to augment human decision-making, not replace it.

4.8. Training and Education

Employees who use AI systems must receive appropriate training on how to use them responsibly and effectively. They should also stay informed about advances in AI technology and potential ethical concerns.

4.9. Third-Party Services

When utilising third-party AI services or platforms, employees must ensure that the providers adhere to the same ethical standards and legal requirements as outlined in this policy.

5. AI Governance Policy

5.1. AI Governance Board

A multidisciplinary AI risk management team (the ‘AI Governance Board’), comprising a diverse group of experts including data scientists, legal and compliance professionals, and ethics specialists, will ensure that AI initiatives are developed and deployed responsibly, in compliance with relevant laws and regulations, and with ethical considerations in mind. The AI Governance Board will create and define roles and responsibilities for designated committees critical to the oversight of ethicAil’s AI initiatives (for example, an AI Ethics Committee).

All key AI decisions and proposals will be subject to scrutiny and approval by the ethicAil Board. The Board will be advised of any concerns or breaches in AI use and will review this policy and our AI performance annually to keep pace with evolving AI technologies and ethical standards.

5.2. AI Governance Implementation & Support

Use of AI by our organisation will have appropriate human oversight, with humans responsible for making all final decisions based on AI outputs. We will maintain oversight by monitoring AI systems’ performance, impact, and compliance with this policy on an ongoing basis.

To support this, we will create any necessary guidelines on the collection, use and storage of data. These guidelines will ensure accountability for the decisions made by AI systems, which may include measures such as auditing, reporting and review processes, and will cover the use of algorithms in decision-making, including the steps we will take to ensure these are as fair and unbiased as reasonably possible.

5.3. Designated AI Officer

We have appointed a designated AI Officer responsible for overseeing the implementation of this policy and providing guidance and support to employees. Our designated AI Officer can be contacted at:

[email protected]

5.4. Independent Verification of Responsible AI Policies

We will seek to enlist support from independent third parties to help verify and assess our Responsible AI activities, and to ensure that we are making progress on our commitment to Responsible AI and meeting the commitments set out in this policy.

5.5. Responsible AI Reporting

We will regularly make available reports and/or documentation that provides details and updates on our Responsible AI activities.

6. Policy on Management of AI

We will support our people in adapting to the changes AI will bring by providing them with appropriate support and skills development, and by taking their needs into account when designing roles and work procedures.

The requirements of our AI policy will be embedded in other relevant policies and procedures, contracts, agreements and other documentation, such as job descriptions. We will ensure that those in our organisation with responsibilities for, or involvement in, AI understand our AI Policy and their responsibilities in delivering it, and are accountable for doing so.

7. AI Risk Management

Our AI risk analysis has included any specific groups who may be at risk and other reasonably foreseeable uses of the technology, including accidental or malicious misuse. The risks have been identified and quantified, and the avoidance and mitigation actions put in place will ensure that the level of risk remains within acceptable limits.
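
This policy does not prescribe a particular scoring method. Purely as a hedged illustration of how risks might be quantified and compared against acceptable limits, a common approach is a likelihood-times-impact score; the scales and threshold below are assumptions for illustration only.

    # Illustrative likelihood x impact scoring; the scales and acceptance threshold
    # are assumptions, not values prescribed by this policy.
    LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
    IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}
    ACCEPTABLE_LIMIT = 9  # scores above this would require further avoidance or mitigation

    def risk_score(likelihood: str, impact: str) -> int:
        """Quantify a risk as likelihood x impact on simple 1-5 scales."""
        return LIKELIHOOD[likelihood] * IMPACT[impact]

    def within_acceptable_limits(likelihood: str, impact: str) -> bool:
        """Check whether a risk remains within the (illustrative) acceptable limit."""
        return risk_score(likelihood, impact) <= ACCEPTABLE_LIMIT

    # Example: a possible but minor misuse scenario scores 3 x 2 = 6, within limits.
    print(risk_score("possible", "minor"), within_acceptable_limits("possible", "minor"))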

8. Implementation and Monitoring

8.1. AI Oversight

Our designated AI Officer will be responsible for overseeing the implementation of this policy, providing guidance and support to employees, and ensuring compliance with relevant laws and regulations.

Where necessary we will also seek to enlist the support and guidance of experienced third-party professionals to help provide adequate oversight and monitoring of our AI usage.

8.2. Periodic Reviews

The AI Officer will conduct periodic reviews of AI system use within the company to ensure adherence to this policy, identify any emerging risks, and recommend updates to the policy as necessary.

8.3. Incident Reporting

Employees must report any suspected violations of this policy or any potential ethical, legal, or regulatory concerns related to AI use to the AI Officer or through the company’s established reporting channels.

We encourage our customers and other stakeholders to also report any suspected violations of this policy or any potential ethical, legal, or regulatory concerns related to AI.

9. AI Ethics

9.1. Stakeholder needs and values

We are committed to genuinely engaging with our stakeholders to ensure that our AI is aligned with their needs and values. We factor into our risk analysis any exclusion or detriment to them based on their identity. We will take reasonable steps to avoid or minimise any exclusion or detriment and will communicate this transparently. We will ensure that any AI-created content respects the dignity of individuals and represents them in the way they would wish to be represented, including being accurately depicted.

9.2. Accessible AI

We will make our AI systems and content as accessible as possible. Insofar as reasonably possible, we will use accurate, fair, and representative data sets to ensure these are inclusive.  We will ensure that any AI decisions are understandable and interpretable by stakeholders. This could involve documenting the logic behind AI decisions, providing clear explanations, and making sure that the reasoning is accessible to non-technical users.

9.3. Identification of Bias

All reasonable efforts will be made to identify any bias within an AI system we use, to ensure any bias has either been eradicated or mitigated to the point where it is within an acceptable level of risk. We are open and transparent about any bias within an AI system (that we are aware of) and how we manage this.

9.4. Labelling of AI

Where AI is used to create content, there are appropriate checks and safeguards in place to ensure:

  • We are open and transparent that the content has been created by AI.
  • AI-created content is either self-evident or clearly identified as such.
  • AI-created content is not used for purposes where the use of AI has specifically not been permitted (a simple illustrative check follows this list).
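
As a simple, hedged illustration of the labelling and permitted-use checks above (not a prescribed mechanism), AI-generated content could carry an explicit provenance label that is verified before publication; the class and field names below are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class LabelledContent:
        """AI-generated content together with an explicit provenance label (illustrative)."""
        body: str
        ai_generated: bool
        model_used: str
        disclosure: str = "This content was created with the assistance of AI."

    def publish(content: LabelledContent, ai_permitted: bool) -> str:
        """Refuse publication where AI use is not permitted; otherwise append the disclosure."""
        if content.ai_generated and not ai_permitted:
            raise ValueError("AI-generated content is not permitted for this purpose.")
        return f"{content.body}\n\n{content.disclosure}" if content.ai_generated else content.body

    draft = LabelledContent(body="Quarterly summary ...", ai_generated=True, model_used="internal-llm")
    print(publish(draft, ai_permitted=True))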

9.5. Content Moderation

There is appropriate content moderation by humans to minimise the potential for errors, bias, defamatory content and similar issues.

10. AI Environmental Considerations

We are aware of the environmental impact of AI due to its very high energy consumption.  We will take this into account when considering our environmental impact and seek to make use of any emerging technologies that will help to minimise or mitigate this.

11. AI Legal Compliance

We will take all reasonable steps to identify copyrighted material. For any such material we use, we will ensure that we have the copyright holder’s agreement, or that our use falls within ‘fair use’ or another exception to copyright, is covered by the Open Government Licence (OGL), or falls within some other free-use category.

We will not knowingly use any online material, such as from social media accounts or online galleries, which has been marked as ‘NoAI’, ‘NoImageAI’, or similar.
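
As a hedged sketch of how the ‘NoAI’ commitment above might be checked in practice, the snippet below scans content metadata for opt-out markers. The metadata field names and marker handling are assumptions for illustration; real markers may appear in image metadata, platform settings or similar places.

    # Markers used by some creators to opt content out of AI use (non-exhaustive).
    NO_AI_MARKERS = {"noai", "noimageai"}

    def is_marked_no_ai(metadata: dict) -> bool:
        """Return True if any metadata value contains a 'NoAI'-style marker (illustrative check)."""
        for value in metadata.values():
            text = str(value).lower().replace("-", "").replace("_", "")
            if any(marker in text for marker in NO_AI_MARKERS):
                return True
        return False

    print(is_marked_no_ai({"keywords": "landscape, NoAI", "author": "J. Doe"}))  # True
    print(is_marked_no_ai({"keywords": "landscape", "author": "J. Doe"}))        # False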

We will take all reasonable steps to ensure that our use of AI does not have a negative impact on the legal rights and/or liberties of individuals or groups and complies with the Data Protection Act.

In particular, we will ensure that for any AI use of our data, the data is clean, complete and compliant, and that we have appropriate consent, with particular attention to safeguarding sensitive personal information.

12. Enforcement

12.1. Employees

Violations of this policy may result in disciplinary action, up to and including termination of employment, in accordance with ethicAil’s disciplinary policies and procedures.

12.2. Suppliers and Stakeholders

We may seek to amend contracts, up to and including termination, where our suppliers do not meet the required standards set out in this policy.

13. Policy Review

This policy will be reviewed annually or as needed, based on the evolution of AI technology and the regulatory landscape. Any changes to the policy will be communicated to all employees.

Policy Version Control

Version 2 – Updated 1st of February 2024
