
AI Ethical Policy Statement

COMMITMENT POLICY

At Xcapit we are committed to the responsible use of Artificial Intelligence (AI), the development of AI systems and the implementation of responsible AI solutions that accelerate innovation, improve efficiency and contribute to sustainable growth. We believe this supports the fundamental data compliance and ethics objectives set out in our Data Compliance and Ethics Policy Statement: preserving digital trust, reliable data-driven decision making and sustainable data ecosystems.

This policy sets out our core principles and operating standards for the ethical use of AI and the design and implementation of AI systems. When we refer to "AI" in this policy, we rely on the definition of "AI system" used by the Organisation for Economic Co-operation and Development (OECD): a machine-based system that, for explicit or implicit goals, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptability after implementation.

This policy provides a framework for compliance and effective risk management under artificial intelligence, privacy, data protection, data compliance, security, intellectual property and other global data and information laws, rules, regulations, external frameworks and standards, across all categories and types of data.

This policy sets out the basis for the use of AI systems at Xcapit, including the use of AI systems to perform, or assist in the performance of, any work-related activity. It applies to the interaction with, development of and implementation of AI systems within Xcapit, and encompasses all technologies that are, or are based on, AI systems.

Where an applicable law, rule, regulation, contractual obligation or other Xcapit policy requires a higher standard, we will follow the requirements of that law, rule, regulation, contract or Xcapit policy.

OPERATING PRINCIPLES AND STANDARDS

The following Operating Principles and Standards guide how we work to meet our Commitment Policy.

Human-centric values and principles: We ensure that our AI systems and use cases are designed, implemented, enhanced and retired in a way that respects our Core Values, our supporting Ethical Principles, human rights, privacy and data protection, non-discrimination, diversity, equity and inclusion.

Transparency and explainability: We are committed to transparent and meaningful disclosure about our AI systems in our solutions, processes and communications consistent with our Privacy and Personal Data Protection Policy and our Information Security Policy.

Fairness and non-discrimination: We assess fairness in our AI systems and seek to avoid systematic errors that have discriminatory consequences for individuals and groups, including discrimination throughout the AI system implementation process.

Safety: We employ responsible design, development, deployment and communication to users to mitigate potential harm from our AI systems.

Quality, robustness, accuracy and traceability: We thoroughly review the quality, robustness, accuracy and traceability of the inputs and outputs generated by the AI systems we use.

Risk management: We recognize that risk management is an essential component of ethical AI, and we address it by tailoring our data compliance, ethics, privacy and technology risk assessment methodologies to the unique risks posed by AI systems.

Privacy and confidentiality: We manage the privacy and confidentiality risks associated with AI systems by following our established standards set out in our Privacy and Personal Data Protection Policy.

Engagement and competence: We will actively engage with stakeholders, including our team members and end users, to solicit feedback and address concerns about our AI systems and our use of AI, including concerns about decisions or outcomes that are inconsistent with the principles and operating standards in this policy.