Updated: 2024-10-03, version: 1.2

Introduction

After many years of following the development of generative AI, we decided at the beginning of 2023 to incorporate AI into our operations: to actively track its progress, to learn more, and to put AI to genuine use in our business.

Context

At Digitalist Open Tech, we use AI as a working tool for our everyday job-related tasks. We also build specific AI services for internal use as well as for our clients.

We know that how we build and use AI services affects us as employees, our clients, and, in the long run, society as a whole.

Purpose

The purpose of this AI policy is to ensure that Digitalist uses artificial intelligence (AI) tools responsibly, transparently, and in a manner that respects individual privacy, fosters trust, and promotes fairness. This policy applies to all employees and subcontractors who interact with AI tools on behalf of Digitalist.

Target group

All employees and subcontractors that we work with are responsible for adhering to this policy and ensuring that no prohibited data is shared with AI tools.

Scope

This policy covers any AI tool used within Digitalist and any data processed or shared by these tools, regardless of its format or storage location. This includes both company-approved (trusted) AI tools and any other (untrusted) AI services.

Definitions

AI tools: Any software, application, or system that utilizes artificial intelligence or machine learning algorithms to perform tasks that typically require human intelligence. This includes natural language processing tools, image recognition systems, chatbots, predictive analytics platforms, and other related technologies.

Trusted AI services: AI tools and services that have been vetted, approved, and authorized by Digitalist for use with company data. These services comply with our security, privacy, and compliance requirements.

Untrusted AI services: AI tools and services that have not been approved by Digitalist. These may include public AI platforms or third-party services without proper security assessments.

Personal data: Any information relating to an identified or identifiable natural person, as defined by applicable data protection laws such as GDPR.

Sensitive personal data: Specific categories of personal data that are subject to additional protections under law, including data revealing racial or ethnic origin, political opinions, religious beliefs, genetic data, biometric data, health information, or data concerning a person’s sex life or sexual orientation.

Ethical guiding principles

Digitalist is committed to adhering to the following ethical guiding principles when using and developing AI tools:

Transparency

We will communicate clearly and openly about our use of AI tools and their purposes, capabilities, and limitations. We do this internally, in conversations with our clients, and on our digital channels.

Privacy and data protection

We will protect the privacy of individuals by adhering to EU data protection laws and regulations, and by implementing robust security measures to safeguard personal and sensitive information.

Bias - fairness and non-discrimination

We will strive, as far as we possibly can, to prevent unfair bias and discrimination in AI systems and to ensure that they treat all individuals equitably. We are aware that this goal is not always achieved.

Accountability

We will hold ourselves responsible for the outcomes of AI tools used by Digitalist and establish processes to monitor, assess, and improve their performance. We have internal processes for managing incidents when an employee violates Digitalist’s policies.

Human-centric approach

We will prioritize human well-being and ensure that AI tools are designed to augment human capabilities, rather than replace them.

We are also aware that the efficiency gains that can follow from using AI tools may raise fears about our own and our clients' jobs. We will handle this with great humility, consideration, and empathy.

Data that must never be shared with AI tools

Note: For example, if we want support in writing meeting notes, replace the names with "[Client name]" or "[Person 1]" before entering the text into the prompt.

In general terms: the data in our Personal Data Processing Register must never be shared.

In more detail - what must never be shared

Personally identifiable information (PII)

Information that can be used to identify an individual, such as names, addresses, phone numbers, email addresses, social security numbers, and passport numbers, as well as company names.

Sensitive personal information

Data about a person's race, ethnicity, religion, political affiliations, sexual orientation, medical history, and financial information.

Confidential communication

Confidential or personal communication between persons, clients, and authorities, i.e., our stakeholders.

Copyrighted material

Sharing copyrighted material without proper authorization is illegal and could lead to copyright infringement.

Non-consensual data

Information collected without the explicit consent of the individuals involved, such as photos or videos taken without permission or data obtained through unauthorized surveillance.

Data violating ethical guidelines

See the chapter Ethical guiding principles in this policy.

Data that could be used to harm others

Information that could be used to harass, blackmail, or target individuals or groups, such as revenge porn or doxxing material.

Exceptions for trusted AI services

You may share prohibited data with AI tools only if all the following conditions are met:

  1. Explicit approval: Senior management has authorized and documented the sharing.
  2. Trusted AI tool: The AI tool is approved by Digitalist to handle certain sensitive data. This includes open or private AI models hosted on our own cloud infrastructure, secured to prevent data leakage.
  3. Legal compliance: The sharing follows all applicable laws and regulations.
  4. Data protection: Proper safeguards like encryption and access controls are in place.
  5. Data Processing Agreement: A formal agreement exists with the AI provider to protect the data.
  6. Business necessity: Sharing the data serves a legitimate business purpose and aligns with our ethical principles.
  7. Informed consent: If required, you have obtained consent from the individuals whose data is involved.

Important: Even when these conditions are met, you must consult the Chief AI Officer or Data Protection Officer before proceeding.

Acceptable data for sharing

Employees may share the following types of data with AI tools, provided they are using trusted AI services and adhere to this policy:

Anonymized data: Data that has been processed to remove or obscure personal identifiers, making it impossible to identify individuals.

Aggregate data: Statistical or summary data that does not contain personal identifiers.

Publicly available information: Information that is already in the public domain and does not violate any confidentiality agreements.

Non-sensitive business data: General business information that is not confidential or proprietary.

Data anonymization techniques

Before sharing data with AI tools, employees must ensure proper anonymization (a brief illustrative sketch follows these techniques):

Remove personal identifiers: Replace names and other identifiers with placeholders (e.g., “[Client Name]”, “[Person 1]”).

Generalization: Use broader categories instead of specific details (e.g., “a European company” instead of naming the company).

Data minimization: Share only the minimum amount of data necessary for the task.

Risk assessment: Evaluate the risk of re-identification and take steps to mitigate it.
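
As a concrete illustration of the first two techniques, the minimal Python sketch below replaces personal identifiers with placeholders before text is pasted into a prompt. The patterns, placeholder labels, and the client name "Acme Corp" are illustrative assumptions only; this is not a vetted PII detector, and the output must still be checked against the Personal Data Processing Register.

```python
# Minimal sketch of placeholder-based anonymization before prompting an AI tool.
# All patterns, names, and labels below are illustrative assumptions, not a
# vetted PII detector. Always review the output before sharing it.
import re

# Hypothetical patterns mapped to placeholders; extend to match the data at hand.
PATTERNS = {
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[Email]",       # email addresses
    r"\+?\b\d[\d \-]{7,}\d\b": "[Phone number]",     # phone numbers
    r"\bAcme Corp\b": "[Client name]",               # known client names
}

def anonymize(text: str) -> str:
    """Replace personal identifiers with placeholders before sharing."""
    for pattern, placeholder in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

notes = "Meeting with Acme Corp. Contact: jane.doe@acme.com, +46 70 123 4567."
print(anonymize(notes))
# -> Meeting with [Client name]. Contact: [Email], [Phone number].
```

Simple pattern substitution like this only catches the obvious identifiers; the risk assessment step above still applies, since combinations of remaining details may allow re-identification.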

Differentiation between trusted and untrusted AI services

Trusted AI services

Trusted AI services are those vetted and approved by Digitalist for use with company data, including sensitive and personal data under specific conditions.

List of approved tools: Employees can access the list of trusted AI tools on the company intranet.

Compliance: Trusted services comply with relevant laws and have appropriate data protection measures in place.

Data Processing Agreements: DPAs are established with all trusted AI service providers.

Untrusted AI services

Untrusted AI services are those not approved by Digitalist. These may include public AI platforms or third-party services lacking proper security assessments.

Prohibition of sensitive data use: Employees must not share any sensitive, confidential, or personal data with untrusted AI services under any circumstances.

Limited usage: Untrusted services may only be used for non-sensitive tasks using publicly available, non-confidential data.

Approval process for AI tools

Request submission: Employees wishing to use a new AI tool must submit a request to the IT department or Chief AI Officer.

Evaluation criteria: The AI tool will be evaluated based on security measures, compliance with regulations, data protection standards, and alignment with our ethical principles.

Approval notification: Approved tools will be added to the list of trusted AI services, and employees will be informed accordingly.

Implementation guidelines

AI tool selection and development

We choose (or develop) AI tools that are aligned with our guiding principles.

Employee training and awareness

Provide regular training to employees on ethical AI use, data privacy, and the responsible management of AI tools.

We will continuously discuss these matters at staff meetings and in other conversations.

Data quality and bias mitigation

All of us must ensure that the data we share, and which may hence be used to train and operate AI tools, is accurate, representative, and free from biases that could lead to unfair or discriminatory outcomes.

This must always be top of mind when entering text into an AI tool's prompt.

Privacy by design

Incorporate data protection and privacy measures throughout the entire lifecycle of AI tools, from design to deployment and maintenance.

In practice, this means that when we choose an AI tool, we consider data privacy and data integrity.

Ongoing monitoring and evaluation

Continuously monitor AI tools to assess their performance, identify potential biases or ethical concerns, and make necessary improvements. We discuss this at our weekly staff meetings.

Collaboration with stakeholders

We aim to engage with stakeholders, including customers, partners, employees, regulators, and society at large, to gather feedback and foster trust in our use of AI tools. We do this by reaching out to potential partners, by hosting meetups and community workshops, and, of course, in our conversations with our clients.

Reporting and escalation

Employees and subcontractors should report any suspected ethical violations or concerns related to the use of AI tools to their manager, to HR, or via the Security Incident service on the intranet. Digitalist will investigate all reports and take appropriate corrective action.