Updated: 2024-10-03, version: 1.2
After many years of following the development of generative AI, we decided at the beginning of 2023 to incorporate AI into our operations: to actively follow its development, to deepen our knowledge, and to truly start using AI in our business.
At Digitalist Open Tech, we use AI as a working tool for our everyday job-related tasks. We also build specific AI services for internal use as well as for our clients.
We know that how we build and use AI services affects us as employees, our clients, and, in the long run, society as a whole.
The purpose of this AI policy is to ensure that Digitalist uses artificial intelligence (AI) tools responsibly, transparently, and in a manner that respects individual privacy, fosters trust, and promotes fairness. This policy applies to all employees and subcontractors who interact with AI tools on behalf of Digitalist.
All employees and subcontractors that we work with are responsible for adhering to this policy and ensuring that no prohibited data is shared with AI tools.
This policy covers any AI tool used within Digitalist and any data processed or shared by these tools, regardless of its format or storage location. This includes both company-approved (trusted) AI tools and any other (untrusted) AI services.
AI tools: Any software, application, or system that utilizes artificial intelligence or machine learning algorithms to perform tasks that typically require human intelligence. This includes natural language processing tools, image recognition systems, chatbots, predictive analytics platforms and other related technologies.
Trusted AI services: AI tools and services that have been vetted, approved, and authorized by Digitalist for use with company data. These services comply with our security, privacy, and compliance requirements.
Untrusted AI services: AI tools and services that have not been approved by Digitalist. These may include public AI platforms or third-party services without proper security assessments.
Personal data: Any information relating to an identified or identifiable natural person, as defined by applicable data protection laws such as GDPR.
Sensitive personal data: Specific categories of personal data that are subject to additional protections under law, including data revealing racial or ethnic origin, political opinions, religious beliefs, genetic data, biometric data, health information, or data concerning a person’s sex life or sexual orientation.
Digitalist is committed to adhering to the following ethical guiding principles when using and developing AI tools:
We will communicate clearly and openly about our use of AI tools and their purposes, capabilities, and limitations. We do this internally, in conversations with our clients as well as on our digital channels.
We will protect the privacy of individuals by adhering to EU data protection laws and regulations, and by implementing robust security measures to safeguard personal and sensitive information.
As far as we possibly can, we will strive to prevent unfair bias and discrimination in AI systems and ensure that they treat all individuals equitably. We are aware that this is not always achievable.
We will hold ourselves responsible for the outcomes of AI tools used by Digitalist and establish processes to monitor, assess, and improve their performance. We have internal processes for managing incidents when an employee violates Digitalist’s policies.
We will prioritize human well-being and ensure that AI tools are designed to augment human capabilities, rather than replace them.
We are also aware that the efficiency gains that can follow from using AI tools may cause fear among our own staff and our clients' staff for their jobs. We will handle this with great humility, consideration and empathy.
<aside> ☝🏽 E.g., if we want support in writing meeting notes, replace the names with “[Client name]” or “[Person 1]” before entering the text into the prompt.
</aside>
In general terms: the data in our Personal Data Processing Register must never be shared.
Information that can be used to identify an individual or a company, such as names, addresses, phone numbers, email addresses, social security numbers, and passport numbers.
Data about a person's race, ethnicity, religion, political affiliations, sexual orientation, medical history, and financial information.
Confidential or personal communication between persons, clients, and authorities, i.e. our stakeholders.
Sharing copyrighted material without proper authorization is illegal and could lead to copyright infringement.
Information collected without the explicit consent of the individuals involved, such as photos or videos taken without permission or data obtained through unauthorized surveillance.
Read more in the Ethical guiding principles chapter of this policy.
Information that could be used to harass, blackmail, or target individuals or groups, such as revenge porn or doxxing material.
You may share prohibited data with AI tools only if all the following conditions are met:
Important: Even when these conditions are met, you must consult the Chief AI Officer or Data Protection Officer before proceeding.
Employees may share the following types of data with AI tools, provided they are using trusted AI services and adhere to this policy:
Anonymized data: Data that has been processed to remove or obscure personal identifiers, making it impossible to identify individuals.
Aggregate data: Statistical or summary data that does not contain personal identifiers.
Publicly available information: Information that is already in the public domain and does not violate any confidentiality agreements.
Non-sensitive business data: General business information that is not confidential or proprietary.
Before sharing data with AI tools, employees must ensure proper anonymization:
Remove personal identifiers: Replace names and other identifiers with placeholders (e.g., “[Client Name]”, “[Person 1]”).
Generalization: Use broader categories instead of specific details (e.g., “a European company” instead of naming the company).
Data minimization: Share only the minimum amount of data necessary for the task.
Risk assessment: Evaluate the risk of re-identification and take steps to mitigate it.
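The anonymization steps above can be sketched as a simple pre-processing helper. This is a minimal illustration, not a vetted anonymizer: the function name, the placeholder labels, and the regex patterns are assumptions for the example, and a real tool would need a much broader pattern set plus a re-identification risk review.

```python
import re

def anonymize(text, names):
    """Replace known names and obvious identifiers with placeholders.

    `names` maps real identifiers to placeholders,
    e.g. {"Acme AB": "[Client Name]"}.
    """
    # Replace known names first (client names, person names, etc.)
    for real, placeholder in names.items():
        text = text.replace(real, placeholder)
    # Mask email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[Email]", text)
    # Mask phone-number-like digit sequences (a rough heuristic)
    text = re.sub(r"\+?\d[\d\s-]{7,}\d", "[Phone]", text)
    return text

notes = "Call Anna Svensson (anna@example.com, +46 70 123 45 67) about Acme AB."
print(anonymize(notes, {"Anna Svensson": "[Person 1]", "Acme AB": "[Client Name]"}))
# → Call [Person 1] ([Email], [Phone]) about [Client Name].
```

Even after such a pass, a person may still be identifiable from context (role, dates, project details), which is why the risk-assessment step above remains mandatory before anything is sent to a prompt.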
Trusted AI services are those vetted and approved by Digitalist for use with company data, including sensitive and personal data under specific conditions.
List of approved tools: Employees can access the list of trusted AI tools on the company intranet.
Compliance: Trusted services comply with relevant laws and have appropriate data protection measures in place.
Data Protection Agreements: DPAs are established with all trusted AI service providers.
Untrusted AI services are those not approved by Digitalist. These may include public AI platforms or third-party services lacking proper security assessments.
Prohibition of sensitive data use: Employees must not share any sensitive, confidential, or personal data with untrusted AI services under any circumstances.
Limited usage: Untrusted services may only be used for non-sensitive tasks using publicly available, non-confidential data.
Request submission: Employees wishing to use a new AI tool must submit a request to the IT department or Chief AI Officer.
Evaluation criteria: The AI tool will be evaluated based on security measures, compliance with regulations, data protection standards, and alignment with our ethical principles.
Approval notification: Approved tools will be added to the list of trusted AI services, and employees will be informed accordingly.
We choose (or develop) AI tools that are aligned with our guiding principles.
Provide regular training to employees on ethical AI use, data privacy, and the responsible management of AI tools.
We will continuously discuss these matters at staff meetings and in other conversations.
All of us must ensure that the data we share, which may be used to train and operate AI tools, is accurate, representative, and free from biases that could lead to unfair or discriminatory outcomes.
Keep this top of mind when entering text into an AI tool's prompt.
Incorporate data protection and privacy measures throughout the entire lifecycle of AI tools, from design to deployment and maintenance.
This means: when we choose an AI tool we consider data privacy and data integrity.
Continuously monitor AI tools to assess their performance, identify potential biases or ethical concerns, and make necessary improvements. We discuss this at our weekly staff meetings.
We aim to engage with stakeholders, including customers, partners, employees, regulators, and society at large, to gather feedback and foster trust in our use of AI tools. We do this by reaching out to potential partners, by hosting meetups and community workshops, and of course in our conversations with our clients.
Employees and subcontractors should report any suspected ethical violations or concerns related to the use of AI tools to their manager, to HR, or via the Security Incident service on the intranet. Digitalist will investigate all reports and take appropriate corrective action.