Digital ethics

People, machines, data, and processes are becoming increasingly interlinked as technological advances transform our society and pose new ethical challenges. Our digital ethics activities define how we handle data and algorithms responsibly.

Our approach to corporate digital responsibility

Because we aim to develop and use new digital technologies responsibly, we evaluate ethical issues that may arise from algorithms, artificial intelligence (AI), and data-based business models at an early stage. Since 2021, our Digital Ethics Advisory Panel (DEAP) has been focusing on complex ethical issues surrounding digital technologies.

Roles and responsibilities

One of the main tasks of the DEAP is to support us in developing digital applications responsibly while addressing ethical questions that may arise from collecting and processing data as well as from the use of these innovative technologies. It issues recommendations for our entrepreneurial activities.

The panel comprises external international science and industry experts from the fields of digital ethics, law, Big Data technologies, digital health, medicine, and data governance. In addition, we involve bioethics experts as well as representatives from patient organizations as needed. The DEAP has its mandate from the Executive Board; our employees may submit topics for the panel to discuss. As in the previous year, the panel held four meetings in 2023. These focused on issues concerning the use of generative AI. Summary minutes of the DEAP meetings have been accessible on our intranet since 2023 insofar as they do not contain any business secrets. They also document the recommendations issued.

Our commitment: Guidelines and standards

As a company, we want to position ourselves in the digital ethics sphere. We are therefore developing clear ethical standards in this new field, primarily for critical areas such as the handling of health data. In this effort, we collaborate with various stakeholders and experts.

Together with the DEAP, we apply our Code of Digital Ethics (CoDE) to address questions pertaining to the ethical use of data and algorithms. The CoDE serves as a guideline for our digital business models, as a tool for analyzing ethical challenges, and as a basis for practical DEAP recommendations. As one of our overarching governance documents, it applies to all employees and is also publicly accessible.

The CoDE is based on five core principles: justice, autonomy, beneficence, non-maleficence, and transparency. These principles provide a clear structure for assessing ethical issues. Moreover, they support our business sectors as well as individual employees in difficult situations for which laws or other types of regulations do not (yet) exist.

The CoDE not only helps us to assess the ethical risks posed by existing activities, but also enables us to evaluate the ethical aspects of newly emerging digital solutions. To this end, we apply a principle-at-risk analysis (PaRA), which is based on the CoDE. We use the PaRA to examine ethical issues resulting from our business as well as from the development of internal applications and new products. The DEAP uses the results of the PaRA as a basis for discussion. The PaRA method was described in the scientific journal Minds and Machines in 2023.
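
The report references the published PaRA method but does not detail its mechanics here. Purely as an illustration of the general idea, the following sketch flags which of the five CoDE principles might be "at risk" for a given project; the screening questions and their mapping to principles are hypothetical examples, not the actual method described in Minds and Machines.

```python
# Illustrative sketch only: the actual PaRA method is described in
# Minds and Machines (2023). The screening questions and their mapping
# to CoDE principles below are hypothetical.

# The five core principles of the CoDE
PRINCIPLES = ["justice", "autonomy", "beneficence", "non-maleficence", "transparency"]

# Hypothetical screening questions, each mapped to the principle it may put at risk
SCREENING_QUESTIONS = {
    "Could the application treat groups of people unequally?": "justice",
    "Does it restrict users' ability to make their own choices?": "autonomy",
    "Is the intended benefit to users unclear or unproven?": "beneficence",
    "Could erroneous outputs cause harm?": "non-maleficence",
    "Is it hard for affected people to understand how results are produced?": "transparency",
}


def principles_at_risk(answers: dict) -> list:
    """Return the CoDE principles flagged as 'at risk' for a project.

    `answers` maps each screening question to True if the concern applies.
    """
    flagged = {SCREENING_QUESTIONS[q] for q, applies in answers.items() if applies}
    # Preserve the canonical ordering of the five principles
    return [p for p in PRINCIPLES if p in flagged]


if __name__ == "__main__":
    answers = {question: False for question in SCREENING_QUESTIONS}
    answers["Could the application treat groups of people unequally?"] = True
    answers["Is it hard for affected people to understand how results are produced?"] = True
    print(principles_at_risk(answers))  # ['justice', 'transparency']
```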

Developments in the field of generative AI, such as ChatGPT, are growing in importance. All our business sectors are developing applications based on generative AI. To apply these innovative technologies responsibly and to the benefit of all, we are currently developing an ethical framework, which the DEAP is evaluating in detail. The aim is to roll out this framework company-wide in 2024.

Training on the ethical use of data and algorithms

In June 2023, online training on the CoDE was assigned to approximately 12,000 managers with personnel responsibility who can access the training in eight languages via our internal training platform. In addition, an advanced training course is available specifically for employees working in the fields of data science, AI and other digital areas of specialization. The course serves to illustrate the importance of the CoDE and empowers participants to make responsible decisions concerning the ethical aspects of data use and algorithms in digital products and business models. This course is also available to external stakeholders via our publicly accessible website.

Identifying risks

Since 2022, we have been examining potential ethical risks that could result from projects of the Analytics Center of Excellence (ACE) in our Life Science business sector, with the aim of developing suitable processes to address them. The unit analyzes data from the business sector in order to obtain insights for our business.

In this connection, in June 2023 we launched a tool developed in-house for the early identification of ethical risks in ACE project management activities. The Group's digital ethics check is a semi-automated analysis mechanism: it uses existing project features to calculate ethical risks and proposes potential measures to mitigate them. The basis for this is a scoring system that produces a risk assessment for each project. Depending on the risk score, the ACE unit can draw conclusions for product development. In doing so, it covers all key stages in the product life cycle and examines them for ethical risks. As of January 2024, every new project in the Life Science business sector is to be analyzed in accordance with this scoring system. The aim is to expand the Group's digital ethics check to projects across the entire company.
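
The report does not disclose how the scoring system works internally. The following is a minimal, hypothetical sketch of a semi-automated risk scoring of the kind described: the project features, weights, thresholds, and mitigation texts are invented for illustration and are not the Group's actual digital ethics check.

```python
# Illustrative sketch only: feature names, weights, thresholds, and
# mitigation measures are hypothetical, not the Group's actual check.

from dataclasses import dataclass


@dataclass
class Project:
    """Hypothetical project features a semi-automated check might inspect."""
    name: str
    uses_personal_data: bool
    uses_health_data: bool
    automated_decision_making: bool
    model_is_explainable: bool
    shares_data_with_third_parties: bool


# Hypothetical weights: higher weight = greater contribution to ethical risk
RISK_WEIGHTS = {
    "uses_personal_data": 2,
    "uses_health_data": 3,
    "automated_decision_making": 3,
    "shares_data_with_third_parties": 2,
}

# Hypothetical mitigation proposals per risk-bearing feature
MITIGATIONS = {
    "uses_personal_data": "Check data minimization and the legal basis for processing.",
    "uses_health_data": "Involve privacy and bioethics experts early on.",
    "automated_decision_making": "Add human review of model-driven decisions.",
    "shares_data_with_third_parties": "Review data-sharing agreements and anonymization.",
}


def assess(project: Project) -> dict:
    """Compute an additive risk score and collect proposed mitigation measures."""
    score = 0
    measures = []
    for feature, weight in RISK_WEIGHTS.items():
        if getattr(project, feature):
            score += weight
            measures.append(MITIGATIONS[feature])
    # Opaque models add risk only when they drive automated decisions
    if project.automated_decision_making and not project.model_is_explainable:
        score += 2
        measures.append("Document model limitations and explainability gaps.")
    level = "low" if score <= 2 else "medium" if score <= 5 else "high"
    return {"project": project.name, "score": score, "level": level, "measures": measures}


if __name__ == "__main__":
    demo = Project(
        name="Demand forecasting pilot",
        uses_personal_data=True,
        uses_health_data=False,
        automated_decision_making=True,
        model_is_explainable=False,
        shares_data_with_third_parties=False,
    )
    print(assess(demo))  # score 7 -> 'high', with three proposed measures
```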

Glossary

Big Data: Large data sets that may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions.

Generative AI: AI that is typically built using special computer models and has capabilities that earlier AI did not have, such as the ability to generate content.
