Responsible use of artificial intelligence
Artificial intelligence (AI) is programmed by humans. It therefore reflects human thought and behavioral patterns, assumptions and cultural influences – but it is still a machine. To achieve a responsible co-existence between human and machine, we should address both the “why” and the “how” of any collaboration. We create trust by aligning algorithms with ethical principles.
Therefore, when developing automated decisions based on artificial intelligence or statistical methods, the following guidelines are important to us. Our top priority is to develop AI applications for people – not against them. In doing so, we are guided by regulatory frameworks, prevailing legislation and ethical principles – above all the Otto Group’s Code of Ethics and our one.O corporate values.
1. People take precedence over AI
Human action takes precedence over AI – AI supports our decision-making. We monitor the performance of our applications and can intervene at any point. We also regularly validate AI models in our operational activities. We visualize results so that we can quickly identify any irregularities. We adopt this approach above all in cases where far-reaching consequences and a loss of trust are possible – in particular in the development of bots and the use of large language models. The relevant department is responsible for assessing how much autonomy we grant to AI.
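The following minimal Python sketch illustrates, in simplified form, what such monitoring can look like; the quality metric, the tolerance value and the example figures are hypothetical and not taken from our production systems.

```python
# Purely illustrative monitoring sketch: compare a recent quality metric
# against a baseline and flag irregularities for human review. The metric,
# the tolerance value and the example figures are assumptions.
from statistics import mean

def shows_irregularity(baseline: list[float], recent: list[float],
                       tolerance: float = 0.05) -> bool:
    """Return True if recent performance drops noticeably below the baseline."""
    return mean(baseline) - mean(recent) > tolerance

# Example of a regular validation run that escalates to a human:
if shows_irregularity([0.92, 0.93, 0.91], [0.85, 0.84, 0.86]):
    print("Irregularity detected: route to human review")
```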
2. Trust through transparency
Transparency towards users
An AI application is only successful if consumers recognize its added value and if it generates trust. We therefore make it clear to users when they are interacting with artificial intelligence. We communicate openly about the possibilities and the limitations of our AI. It is important to us to make the functionality of the AI understandable and, where possible, to also explain the results it produces. We critically examine whether an objective can also be achieved, without a significant loss in quality, with an algorithmic system that is less complex and easier to follow.
Customer trust
The development of an AI application is motivated by the added value it brings to our customers and is validated and continuously improved using Friendly User Tests and Business User Tests. Implementation as a user application initially takes place in small steps and is expanded after careful testing. We are convinced that transparency is the key to building trust in generative AI. That is why we communicate openly about the capabilities and limitations of our tools, such as the voice bot, in accordance with the legal requirements of the EU AI Act. Customers should know exactly what to expect. If a request is outside the current scope of functionality, this is communicated. In addition, we strengthen trust through continuous monitoring and analysis of the performance of our AI applications. This enables us to identify potential for improvement at an early stage and continuously optimize the customer experience.
3. Non-discrimination, diversity and fairness
Critical review of results
Diversity is important to us. Our goal is to develop and use models that make fair decisions and do not discriminate. The assumptions and data on which AI is based should therefore be as representative as possible. However, we are aware that automated decision-making can also lead to discrimination, as different fairness criteria – which must be defined when programming the algorithm – sometimes compete with each other. Discriminatory biases can therefore not always be ruled out entirely, even in AI-based decision-making systems, including those that use generative AI. We take this into account when developing and programming our models and critically review the results in this regard.
Principle of reversibility
We apply the principle of reversibility: the results of AI are reversible in principle, so that decisions can be undone through human intervention. The use of (generative) AI enables us to strengthen participation and actively reduce discrimination in line with the Accessibility Enhancement Act.
Review of external AI solutions
Before we use external generative AI solutions internally, we carefully review them based on various criteria such as data protection, political neutrality, non-discrimination and EU directives. We keep our principles in mind and adapt external solutions as necessary to ensure they meet our standards.
4. Sustainability
We only develop an AI application where it is appropriate. If a simpler method works better, we use it and draw our customers’ attention to this. We also apply the principle of data minimization: we specifically select only the data our models really need and streamline our models where possible. This serves data protection, shorter runtimes and lower resource consumption. We are aware of the potential environmental impact of increased CO2 emissions from AI and carefully weigh resource consumption against the benefits.
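As a simplified illustration of what data minimization can mean in practice, the following sketch keeps only the fields a model actually needs; the field names are hypothetical examples, not our actual data schema.

```python
# Purely illustrative data-minimization sketch: keep only the fields a model
# really needs and drop direct identifiers before any processing. The field
# names are hypothetical examples.
REQUIRED_FIELDS = {"order_value", "product_category", "return_flag"}

def minimize(record: dict) -> dict:
    """Reduce a raw record to the fields required for the model."""
    return {key: value for key, value in record.items() if key in REQUIRED_FIELDS}

raw_record = {
    "customer_name": "Jane Doe",         # not needed for the model, dropped
    "email": "jane.doe@example.com",      # not needed for the model, dropped
    "order_value": 59.90,
    "product_category": "shoes",
    "return_flag": False,
}
print(minimize(raw_record))  # only the three required fields remain
```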
5. Secure and robust against manipulation
We are aware that AI applications can be deliberately deceived or manipulated. We therefore take current security standards into account for productive applications and focus on protecting the data and the decision-making basis against intentional and unintentional manipulation. To this end, we consciously invest time in the necessary application-specific research – especially in cases where data is imported from external sources. Our goal: AI that serves people and does not harm them. This protection is particularly important in the field of generative AI, which is why we are working, among other things, on implementing the “Second Instance” to create an additional layer of security. In this context, we are also aware of the phenomenon of “hallucinating” generative AI and actively counteract it in the software development of our tools by testing them using various approaches.
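The following simplified sketch illustrates the idea behind such a second check; it is not our actual implementation, and the overlap-based grounding test, the threshold and the handover message are assumptions chosen purely for illustration.

```python
# Purely illustrative sketch of a "second instance" check: a generated answer
# is only released if an independent check finds sufficient support for each
# sentence in the retrieved source texts; otherwise the request is handed
# over to a human. Not our actual implementation.
def is_supported(sentence: str, sources: list[str], min_overlap: float = 0.6) -> bool:
    """Crude grounding check based on word overlap with the source texts."""
    words = set(sentence.lower().split())
    if not words:
        return True
    return any(
        len(words & set(source.lower().split())) / len(words) >= min_overlap
        for source in sources
    )

def release_or_escalate(answer: str, sources: list[str]) -> str:
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    if all(is_supported(s, sources) for s in sentences):
        return answer
    return "Handing the request over to a human colleague."  # instead of risking a hallucinated reply
```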
6. Data protection and data management
Responsible data management and secure data storage are not just obligations for us as a member of the Otto Group but are also a mark of trust. We recognize just how important it is to protect personal and sensitive data. We communicate as transparently as possible which data is being used by whom for what purpose.
7. Responsibility, liability and accountability
We clearly define who is responsible for which system and which function in our automated applications and document this. Technical responsibility lies with the relevant department and is exercised in accordance with the principles of human-in-command (HIC), human-in-the-loop (HITL) and human-on-the-loop (HOTL), depending on the scope of the decision. HIC and HITL are used for more critical applications, while HOTL is used for less critical applications such as recommendations. The persons responsible must be aware of the associated tasks – this also applies in cases of shared responsibility. Responsibility cannot be transferred to AI, but selected activities can. Liability issues are addressed within the framework of legally applicable requirements.
8. Development culture and future orientation
We are excited about the possibilities of AI. We are not afraid to make mistakes – learning from them is a key factor in our progress. We start small in our developments with a lot of monitoring (HITL and HIC approaches). Only when a sufficient level of confidence in the reliability and safety of a developed application has been achieved do we consider transitioning to the human-on-the-loop (HOTL) principle. In doing so, responsibility remains at the forefront of our eight principles. We are curious about the upcoming stages of AI development – we want to be pioneers, seize opportunities and face future challenges.
Glossary
Human-in-command (HIC)
This approach describes how AI does not perform any actions directly, but rather a human must always give approval first. An example of this would be an employee who reviews an AI-generated reply email and only sends it to customers after approval.
Human-in-the-loop (HITL)
This approach means that an AI application can act autonomously in parts, but a human can intervene at any time. An example is the voice bot, which answers routine inquiries itself but transfers the conversation to a human for more complex issues.
Human-on-the-loop (HOTL)
This approach means that the AI application can act even more independently, without humans monitoring individual conversations or approving emails first. An example would be the voice bot, which can now handle many conversations on its own. In the background, quality metrics are evaluated by humans. If anomalies occur, the human intervenes.
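To make the difference between these three approaches more tangible, the following simplified Python sketch shows how an AI-drafted reply could be handled under each of them; the confidence threshold and the message texts are hypothetical assumptions, not taken from our systems.

```python
# Purely illustrative sketch of the three oversight approaches described
# above. The confidence threshold and message texts are assumptions.
from enum import Enum

class Oversight(Enum):
    HIC = "human-in-command"    # a human approves every action first
    HITL = "human-in-the-loop"  # the AI acts in parts, but hands over when unsure
    HOTL = "human-on-the-loop"  # the AI acts, humans monitor quality metrics

def handle(draft_reply: str, mode: Oversight, confidence: float) -> str:
    if mode is Oversight.HIC:
        # Nothing is sent without explicit human approval.
        return f"Queued for human approval: {draft_reply}"
    if mode is Oversight.HITL and confidence < 0.8:
        # Complex or uncertain case: the AI steps back.
        return "Handover to a human agent"
    # HITL with sufficient confidence, or HOTL: the reply is sent and
    # quality metrics are reviewed by humans in the background.
    return f"Sent: {draft_reply}"
```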
Friendly User Test
This test describes testing and trying out an AI application relatively early in the development phase. Here, test subjects use the application with a benevolent eye and provide initial feedback from the customer's perspective.
Business User Test
This test works similarly to the Friendly User Test, except the test subjects are from the company itself. The Business User Test is used after the Friendly User Test, i.e., after initial external feedback has been incorporated and the application has been further improved.
Learn more about our AI Hub
Our AI Hub offers you a unified infrastructure for using a wide variety of AI applications. From shopping assistants to voice-based applications, we enable you to provide personalized services and shopping experiences for your customers.
- Shopping Assistant
- Voice Bot
- Brand Voice Generator
- Accessibility Bot
- Knowledge Bot
AI in retail and logistics
We do not see artificial intelligence as an end in itself but as an important tool in supporting retail and logistics. Responsibility ultimately always remains with humans. For us, this means responsible commerce.