An AI review layer for trusted and ethical AI-based systems
one.O's Second Instance identifies ethical risks and explains model decisions in plain language. It also generates tamper-proof audit logs so compliance, legal, and product teams can uphold an ethical framework across the AI workflow, reinforcing accountability and governance.
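Tamper evidence in an audit log is commonly achieved by hash-chaining entries, so that altering any record invalidates every later one. The sketch below illustrates that general technique only; the function and field names are our own and not one.O's API:

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev_hash": prev_hash, "hash": entry_hash})
    return log

def verify_chain(log):
    """Recompute every hash; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"decision": "approve", "model": "assistant-v1"})
append_entry(log, {"decision": "flag", "reason": "possible bias"})
assert verify_chain(log)

log[0]["event"]["decision"] = "flag"  # tampering with an early record...
assert not verify_chain(log)          # ...is detected by verification
```

Because each hash covers the previous one, an auditor only needs to trust the most recent hash to verify the entire history.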
Second Instance was developed in collaboration with Prof. Jürgen Anke and Prof. Maik Thiele from HTW Dresden; grounded in scientific methods, we jointly explored how ethical AI drives business innovation. Second Instance functions as a built-in quality check and impartial "LLM-as-a-judge" layer. It integrates seamlessly into existing machine-learning pipelines, offering continuous monitoring and automated remediation to bridge the responsibility gap in AI.
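Conceptually, an LLM-as-a-judge layer sits between the generating model and the user: each candidate output is scored against a policy before release. The following is a minimal sketch of that pattern, not Second Instance's actual interface; the keyword heuristic merely stands in for a reviewing model, and all names and thresholds are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    approved: bool
    score: float      # 0.0 (non-compliant) .. 1.0 (fully compliant)
    rationale: str    # plain-language explanation for the audit trail

def judge(output: str, threshold: float = 0.8) -> Verdict:
    """Hypothetical judge layer: in production this step would call a
    reviewing LLM; here a keyword heuristic stands in for its score."""
    flagged_terms = ["guaranteed", "only for"]
    hits = sum(term in output.lower() for term in flagged_terms)
    score = max(0.0, 1.0 - 0.5 * hits)
    rationale = ("No flagged phrasing found." if hits == 0 else
                 f"{hits} flagged phrase(s) suggest risky or exclusionary wording.")
    return Verdict(approved=score >= threshold, score=score, rationale=rationale)

print(judge("This offer is guaranteed, but only for selected customers."))
```

The key design point is that the judge returns a structured verdict with a rationale, so downstream remediation and audit logging can act on the explanation, not just a pass/fail bit.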
As part of the Otto Group, we align with the group's values and shared commitment to trust, transparency, and sustainability, helping businesses deploy ethically certified AI systems with confidence.
Ethical quality assured within your AI ecosystem
Building ethical AI begins with credibility: earning the trust of customers and stakeholders.
Second Instance reviews AI outputs (text, images, audio, and video) for compliance, fairness, and safety. As a central AI Quality Gate, it flags bias or discriminatory content early and feeds learnings back into the system to strengthen accuracy, reliability, and trust across all AI channels.
Ethics in AI systems and customer-facing solutions
Companies using AI-supported communication that need predictable assistant replies to reduce risky outputs and preserve brand integrity.
Platform and app providers seeking to demonstrate trusted models, accelerate certification readiness, and secure enterprise clients.
Compliance and legal teams needing explainable AI and auditable records to streamline audit preparation and support strategic decisions.
International brands and service providers entering EU markets who must comply with GDPR and meet regional AI requirements.
Responsible and transparent AI experiences
Ethical AI relies on transparency, respect for user privacy, and established guidelines, promoting accountability within AI-driven touchpoints.
Conversational chatbots can unintentionally show bias or stray from brand tone. With Second Instance, every interaction is pre-audited for fairness and safety. Our website knowledge bot, monitored by AI Quality Gate, automatically detects risky replies and regenerates compliant versions. Approved responses ensure responsible interactions and build user trust.
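A detect-and-regenerate loop of this kind can be sketched as follows. The `generate` and `is_compliant` callables below are placeholders standing in for the assistant model and the Quality Gate check, not real one.O APIs:

```python
def compliant_reply(prompt, generate, is_compliant, max_attempts=3):
    """Ask the model for a reply; if the gate rejects it, regenerate
    with the objection folded back into the prompt."""
    feedback = ""
    for _attempt in range(max_attempts):
        reply = generate(prompt + feedback)
        ok, reason = is_compliant(reply)
        if ok:
            return reply
        feedback = f"\n[Revise: {reason}]"
    # Fall back to a safe canned answer rather than ship a risky reply.
    return "I'm sorry, I can't help with that. Let me connect you to support."

# Toy stand-ins for demonstration: the first draft is rejected,
# the regenerated draft passes the compliance check.
drafts = iter(["Only men buy this.", "This product suits everyone."])
result = compliant_reply(
    "Who is this product for?",
    generate=lambda p: next(drafts),
    is_compliant=lambda r: ("only" not in r.lower(), "exclusionary wording"),
)
print(result)  # the regenerated, compliant version is returned
```

Bounding the number of attempts and defining an explicit fallback matters here: a gate that can reject must also decide what the user sees when every regeneration fails.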
AI recommendations shape the customer experience in the online shopping assistant. Second Instance reviews personalization logic to spot manipulative or biased product suggestions, such as unfair pricing tied to user demographics.
Each decision includes an accountable summary, and reviewed cases are logged for oversight. This keeps AI recommendations explainable and aligned with ethical standards.
Diversity in visual content is essential for fashion retailers and e-commerce. Second Instance operates as an ethical AI Quality Gate on MOVEX | Virtual Content Creator, certifying images and virtual models for balanced representation across skin tones, body types, and styles.
Teams receive guidance to adjust prompts or generation settings. This reduces stereotyping and promotes authentic inclusivity in every campaign.
AI at one.O: guidelines for responsible deployment
These guidelines help us navigate the important relationship between people and machines. They include seven key points that reflect the legal standards and ethical values we uphold as we implement responsible AI.
Quality gate for clear and reliable AI outcomes
We help organizations implement AI solutions that are measurable, manageable, and accountable. With Second Instance, teams can deliver consistent AI experiences that strengthen trust and reliability while setting a standard for responsible innovation.
With Second Instance, virtual shopping assistants, recommendation engines, and other AI applications are monitored for bias, plausibility, and consistent performance in retail, finance, and healthcare. Customer interactions are assessed for fairness and balanced treatment.
Every AI decision is documented and interpretable. Teams gain insights into why outputs are generated, supporting accountable decision-making for both customer-focused and internal processes.
Second Instance evaluates AI-generated content with dynamic ethics scores and real-time dashboards tracking KPIs on trust, bias, fairness, and performance. This allows organizations to monitor reliability and enhance customer experience.
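Dashboard KPIs of this kind typically roll individual review verdicts up into rates over a reporting window. A minimal aggregation sketch, with illustrative field names of our own choosing rather than Second Instance's data model:

```python
from collections import Counter

def kpi_summary(reviews):
    """Aggregate per-output review records into dashboard-style KPIs."""
    counts = Counter(r["verdict"] for r in reviews)
    total = len(reviews)
    return {
        "total_reviewed": total,
        "approval_rate": counts["approved"] / total if total else 0.0,
        "bias_flag_rate": counts["bias_flagged"] / total if total else 0.0,
        "mean_ethics_score": (sum(r["score"] for r in reviews) / total
                              if total else 0.0),
    }

reviews = [
    {"verdict": "approved", "score": 0.95},
    {"verdict": "approved", "score": 0.90},
    {"verdict": "bias_flagged", "score": 0.40},
    {"verdict": "rejected", "score": 0.20},
]
print(kpi_summary(reviews))
```

Tracking rates rather than raw counts keeps the KPIs comparable across channels with very different traffic volumes.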
Second Instance protects customer data with robust privacy features, ensuring compliance with GDPR, the EU AI Act, and other local requirements while maintaining responsible AI practices.
Contact us!
Do you need more information? Our experts would be pleased to assist you.