Reliable and Secure Systems

Dependable AI and secure systems for the real world.

The RSS group develops methods, tools, and systems to make AI and software more reliable, secure, and trustworthy in safety-critical environments.

Research themes

4

Curated focus areas spanning dependable AI and security.

Active projects

3

Current grants and collaborations across lab priorities.

Open opportunities

3

Student and staff roles for prospective group members.

Featured

Research Topics

Explore the research areas that connect our work on dependable AI, secure systems, and safety-critical software.


Research

Research highlights

Our work combines rigorous engineering, systems design, and applied AI methods to support dependable deployment in high-stakes settings.

Runtime Assurance

Monitoring, contracts, and fallback mechanisms that keep adaptive systems inside safe operating envelopes.

4 people · 0 projects


Secure AI Infrastructure

Systems support for auditable deployment, reproducible experiments, and trustworthy supply chains.

3 people · 0 projects


Evaluation and Benchmarking

Structured assessment of model behavior, failure modes, and operational readiness in safety-critical settings.

3 people · 0 projects


Human-Centered Operations

Interfaces and workflows that help teams supervise, debug, and govern complex AI-enabled systems.

2 people · 0 projects


Highlights

Featured projects

A representative sample of current projects spanning runtime assurance, secure AI infrastructure, and operational readiness.

Assurance Cases for Learning-Enabled Robotics

Funder: European Research Council

Active

2025-2029

Developing assurance-case templates, runtime monitors, and operator feedback loops for mobile robots in public spaces.

Runtime Assurance · Evaluation

Trusted Telemetry for Edge AI Systems

Funder: German Research Foundation

Active

2024-2027

Building lightweight attestation and telemetry pipelines so edge deployments can be observed without compromising performance.

Secure AI Infrastructure · Edge Systems

Why this matters

Translating research into dependable practice

We focus on workflows and infrastructure that help research results hold up under operational constraints, not just in controlled benchmarks.

News

Latest updates

The latest public updates are loaded from Supabase and may link to internal announcements, external posts, or LinkedIn updates.

No public updates are available yet. News items will appear here automatically once they are published in Supabase.

Get involved

Open opportunities

Current openings and collaboration pathways for students, engineers, and visiting researchers.

PhD Position in Runtime Assurance for Embodied AI

PhD

Open

Start: October 2026

Deadline: 31 May 2026

Work on runtime monitoring, fail-safe design, and safety evidence for AI-enabled robotic systems operating with partial observability.

Research Software Engineer for Secure Experiment Infrastructure

Research Staff

Open

Start: Flexible from Summer 2026

Deadline: Open until filled

Support reproducible experimentation, deployment automation, and secure systems tooling across the group’s platforms.

Typical applicants include doctoral students in systems, security, robotics, and trustworthy AI, as well as research engineers who enjoy building durable tooling.

Quick links

Explore the current lab structure and contact points

This milestone focuses on a polished frontend skeleton. Content is realistic placeholder material designed to show how people, projects, publications, and recruitment fit together across the site.

Events

Recent events

Seminars, talks, and public activities from the RSS group.

No public events yet

Upcoming and recent events will appear here once they are published in Supabase.