Reliable and Secure Systems
Dependable AI and secure systems for the real world.
The RSS group develops methods, tools, and systems to make AI and software more reliable, secure, and trustworthy in safety-critical environments.
Research themes
4
Curated focus areas spanning dependable AI and security.
Active projects
3
Current grants and collaborations across lab priorities.
Open opportunities
3
Student and staff roles for prospective group members.
Research
Research highlights
Our work combines rigorous engineering, systems design, and applied AI methods to support dependable deployment in high-stakes settings.
Runtime Assurance
Monitoring, contracts, and fallback mechanisms that keep adaptive systems inside safe operating envelopes.
Explore topic
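The pattern this theme describes — a runtime monitor that checks a safe operating envelope and switches an adaptive system to a verified fallback when the envelope is violated — can be sketched in a few lines. This is a minimal, illustrative simplex-style loop; the controllers, envelope bounds, and toy dynamics are placeholders, not the group's actual tooling.

```python
# Minimal sketch of a simplex-style runtime assurance loop.
# The envelope bounds, controllers, and plant dynamics below are
# illustrative placeholders, not the group's actual systems.

def advanced_controller(state: float) -> float:
    """Performant but unverified policy (e.g. a learned controller)."""
    return 1.5 * state  # may push the system outside the envelope

def fallback_controller(state: float) -> float:
    """Simple, verified policy that steers back toward the envelope."""
    return -0.5 * state

def inside_envelope(state: float, lo: float = -10.0, hi: float = 10.0) -> bool:
    """Runtime monitor: check the safe operating envelope."""
    return lo <= state <= hi

def step(state: float) -> float:
    """One control step: use the advanced controller only while safe."""
    if inside_envelope(state):
        action = advanced_controller(state)
    else:
        action = fallback_controller(state)
    return state + 0.1 * action  # toy plant dynamics

state = 5.0
for _ in range(20):
    state = step(state)
```

The key design point is that safety rests only on the monitor and the fallback, which are simple enough to verify, while the advanced controller stays free to optimize performance inside the envelope.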
Secure AI Infrastructure
Systems support for auditable deployment, reproducible experiments, and trustworthy supply chains.
Explore topic
Highlights
Featured projects
A representative sample of current projects spanning runtime assurance, secure AI infrastructure, and operational readiness.
Assurance Cases for Learning-Enabled Robotics
Funder: European Research Council
2025–2029
Developing assurance-case templates, runtime monitors, and operator feedback loops for mobile robots in public spaces.
Trusted Telemetry for Edge AI Systems
Funder: German Research Foundation
2024–2027
Building lightweight attestation and telemetry pipelines so edge deployments can be observed without compromising performance.
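One lightweight way to make edge telemetry observable without trusting the transport, in the spirit of the project above, is to attach a keyed MAC to each record so tampering is detectable on receipt. The sketch below uses Python's standard-library `hmac`; the key handling, record fields, and verification flow are illustrative assumptions, not the project's actual attestation protocol.

```python
import hashlib
import hmac
import json

# Illustrative sketch: tamper-evident telemetry records via HMAC-SHA256.
# Key provisioning and record schema are placeholders, not the
# project's actual pipeline.

def sign_record(record: dict, key: bytes) -> dict:
    """Attach an HMAC-SHA256 tag over a canonical JSON encoding."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"record": record, "tag": tag}

def verify_record(signed: dict, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(signed["record"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])

key = b"demo-shared-key"  # in practice, provisioned per device
signed = sign_record({"device": "edge-01", "cpu_temp_c": 51.2}, key)
```

Verification succeeds only on unmodified records: `verify_record(signed, key)` returns `True`, but changing any field after signing makes it return `False`. Symmetric MACs keep the per-record overhead small, which matters on constrained edge hardware.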
Why this matters
Translating research into dependable practice
We focus on workflows and infrastructure that help research results hold up under operational constraints, not just in controlled benchmarks.
News
Latest updates
The latest public updates are loaded from Supabase and may point to internal announcements, external posts, or LinkedIn updates.
No public updates are available yet. News items will appear here automatically once they are published in Supabase.
Get involved
Open opportunities
Current openings and collaboration pathways for students, engineers, and visiting researchers.
PhD Position in Runtime Assurance for Embodied AI
PhD
Start: October 2026
Deadline: 31 May 2026
Work on runtime monitoring, fail-safe design, and safety evidence for AI-enabled robotic systems operating under partial observability.
Research Software Engineer for Secure Experiment Infrastructure
Research Staff
Start: Flexible from Summer 2026
Deadline: Open until filled
Support reproducible experimentation, deployment automation, and secure systems tooling across the group’s platforms.
Quick links
Explore the current lab structure and contact points
This milestone focuses on a polished frontend skeleton. Content is realistic placeholder material designed to show how people, projects, publications, and recruitment fit together across the site.
Events
Recent events
Seminars, talks, and public activities from the RSS group.
No public events yet
Upcoming and recent events will appear here once they are published in Supabase.

