Grand Challenge

Promoting Safety at the Human-Digital Interface

As part of UL Research Institutes, the Digital Safety Research Institute (DSRI) aims to better protect the public from rapidly evolving AI and digital harms.

Our Research Projects

DSRI is researching methodologies to advance public safety assessment of rapidly evolving AI/digital systems.

Feb 2025 — Ongoing

Content Assessment

Assess synthetic and manipulated media detection tools for their ability to clearly explain their function and outputs, so that users can interpret and apply these imperfect tools appropriately.

Jun 2023 — Dec 2025

Chatbot Assessment

Assess LLMs for their ability to provide expert advice in medicine, software development, and therapy.

Future Research

Coming soon

AI Agent Assessment

Assess agentic AI systems for their ability to perform tasks appropriately on behalf of the public.

Coming soon

Incident Assessment

Assess public harm incidents associated with specific types of AI/digital systems.
