Feb 2025 — Ongoing
Content Assessment
Assess synthetic and manipulated media detection tools for how clearly they explain their function and outputs, so that users can apply these imperfect tools appropriately.
Grand Challenge
As part of UL Research Institutes (ULRI), the Digital Safety Research Institute (DSRI) works to protect the public from harms arising from rapidly evolving AI and digital systems.
DSRI is researching methodologies to advance public safety assessment of these systems.
Jun 2023 — Dec 2025
DSRI is assessing LLMs for their ability to provide expert advice in medicine, software development, and therapy.
Jan 2022 — Dec 2025
A partnership between ULRI and Northwestern University to establish best practices for the evaluation, design, and development of machine intelligence that is safe, equitable, and beneficial.
Coming soon
Assess agentic AI systems for their ability to perform tasks appropriately for the public.
Coming soon
Assess public harm incidents associated with specific types of AI/digital systems.