Learning from the Past – The AI Incident Database
We at the Digital Safety Research Institute (DSRI) of UL Research Institutes are thrilled to announce our partnership with TheCollab to continue developing and maintaining the AI Incident Database (AIID). DSRI views the AIID as a crucial first step in mitigating the harms of AI systems.
Artificial Intelligence (AI) promises to bring many advances to society and help solve our toughest problems. But many people are worried about the dangers AI systems pose: loss of control, targeted harassment, deepfakes, and social manipulation, to name a few. These harms can affect people and societies on a worldwide scale.
The first step in combating these dangers is to understand them. This is where the AIID comes into play. The AIID indexes news reports about AI incidents, making them discoverable by AI researchers, engineers, policy makers, and anyone else who is interested. The AIID helps individuals quickly learn about the types of AI incidents and their underlying causes. In addition, AI systems engineers can learn from past AI incidents and use this information to create detection and mitigation strategies that prevent similar incidents from occurring in their own systems. As George Santayana wrote in The Life of Reason (1905), "Those who cannot remember the past are condemned to repeat it."
The AIID cannot do this alone and needs your help. Anyone is invited to submit incident reports to be indexed, helping engineers build safer systems and policy makers enact data-driven policies. Anyone can also volunteer to help with the indexing. And we invite AI systems engineers and policy makers to use the database and provide feedback on its utility and ways to improve it.
At DSRI, we are excited to be contributing to AI safety through the AIID and our partnership with TheCollab.
Learn more about DSRI here.
Learn more at the AIID or submit an incident report.