Outcomes

As DSRI’s research progresses, the Institute is committed to sharing its outcomes so that the public can see how our work strengthens the digital ecosystem. These outcomes include peer-reviewed publications, keynote presentations, and other materials.

Crisman, J. (2024, September). IP Crime in the Face of Generative AI. Presented at the 16th annual IP Crime Conference, Oslo, Norway.

Kieslich, K., Helberger, N., & Diakopoulos, N. (2024, June). My Future with My Chatbot: A Scenario-Driven, User-Centric Approach to Anticipating AI Impacts. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (pp. 2071-2085).

Herasymuk, D., Arif Khan, F., & Stoyanovich, J. (2024, June). Responsible Model Selection with Virny and VirnyView. In Companion of the 2024 International Conference on Management of Data (pp. 488-491).

Zhang, H. (2024, May). Searching for the Non-Consequential: Dialectical Activities in HCI and the Limits of Computers. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1-13).

Kawakami, A., Coston, A., Zhu, H., Heidari, H., & Holstein, K. (2024, May). The Situate AI Guidebook: Co-Designing a Toolkit to Support Multi-Stakeholder, Early-stage Deliberations Around Public Sector AI Proposals. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1-22).

McGregor, S. (2024, April). Open digital safety. Computer, 57(4), 99-103.

Chadda, A., McGregor, S., Hostetler, J., & Brennen, A. (2024, March). AI Evaluation Authorities: A Case Study Mapping Model Audits to Persistent Standards. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 21, pp. 23035-23040).

Guha, S., Khan, F. A., Stoyanovich, J., & Schelter, S. (2024, February). Automated data cleaning can hurt fairness in machine learning-based decision making. IEEE Transactions on Knowledge and Data Engineering.

Kawakami, A., Guerdan, L., Cheng, Y., Glazko, K., Lee, M., Carter, S., ... & Holstein, K. (2023, November). Training towards critical use: Learning to situate AI predictions relative to human knowledge. In Proceedings of The ACM Collective Intelligence Conference (pp. 63-78).

Rastogi, C., Leqi, L., Holstein, K., & Heidari, H. (2023, November). A Taxonomy of Human and ML Strengths in Decision-Making to Investigate Human-ML Complementarity. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 11, No. 1, pp. 127-139).

Guerdan, L., Coston, A., Wu, S., & Holstein, K. (2023, October). Policy Comparison Under Unmeasured Confounding. In NeurIPS 2023 Workshop on Regulatable ML.

Loke, L. Y., Barsoum, D. R., Murphey, T. D., & Argall, B. D. (2023, September). Characterizing eye gaze for assistive device control. In 2023 International Conference on Rehabilitation Robotics (ICORR) (pp. 1-6). IEEE.

McGregor, S. (2023, August). A Scaled Multiyear Responsible Artificial Intelligence Impact Assessment. Computer, 56(8), 20-27.

Hammond, K., & Leake, D. (2023, July). Large language models need symbolic AI. In Proceedings of the 17th International Workshop on Neural-Symbolic Learning and Reasoning, La Certosa di Pontignano, Siena, Italy (Vol. 3432, pp. 204-209).

Guerdan, L., Coston, A., Holstein, K., & Wu, Z. S. (2023, June). Counterfactual prediction under outcome measurement error. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 1584-1598).

Bell, A., Bynum, L., Drushchak, N., Zakharchenko, T., Rosenblatt, L., & Stoyanovich, J. (2023, June). The possibility of fairness: Revisiting the impossibility theorem in practice. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 400-422).

Guerdan, L., Coston, A., Wu, Z. S., & Holstein, K. (2023, June). Ground(less) truth: A causal framework for proxy labels in human-algorithm decision-making. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 688-704).

Holstein, K., De-Arteaga, M., Tumati, L., & Cheng, Y. (2023, April). Toward supporting perceptual complementarity in human-AI collaboration via reflection on unobservables. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), 1-20.

Ackerman, R., Icobelli, F., & Balyan, R. (2022, December). Intelligent Tutoring Systems in Healthcare for Low Literacy Population: A Concise Review. In 2022 International Conference on Computational Science and Computational Intelligence (CSCI) (pp. 1827-1829). IEEE.

Xu, S., Mi, L., & Gilpin, L. H. (2022, November). A Framework for Generating Dangerous Scenes for Testing Robustness. In Progress and Challenges in Building Trustworthy Embodied AI.

Rhea, A. K., Markey, K., D’Arinzo, L., Schellmann, H., Sloane, M., Squires, P., ... & Stoyanovich, J. (2022, September). An external stability audit framework to test the validity of personality prediction in AI hiring. Data Mining and Knowledge Discovery, 36(6), 2153-2193.

Jenkins, R., Hammond, K., Spurlock, S., & Gilpin, L. (2022, March). Separating facts and evaluation: motivation, account, and learnings from a novel approach to evaluating the human impacts of machine learning. AI & SOCIETY, 38(4), 1415-1428.

2023 Impact Report
Our annual impact report describes how UL Research Institutes is working to address evolving public safety risks through groundbreaking safety science research.