On 24 May 2024, the Open Modeling Foundation's Certification Working Group hosted the seventh ModelShare workshop. During the workshop, we heard three distinct but interconnected perspectives on model assessment.
The first speaker was Chris Erdmann, from the Open Modeling Foundation and SciLifeLab. Chris brought a systems view to the table, highlighting the roles that eight different open science communities are already playing in promoting better model-sharing practices.
The second speaker was Simon Hettrick, from the Software Sustainability Institute (SSI) and the Southampton Research Software Group. Simon shared two success stories from the SSI: coining the term “research software engineer” and enabling the now global community that has grown around it, and challenging the UK’s Research Excellence Framework (REF) through “the hidden REF.” In both cases, recognition was crucial: research software engineers could suddenly create their own communities within academia, and the importance of roles that the REF does not typically reward was brought to light. These hidden roles include countless professionals, from data stewards and librarians to public engagement professionals and clinical trials managers. On the back of this work, the next REF will permit the submission of outputs by any staff member, including non-academic staff.
The third speaker was Natalie Meyers, from the Lucy Family Institute for Data & Society at the University of Notre Dame and GO FAIR US at the San Diego Supercomputer Center. Natalie focused on artificial intelligence impact assessments (“AI-IA”), suggesting five elements for such an assessment: (i) its process, (ii) a documentation strategy, (iii) a method for assessing impacts on individuals and groups, (iv) a method for assessing societal impacts, and (v) a schedule for the process that is feasible for the organization to implement and maintain. But an AI-IA is not a one-off exercise. At the point of deployment, an AI system must also align with the desired outcomes set out in an ethics framework. This requires auditable practices for detecting bias and privacy violations, and clear escalation processes for when issues are uncovered.
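To make the five elements a little more concrete, here is a minimal sketch in Python of how an organization might encode an AI-IA as a reviewable record. This is purely illustrative and assumes a hypothetical schema; none of the class or field names come from the talk.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of an AI impact assessment (AI-IA) record,
# mirroring the five elements outlined in the talk. All names and
# fields are illustrative assumptions, not a standard schema.

@dataclass
class AIImpactAssessment:
    system_name: str
    process: str                   # (i) how the assessment is run
    documentation_strategy: str    # (ii) where and how findings are recorded
    individual_impact_method: str  # (iii) assessing impacts on individuals and groups
    societal_impact_method: str    # (iv) assessing societal impacts
    review_schedule_months: int    # (v) a cadence the organization can sustain
    last_reviewed: date = field(default_factory=date.today)

    def review_due(self, today: date) -> bool:
        """True if the next scheduled assessment is overdue."""
        elapsed = (today.year - self.last_reviewed.year) * 12 \
                  + (today.month - self.last_reviewed.month)
        return elapsed >= self.review_schedule_months


# Example: a record for a hypothetical model deployment.
aia = AIImpactAssessment(
    system_name="example-classifier",
    process="cross-functional review board sign-off",
    documentation_strategy="versioned reports in the project repository",
    individual_impact_method="per-group error-rate audit",
    societal_impact_method="stakeholder consultation",
    review_schedule_months=6,
    last_reviewed=date(2024, 5, 24),
)
print(aia.review_due(date(2025, 1, 15)))  # True: more than 6 months elapsed
```

Treating the assessment as data rather than a one-off document is one way to capture the point that an AI-IA is an ongoing process: the schedule field makes it straightforward to flag systems whose assessments are overdue.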
Watch the three presentations below, and sign up here to learn how you can get involved with the Open Modeling Foundation’s Certification Working Group!
📸 Image by Alan Warburton / © BBC / Better Images of AI / Plant / Licenced by CC-BY 4.0