Video | Assessment

Rewatch the ModelShare presentations delivered by Chris Erdmann (OMF), Simon Hettrick (Software Sustainability Institute) and Natalie Meyers (University of Notre Dame and GO FAIR US)

Posted by Open Modeling Foundation on June 05, 2024 · 4 mins read

On 24 May 2024, the Open Modeling Foundation's Certification Working Group hosted the seventh ModelShare workshop. During the workshop, we heard three distinct but interconnected perspectives on model assessment.

The first speaker was Chris Erdmann, from the Open Modeling Foundation and SciLifeLab. Chris brought a systems view to the table, outlining the roles that eight different open science communities are already playing in improving model-sharing practices.

  1. The Open Modeling Foundation itself sets out to develop standards for model-sharing practices and to certify efforts that meet them.
  2. The Research Data Alliance has extended the principles of findability, accessibility, interoperability and reusability (“FAIR”) beyond data to encompass both software and machine learning.
  3. More specifically, CoMSES Net promotes the publication of models in the social and ecological sciences in a way that ensures they meet the FAIR principles.
  4. Academic publishing practices have also come a long way but must continue to help make data, software and model information more readily available; PLoS’ “open science indicators” look like a step in the right direction.
  5. The Papers with Code community was also mentioned for its efforts in parsing openly available research papers and linking them to their GitHub repositories.
  6. Hugging Face’s “model cards” were celebrated both for their level of detail and for the support they receive from librarians.
  7. OpenML collects diverse types of metadata for the machine learning models on its platform.
  8. And the Research Activity Identifier is challenging the practice of valuing and linking only academic publications, extending recognition to more diverse research outputs.

The second speaker was Simon Hettrick, from the Software Sustainability Institute (SSI) and the Southampton Research Software Group. Simon shared two success stories from the SSI: coining the term for – and enabling a now global community of – research software engineers; and challenging the UK’s Research Excellence Framework (REF) through “the Hidden REF.” In both cases, recognition was crucial: research software engineers could suddenly build their own communities within academia, and the importance of roles that the REF does not normally reward was brought to life. These “hidden roles”, as the SSI calls them, include countless professionals, from data stewards and librarians to public engagement professionals and clinical trials managers. On the back of this work, the next REF will permit submissions of outputs by any staff member, including non-academic staff.

The third speaker was Natalie Meyers, from the Lucy Family Institute for Data & Society at the University of Notre Dame and GO FAIR US at the San Diego Supercomputer Center. Natalie focused on artificial intelligence impact assessments (“AI-IA”), suggesting five elements for such an assessment: (i) its process, (ii) a documentation strategy, (iii) a method for assessing impacts on individuals and groups, (iv) a method for assessing societal impacts, and (v) a schedule for the process that is feasible for the organization to implement and maintain. But an AI-IA is not a one-off exercise. At the point of deployment, an AI system must also align with the desired outcomes set out in an ethics framework. This requires auditable practices for detecting bias and privacy violations, and clear escalation processes for when issues are uncovered.

Watch the three presentations below, and sign up here to learn how you can get involved with the Open Modeling Foundation’s Certification Working Group!

📸 Image by Alan Warburton / © BBC / Better Images of AI / Plant / Licenced by CC-BY 4.0