Monitoring Machine Learning Systems from the Point of View of AI Ethics
Permanent address
Description
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
The practical implementation of AI ethics remains a challenge. Guidelines and principles are numerous, but converting them into practice appears difficult for organizations developing ML systems. It is argued that bringing AI ethics closer to software engineering practice could help. In this regard, monitoring ML systems with ethics-related metrics could be one way of making ethics more tangible. While various existing papers discuss technical approaches to, for example, monitoring fairness, a more holistic view of monitoring AI ethics is lacking, as is discussion of MLOps and ethics. In this paper, we discuss AI ethics from the point of view of monitoring, building on existing research from AI ethics, software engineering, and machine learning, to propose a typology of metrics for monitoring ML systems during their operational lives. We then discuss monitoring ML systems from the point of view of AI ethics by building on this typology and using the Ethics Guidelines for Trustworthy AI (AI HLEG) as a framework to illustrate what monitoring AI ethics might mean in practice. In doing so, we highlight that (a) some issues related to AI ethics are hardly unique to it and are frequently tackled in ML monitoring, and (b) although AI ethics involves many high-level design decisions made early in the development of a system, various aspects of AI ethics may still be monitored. Overall, this paper presents an initial discussion on the topic in the hope of encouraging further studies on it.
Parent publication
Proceedings of the Conference on Technology Ethics 2024 (Tethics 2024)
ISBN
ISSN
1613-0073
Subject area
Series
CEUR Workshop Proceedings | 3901
Publication type (OKM classification)
A4 Article in conference proceedings