The explainability of Artificial Intelligence algorithms, in particular Machine-Learning algorithms, has become a major concern for society, and policy-makers across the globe are starting to respond to it.
In Europe, the High-Level Expert Group on AI has proposed seven requirements for trustworthy AI: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity/non-discrimination/fairness, societal and environmental wellbeing, and accountability.
To read more, visit:
https://www.cerre.eu/publications/explaining-black-box-when-law-controls-ai