You can watch the recording of the debate here.
“Preventing bias in AI algorithms and in the data sets used to train these algorithms is of paramount importance. Our society has been ridden with biases, discrimination, and injustice since times immemorial. We do not want to replicate the biases and discrimination of our past or the biases of our current societies into the digital world,” said Dragoş Tudorache (Renew, RO), Chairman of the Special Committee on Artificial Intelligence in a Digital Age (AIDA).
“But even more, beyond this caveat, we should use AI and the transition to the digital age as an opportunity to reduce or even eliminate biases and discrimination from our societies. We should seize this opportunity. We have a chance to get it right,” he added.
The hearing consisted of two panel discussions with experts from academia, civil society and industry, including computer scientist and AI specialist Timnit Gebru and EU Agency for Fundamental Rights Director Michael O’Flaherty.
The first panel focused on the impact of bias on the development of trustworthy AI. The second panel discussed algorithmic accountability, data governance, and how to reduce bias in AI systems.
More information on the event, as well as the programme and meeting documents, is available on the hearing webpage.
Background
A study from the European Parliament’s research service for the Panel for the Future of Science and Technology (STOA) highlights how AI can be susceptible to bias. Systematic bias may arise from the data used to train systems or from the values held by system developers and users. It most frequently occurs when machine learning applications are trained on data that reflect only certain demographic groups, or that mirror existing societal biases. A number of cases have drawn attention to unintended social bias that AI systems have then reproduced or automatically reinforced.
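To illustrate the mechanism described above, the following is a minimal, hypothetical sketch (not drawn from the STOA study) of how a model trained mostly on one demographic group can perform noticeably worse on an under-represented group. The groups, features, thresholds and sample sizes are entirely synthetic assumptions chosen for illustration.

```python
# Hypothetical illustration: a decision rule fitted to pooled data dominated by
# one group ends up tuned to that group and misclassifies the minority group.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, threshold):
    """Synthetic group: one score feature; the true label depends on a
    group-specific threshold, so a single rule cannot fit both groups."""
    x = rng.normal(loc=0.0, scale=1.0, size=n)
    y = (x > threshold).astype(int)
    return x, y

# Group A dominates the training data; group B is under-represented.
x_a, y_a = make_group(950, threshold=0.0)   # majority group
x_b, y_b = make_group(50,  threshold=0.8)   # minority group, different pattern

x_train = np.concatenate([x_a, x_b])
y_train = np.concatenate([y_a, y_b])

# "Training": pick the single cut-off that maximises accuracy on the pooled data.
candidates = np.linspace(-2, 2, 401)
accs = [np.mean((x_train > c).astype(int) == y_train) for c in candidates]
cutoff = candidates[int(np.argmax(accs))]

# Evaluate on fresh samples from each group: group B's error rate is much higher.
for name, thr in [("group A", 0.0), ("group B", 0.8)]:
    x_test, y_test = make_group(10_000, threshold=thr)
    acc = np.mean((x_test > cutoff).astype(int) == y_test)
    print(f"{name}: accuracy {acc:.2%} (learned cut-off {cutoff:.2f})")
```

Because the learned cut-off is driven almost entirely by the majority group, accuracy for group B drops sharply, which is the kind of unintended, data-driven bias the study warns about.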