A recently published Panel for the Future of Science and Technology (STOA) study examines the impact of biases in the datasets used to support decision-making systems based on artificial intelligence. It explores the ethical implications of deploying digital technologies in the context of proposed European Union legislation, such as the AI act, the data act and the data governance act, as well as the recently approved Digital Services Act and Digital Markets Act. It ends by setting out a range of policy options to mitigate the pernicious effects of biases in decision-making systems that rely on machine learning.
Machine learning (ML) is a form of artificial intelligence (AI) in which computers develop their own decision-making processes for situations that cannot be directly and satisfactorily addressed by existing algorithms. The process is tuned by exploring data on similar past situations, including the solutions found at the time. The broader and more balanced the dataset, the better the chances of obtaining a valid result; but there is no a priori way of knowing whether the available data capture all aspects of the problem at hand. The outputs of AI-based systems can be biased owing to imbalances in the training data, or because the data source is itself biased with respect to ethnicity, gender or other factors.
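To make this mechanism concrete, the short Python sketch below (our illustration, not taken from the study) trains a standard logistic regression on synthetic 'historical' data in which one group needed a higher merit score to receive a positive label. The trained model then reproduces that disadvantage for candidates with identical merit. All data, names and parameters are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def synth(n_a, n_b, bias_against_b=0.0):
    """Two groups with the same merit distribution. `bias_against_b` raises
    the bar group B had to clear to receive a positive historical label."""
    group = np.r_[np.zeros(n_a), np.ones(n_b)]
    merit = rng.normal(size=group.size)
    noise = rng.normal(scale=0.3, size=group.size)
    y = (merit - bias_against_b * group + noise > 0).astype(int)
    return np.c_[merit, group], y

# Historical label bias: group B needed roughly one extra standard
# deviation of merit to be labelled positive in the past.
X, y = synth(5000, 5000, bias_against_b=1.0)
model = LogisticRegression().fit(X, y)

# The model faithfully reproduces the bias: at identical merit,
# group B is predicted positive far less often than group A.
probe = np.array([[0.5, 0.0], [0.5, 1.0]])  # same merit, different group
print(model.predict_proba(probe)[:, 1])     # roughly ~0.9 for A vs ~0.1 for B
```

The point of the sketch is that nothing in the learning algorithm is 'unfair'; the model simply extrapolates the discrimination already encoded in its training labels.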
Bias is commonly considered one of the most detrimental effects of AI use, and serious commitments are therefore being made to reduce its incidence as far as possible. However, bias pre-dates the creation of AI tools: all human societies are biased, and AI only reproduces what we are. Opposing the technology on these grounds would therefore merely hide discrimination, not prevent it. Our task must be to use the many means at our disposal to mitigate AI's biases. Indeed, it is likely that, at some point in the future, recommendations made by an AI mechanism will contain less bias than those made by human beings, because unlike humans, AI can be reviewed and its flaws corrected on a consistent basis. Ultimately, AI could serve to build fairer, less biased societies.
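The claim that AI, unlike humans, can be reviewed systematically can be illustrated with a simple audit metric. The sketch below (again hypothetical, not a procedure prescribed by the study) computes per-group selection rates from a model's stored decisions and flags the system when the disparate-impact ratio falls below the 'four-fifths' rule of thumb used in US employment-discrimination practice; the decisions and threshold shown are assumptions for illustration.

```python
import numpy as np

def disparate_impact(predictions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = [predictions[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Hypothetical audit of logged model decisions (1 = positive outcome)
preds = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0])
grp   = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

ratio = disparate_impact(preds, grp)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths rule of thumb
    print("flag for review: selection rates differ markedly between groups")
```

An equivalent audit of a human decision-maker would require painstaking case-by-case review; for an ML system, the check can be run automatically over every decision it has ever made.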
Rather than increasing regulation, it is crucial to ensure that existing rules, such as the EU's General Data Protection Regulation (GDPR), cover any new aspects that may appear as the technology evolves. European legislation such as the proposed AI act (together with the data act proposal and the data governance act) may apply not only to algorithms but also to datasets, thereby enforcing the explainability of decisions obtained through ML-based systems. The idea of setting up AI ethics committees to assess and certify the systems or datasets used in ML is also promoted by organisations such as the International Organization for Standardization (ISO) and the European Committee for Standardization (CEN), and the Organisation for Economic Co-operation and Development (OECD) follows similar lines in its recommendations on AI. While setting standards and certification procedures seems a good way forward, it may also create a false impression of safety, as ML systems and the datasets they use are dynamic and continue to learn from new data. A dynamic follow-up process would therefore also be required to guarantee that the rules are respected, in line with the FAIR principles of data management and stewardship (findability, accessibility, interoperability and reusability).
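As a rough sketch of what such a dynamic follow-up process could look like in practice (our assumption, not a mechanism specified in the study or in any standard), the snippet below re-runs the same kind of fairness check on every new batch of decisions a live ML system logs, so that a certification granted once does not silently lapse as the system keeps learning. The batch contents and the alert threshold are invented for the example.

```python
import numpy as np

THRESHOLD = 0.8  # assumed policy threshold, e.g. the four-fifths rule

def selection_ratio(preds: np.ndarray, group: np.ndarray) -> float:
    """Lowest group selection rate divided by the highest."""
    rates = [preds[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

def audit_stream(batches):
    """`batches` yields (predictions, group) arrays logged by the live system."""
    for i, (preds, group) in enumerate(batches):
        ratio = selection_ratio(preds, group)
        status = "OK" if ratio >= THRESHOLD else "ALERT: re-examine model and data"
        print(f"batch {i}: ratio = {ratio:.2f} -> {status}")

# Toy stream: the first batch treats both groups equally,
# the second has drifted towards unequal treatment.
batches = [
    (np.tile([1, 0], 100), np.repeat([0, 1], 100)),   # 50% rate in both groups
    (np.r_[np.ones(80), np.zeros(120)].astype(int),   # skewed batch
     np.r_[np.zeros(100), np.ones(100)].astype(int)),
]
audit_stream(batches)
```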
The STOA report begins by providing an overview of biases in the context of artificial intelligence, and more specifically of machine-learning applications. The second part is devoted to the analysis of biases from a legal point of view, which shows that shortcomings in this area call for additional regulatory tools to address the issue of bias adequately. Finally, the study and its accompanying STOA options brief put forward a range of policy options in response to the challenges identified.
Read the full report and STOA options brief to find out more. The study was presented by its authors to the STOA Panel at its meeting on 7 July 2022.
Your opinion counts for us. To let us know what you think, get in touch via stoa@europarl.europa.eu.
Source: STOA study on auditing the quality of datasets used in algorithmic decision-making systems