
    Brussels, 30 October 2023

    We have all witnessed the recent emergence of new and highly sophisticated generative AI tools, such as ChatGPT (OpenAI). This technology, based on Large Language Models (LLMs), is here to stay and will undoubtedly have a massive impact on the world of work and on the societies in which we live. Generative AI tools are already starting to affect the way many of us work, whether in the public or private sector.

    Panel discussion on ChatGPT and other generative AI tools

    The aim of the session organised at the Council by the Council Library, together with Springer Nature, was to hear from experts in the field about recent developments in generative AI tools, the key benefits and the main risks.

    Moderated by Stavroula Kousta, Chief Editor, Nature Human Behaviour, the panel included:

    • Virginia Dignum, Professor of Social and Ethical Artificial Intelligence at Umeå University in Sweden and associated with the Delft University of Technology in the Netherlands;
    • Sandra Wachter, Professor of Technology and Regulation at the Oxford Internet Institute at the University of Oxford;
    • Francesca Rossi, AI Ethics Global Leader and a distinguished research staff member at IBM Research AI;
    • Martin Goodson, former Chair of the RSS Data Science and AI Section (2019-2022), organiser of the London Machine Learning Meetup and CEO of the AI startup Evolution AI;
    • Dieuwke Hupkes, research scientist at Facebook AI Research and involved with the Amsterdam unit of ELLIS.

    Among the many key issues raised, speakers talked about the benefits of this new technology, including:

    • The ability to perform a variety of language-based tasks that were not previously possible, such as summarising large amounts of data.
    • The capability to create complex text and images that can later be used to address difficult issues, such as climate change.
    • What we can learn from neural networks: they make it easier to discover patterns we have not seen before, and they are also very useful for routine, time-consuming tasks.
    • The potential for research within the academic community to become more inclusive and to progress faster with open-source LLMs: the more people who work on them, the quicker potential solutions to problems can be found.

    The panellists also observed that generative AI amplifies existing risks relating to bias, discrimination, privacy, transparency, job security and more, as well as posing some new ones, in particular:

    • Increasing the speed at which misinformation and extreme views can be disseminated.
    • Disruption in society and the displacement of white-collar jobs in areas such as law, banking and engineering, as the value placed on education and expertise declines.
    • New issues related to copyright infringement, privacy, and plagiarism.
    • The environmental cost of the energy needed to store and maintain massive amounts of data.

    A lively discussion took place on whether AI tools could think, and on how claims of imminent superintelligent machines taking over society were overshadowing, and even preventing, discussion of the current and very real risks society faces from (generative) AI. The importance of good, future-proof regulation and accountability was highlighted, with one speaker calling for AI to be built with seat belts and brakes, and for ‘drivers’ licences’ and traffic rules to be factored in. It was pointed out that the risks lay not so much in the technology itself as in its use and application. A multidisciplinary and multi-stakeholder approach was called for, with policy-makers, companies and research communities all playing a role in the safe development and use of AI tools as well as in supporting innovation. On averting the risk of societal disruption, several panellists strongly supported a truly open AI model, possibly within a publicly funded project modelled on the Human Genome Project.

    If you are fascinated by Artificial Intelligence, and in particular by generative AI tools such as ChatGPT, you may like to consult the Council Library LibGuides, which will help you find authoritative information on these matters.

    This post does not necessarily represent the positions, policies, or opinions of the Council of the European Union or the European Council.

    Source – EU Council
