
Brussels, 19 February 2024

“Check against delivery”

Good afternoon.

Let me start by thanking Stéphanie Yon-Courtin for organising this important event. As Competition Commissioner, I have been very pleased with your interest in and dedication to competition policy over the last five years in the European Parliament.

Thank you for putting the spotlight on Artificial Intelligence and competition. Because ‘human intelligence’ is exactly what we need right now, to strike the right balance on intelligence of the artificial kind. In order to shape the emerging markets that are enabled by Large Language Models and other applications in AI. To make sure that competition can thrive, and consumers reap the benefits of these new markets, without hampering their development.

By thinking ahead, by acting swiftly and by cooperating, we have a window of opportunity to maximise these benefits; while at the same time, minimising the risks. But that window is closing. If we don’t act soon, we will find ourselves, once again, chasing solutions to problems we did not anticipate. So debates like this one are not only very timely, they are also urgent.

In some ways, this feels very much like the early-2000s (“two-thousands”), at the start of the first Digital Decade: At first, we saw what the raw technologies linked to “Web 2.0” could do, and it was pretty cool: Social media, digital television, digital marketplaces. Pretty soon, digital disruption was radically changing how we book flights for a city break, where we stay and how we decide what to eat when we get there.

This first wave of disruption was good for competition – we saw this at the start of the first Digital Decade: as markets became digital, new services emerged, choice increased and the quality kept getting better. The competition problems came, but it wasn’t until the Second Digital Decade – around 2014 – that the need to act became clear. It’s worth saying that it was the EU that led on this. Other jurisdictions are still catching up.

If the development of AI mirrors this history, then the right response is to phase in competition control gradually, in line with market growth. To respond faster than we did for Web 2.0, but to still give some time for the benefits of disruption to fully play out.

The question is: is the AI revolution going to follow the same history as Web 2.0? I have my doubts. In 2005, web entrepreneurs were true pioneers, exploring an uncharted frontier. When Evan Williams and Jack Dorsey set up Twitter in 2006, they didn’t even know what to call this kind of service, much less understand its impact. The same was true of Facebook, PayPal, YouTube and dozens more companies that competed in that early ‘gold rush’. Scaling up was possible, because there was plenty of uncharted land, on which to stake your claim.

This is not what we are seeing today. As we begin the Third Digital Decade, the development of AI isn’t happening in a vacuum. And it isn’t being driven only by pioneers, university students or small research facilities. Many of the key actors here are incumbents, with power in multiple markets where AI is likely to play a role.

Another key difference is the perception of what will happen next. Two decades ago, almost no one predicted how much social media would change the world and the economy. Even the innovators themselves were surprised by how big their companies would become and how deep their impact would reach. For Artificial Intelligence, the consensus is clear: this is a transformative technology; one that will change the way we work, the way we learn, the way we buy and sell – even the way we think. It will touch virtually every aspect of the economy, and it won’t take long for these effects to be felt.

Finally, there is the nature of the technology itself. Large Language Models depend on huge amounts of data, they depend on cloud space, and they depend on chips. There are barriers to entry everywhere. Add to this the fact that the Tech Giants have the resources to acquire the best and brightest talent. We’re not going to see disruption driven by a handful of college drop-outs who somehow manage to outperform Microsoft’s partner OpenAI or Google’s DeepMind. The disruption from AI will come from within the nest of existing tech ecosystems.

Still, we can make an impact.

For me, the very first lesson from our experience so far is that our impact will always be greatest when we work together, communicate clearly, and act early on. That is why our call for contributions on Competition in Virtual Worlds and Generative AI is so important. The deadline is in a few weeks (11 March): So there is still time to make your voices heard. Beyond this, I will continue to engage with my counterparts in the United States and elsewhere, to align our approach as much as possible.

A second lesson we’ve learned is that digital markets are wide-reaching, sometimes affecting the economy in ways you might not have expected. So we have to look carefully at vertical integration and at ecosystems. We have to take account of the impact of AI in how we assess mergers. We even have to think about how AI might lead to new kinds of algorithmic collusion.

A third lesson is that competition policy has to work together with digital regulation in this fast-paced, dynamic economy. The Digital Markets Act is a new approach to preserving a level playing field online. It is born out of our years of experience with antitrust enforcement. And it is designed to go hand-in-hand with the continued use of traditional competition policy instruments.

The same has to be true for AI. How well these markets work will depend on a number of different regulatory parameters. For instance, there are still big questions around how intellectual property rights are respected. About how ethical AI is deployed. About areas where AI should never be deployed. In each of these decisions, there is a competition policy dimension that needs to be considered. Conversely, how AI regulation is enforced will affect the openness and accessibility of the markets it impacts.

There are questions around input neutrality and the influence such systems could have on our democracies. A Large Language Model is only as good as the inputs it receives, and for this there must always be a discretionary element. Do we really want our opinion-making to be reliant on AI systems that are under the control – not of the European people – but of tech oligarchs and their shareholders?

There’s no way I can answer all these questions right now – I wouldn’t dare to try. Thankfully, tonight we have an opportunity to listen to some of the best voices in the field. With their help, I’m confident we’ll move closer to the right set of solutions. Solutions that will allow this exciting new disruption to take root in Europe, in a way that is truly beneficial to our economy, to our citizens and to our democracies.

Our window of opportunity to shape this outcome won’t stay open for long. It’s up to us to seize it.

Thank you.

Source – EU Commission

 
