Brussels, 28 June 2024

“Check against delivery”

I’m delighted to be joining you today to conclude this workshop on competition in Virtual Worlds and Generative AI. This is a hot topic. And it’s showing no signs of cooling down. On the contrary.

Generative AI is a transformative innovation that gives us a powerful new tool. It holds tremendous promise. Of turbocharging the economy, boosting growth, productivity and competitiveness. Of bringing in a new industrial revolution. A new world. The possibilities seem endless. From run-of-the-mill things like speeding up customer relations, administrative tasks or software development. To cool stuff like drug discovery and development, or personalised medical treatment plans.

What will all of this mean for us in the future? With the new opportunities come new challenges and threats. To privacy, equality, some say even to our democracy. There are both pessimists and optimists. But we as competition authorities have to be realists. Because we act in the here and now. To make markets work for people. So is there anything that we can do? And if so, what is it? We are on a learning curve, and we are facing a steep hill. That is why a workshop like today’s matters so much. Because we won’t make it to the top of the curve unless we go on this journey together.

We have many questions. Does AI have the power to disrupt the status quo? Will there be real competition from new challengers? Or will it remain among the established tech giants who claim intense rivalry but actually dominate the market? Will it give the big companies even more power, or could we just go from one monopoly to the next?

Will the foundation level remain concentrated and what is needed to open it up for entry? How will start-ups access the necessary inputs to develop and build their models? What about licensing and open sourcing? Knowing that open-source often becomes proprietary over time? We have been discussing many of those questions today and step by step we are getting closer to finding answers.

What we do know for certain is that we need to find out fast what an effective governance model for AI and virtual worlds should look like. A model that can quickly fix the competitive harms in AI markets without stifling innovation.

Because now is the time to act. Strong competition enforcement is always needed at times of big industrial and tech changes. It is then that markets can tip, that monopolies can be formed, and that innovation can be snuffed out. When competition enforcement steps in, it allows different business models and new ideas to develop. So we must get ready. AI and the metaverse are developing at breakneck speed. We cannot just sit back and see how things pan out.

This isn’t like 20 years ago when we were just figuring out the internet and the digital economy was just budding. We are now dealing with existing market power and all the issues that come with it. Key areas of the digital economy like search, e-commerce, and social networks are almost monopolized. We need to start from this reality and make sure we don’t repeat the same mistakes. We must learn from the last few decades. We need to guide these technologies, so they benefit everyone. We must stop harmful practices, and make sure that AI delivers on its promise.

If we want to adapt our enforcement to this new reality properly, we first need to understand the emerging AI ecosystem across various sectors of the economy. That’s why competition authorities are frontloading their learning process, through consultations and outreach activities like our workshop today.

And we have taken a leading role in this. For several months now we have been gathering specific information on the competition dynamics in the AI industry. By opening calls for contributions and by sending requests for information to several players.

Today is another big step forward on our journey to grow our understanding. By bringing people together from different parts of the value chain. To dive deeper into the issues and explore how we should deal with markets that are increasingly shaped by generative AI. Because if you leave things to the market, you’d better make sure that the market works.

Today we have heard about the sources of market power and the dynamics of competition in AI and virtual worlds. We have heard interesting ideas on what competition authorities can do to foster a healthy environment for AI innovation. To keep the markets fair and competitive.

The next question will be what conclusions we should draw from our growing insights. It is already clear that we need to be on our guard. Over market concentration, anti-competitive behaviour, and new types of partnerships.

And as we reflect on sharpening our tools to match new market realities, the basic principles of competition enforcement are still the same.

Because monopolies are monopolies and price fixing is price fixing, whether we’re dealing with car manufacturing, cement production, or machine learning. So, we continue to apply our trusty merger and antitrust rules. Even though we sometimes have to remind AI market players that the competition rules also apply to them.

And our Digital Markets Act applies too: the DMA can also regulate AI even though it is not listed as a core platform service itself. AI is covered where it is embedded in designated core platform services such as search engines, operating systems and social networking services.

So we are applying our rulebook to concerns that we have already in the AI world. We are looking at the issues very closely, from all angles and with all our tools.

A major risk we see is big tech players leveraging their market power across different markets within their ecosystem. Concentration is especially high at the top of the value chain, where large foundation models are trained to be used in various applications. These models need vast amounts of data, computing power, cloud infrastructure, and talent, which only a few players have.

This could lead to practices like tying and bundling by dominant firms, blocking AI competitors from accessing essential resources, and preventing customers from switching. We need to keep a close eye on this. That is why in March we sent formal information requests under our antitrust rules to several big tech players, including Microsoft, Google, Facebook and TikTok. We have reviewed the replies, and are now sending a follow-up request for information on the agreement between Microsoft and OpenAI. To understand whether certain exclusivity clauses could have a negative effect on competitors.

Another risk we see is that big tech companies could make it difficult for smaller foundation model developers to reach end users. Whether alone or in alliances with preferred partners. So we are closely monitoring distribution channels to make sure businesses and consumers still have a wide range of choices among foundation models. This is why we are also sending requests for information to better understand the effects of Google’s arrangement with Samsung to pre-install its small model “Gemini Nano” on certain Samsung devices.

And we have a number of other preliminary antitrust investigations ongoing into various practices in AI-related markets.

In terms of merger control, we are seeing a trend of big companies setting up partnerships with small AI developers. It is becoming a feature of the industry. Actually, these investments are important: they give access to the necessary components and allow AI systems to be developed. So generally, we consider these deals to be pro-competitive. But they can sometimes create entrenched market positions, especially through exclusivity rights. So we need to keep an eye on them to ensure fair play.

This is why, to name just one case, we scrutinised the Microsoft and OpenAI partnership also from a merger control angle.

Microsoft has invested $13 billion in OpenAI over the years. This partnership has been a win-win for both of them: OpenAI uses Microsoft’s vast computing resources to develop its tech, and Microsoft integrates OpenAI’s services into its own products.

But we have to make sure that partnerships like this do not become a disguise for one partner getting a controlling influence over the other. If that were the case, we would need to review it under our merger rules.

In January 2023, Microsoft and OpenAI made a new investment deal, and in November 2023 Sam Altman was fired and re-hired, which led to Microsoft having an observer on the OpenAI board. This made us look into the relationship between the companies. Under the EU Merger Regulation, the key question was whether Microsoft had acquired control on a lasting basis over OpenAI. After a thorough review we concluded that such was not the case.

So we are closing this chapter, but the story is not over. We will keep monitoring the relationships between all the key players in this fast-moving sector, including Microsoft and OpenAI.

We’re also looking into other new developments in AI markets. For example a practice called “acqui-hires,” where one company acquires another mainly for its talent, as we have seen with Microsoft and Inflection. We will make sure these practices don’t slip through our merger control rules if they basically lead to a concentration.

So we continue to enforce our rules in both antitrust and merger control.

And this is a global challenge, so we are working with other competition authorities worldwide. I am glad to say that we are cooperating intensely with many of them on AI and its challenges to competition. We can only benefit from the expertise shared by competition authorities in Europe, across the Channel and across the pond.

First, in our ECN network. We’re thrilled to have friends from the French, Portuguese, and Hungarian competition authorities here today, sharing insights from their market studies.

As we have heard, the Portuguese NCA published an Issues Paper on competition and generative AI in November last year. And the Hungarian NCA has launched a market analysis on the impact of AI on market competition and consumer behaviour. 

The French NCA has also conducted a public consultation into how large technology companies approach AI. Benoît Coeuré told us about their findings in this workshop. And they are taking action: earlier this year they fined Google for “content scraping” from news websites to train its Gemini AI chatbot without permission.

Beyond Europe we also appreciate the efforts of the UK and the US. We work closely together with each of their authorities on both policy and enforcement.

It is important that we are aligning our ideas so that a clear and shared approach emerges sooner rather than later. This will benefit everyone, not least all the players in the AI industry.

Our colleagues at the CMA were the first to publish a Report on AI Foundation Models last September. And they keep updating their findings. In their latest report of April 2024, the CMA raised serious concerns that AI has the potential to drastically change economic landscapes.

The CMA is now planning to use its full range of regulatory powers to ensure a competitive environment that promotes innovation and to prevent the abuse of market power in AI markets. And it will get more enforcement powers once the UK Digital Markets, Competition, and Consumers Act takes effect later in 2024.

Just last week, they opened an investigation into AI-related agreements among several large digital platform firms. And we’re expecting a decision soon on whether they’ll launch a formal investigation into Microsoft’s investment in OpenAI.

And the FTC has been active and vocal on AI too.

In October last year President Biden issued an Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence”. It stresses the need to promote a fair and open AI ecosystem. Including by helping small developers get the technical assistance and resources they need.

In January of this year, the FTC hosted a Tech Summit where Chair Lina Khan announced the launch of a market inquiry into the investments and partnerships between AI developers and major cloud service providers.

On policy we work closely together with the US as part of our Joint Technology Competition Policy Dialogue and the EU-US Trade and Technology Council. To promote fair competition in rapidly evolving markets in the digital sector, including artificial intelligence and cloud.

Overall, it’s clear that competition authorities everywhere are wising up in the brave new AI world. They are looking at ways to ensure fair competition and prevent anticompetitive practices.

And whenever we compare notes, we see similar issues. From the recent analyses and reports and from our discussions today, there are a few critical takeaways. The commercialisation of AI and its powerful tools is going to be led by a few companies that already have a lot of market power. And these firms could leverage powerful network effects to control emerging markets. So we remain vigilant. Not just because of AI’s huge potential to boost growth and competitiveness but also to protect things like freedom of expression and equality.

Today’s workshop has been very useful. Sure, we are only in the foothills of our learning journey. But with every step we take, we get a clearer view of the peak ahead. So together let’s make sure that AI delivers on its promises. For markets, and above all, for people.

Thank you.

Source – EU Commission
