
San Francisco, 31 May 2024

“Check against delivery”

Thank you for bringing the spotlight onto a very important intersection of AI and competition. There is overwhelming agreement – AI is a technology that can bring many revolutionary changes. To paraphrase a line from the recent film Oppenheimer: “This isn’t a new technology. It’s a new world.”

It is one of the most powerful technologies ever invented. And AI has the potential to transform everything – not only competition policy.

Today, I would like to talk to you about two challenges of AI, and the EU’s approach to them.

First, AI’s challenges to competition – to stay faithful to the theme chosen by our hosts.

Second, I would like to zoom out on the broader challenges of AI for democracy. This is at the heart of my visit to California as part of what I call a democracy tour. In Europe we are working hard to find answers to the challenges brought by digitalisation, and to shape our own approach to them.

We start from a simple idea: technology is there to serve humans. Not the other way around. We are not mere data fields for tech companies to harvest, and then to make decisions for us or make money on our thoughts and fears. This is why we have chosen a European model of regulating technologies, to make sure they respect human rights.

And I want to tackle head on one scepticism that I have heard from many corners in the US – namely, that regulation stifles innovation. In fact, the two can work hand in hand.

We design laws to address risks for people, or to open markets that have been sealed off by those who have become too big to compete against. This builds consumer trust, and fosters innovation through competition and predictability. And where there is trust and healthy competition, there is financing from both public and private sources.

Challenges to competition

A debate like today’s is very timely, because I believe we still have a narrow window of opportunity to set the rules that will allow competition to thrive in the world of AI.

In both the EU and the US we are facing very similar problems. To deal with the challenges of the future, it is good to understand the past, so we know where we are coming from. I am thinking of the digital disruption of the 2000s.

Social media, digital marketplaces, online payments – this was all very exciting. And it was an age full of pioneering innovators, starting in garages or universities, or while dropping out of them. Those innovators could not foresee or grasp the scale of impact their revolution would bring.

Today, with the AI revolution, some of those things sound familiar, but I see differences too.

First, this revolution is led in large part by incumbents – powerful companies with stakes in many markets.

Second, our awareness – virtually no one doubts that AI will affect every aspect of our work and lives.

And third – the technology itself. AI, and large language models in particular, have huge entry requirements. They need unprecedented amounts of data and storage, among other things. The more, the better.

In a market where you need an entire ecosystem to thrive, it is hard to imagine a kid with a vision challenging Microsoft and OpenAI, Meta’s Llama 2, or Google’s DeepMind.

And as many of you in the audience today are competition experts, I am sure you can immediately see barriers to entry everywhere.

How to answer these challenges?

First, with human intelligence. And events like this. We have to figure it out, and I am sure we will.

Second, by working together. This is why I appreciate the meeting of minds with Governor Newsom. Europe and the US must continue to work closely together on these issues to achieve a greater impact.

Third, by thinking outside of silos. Competition policy has to work together with digital regulation that sets guardrails. In Europe this means, for instance, the Digital Markets Act.

Challenges to democracy

Let me move now to the challenges of AI for democracy. Democracy is an open debate, where we can argue and disagree, seek compromise, and then argue again.

We have seen how the digital revolution has pushed us further into bubbles, or digital ‘rabbit holes’. We are still learning about the impact of social media on our children and their mental health, but I believe this could be the main challenge for us in the years to come.

We are seeing in Europe how Russia, but also China and other actors, use digital means to spread disinformation and conduct foreign interference.

And this is a security risk to Europe, and to the US too. Russia is fighting in Ukraine with bombs, and with disinformation all over the world, including here in the US.

With AI, they are simply gaining a powerful new tool to deploy old tactics. Why are they doing this? Because the essence of democracy is trust. Trust in one another, trust in democratic institutions and the media. Without trust, democracy crumbles.

Now, with AI, we can hardly trust what we see or hear any more. Deep fakes – synthetic images and videos, not to mention texts – are fast becoming indistinguishable to the human eye. This can be manipulation on steroids.

This is also why I am on a democracy tour ahead of the elections in Europe. I want to understand how well EU countries, authorities and experts are prepared for digital threats to democracy and security.

And this tour brings me to California, because big tech has a tremendous role to play as well. What is done here has a big impact in Europe, and in the rest of the world.

In short, I expect the industry to invest in technology that allows AI-generated content to be detected and labelled. I expect them to shift up a gear and invest in analysing the actors, behaviour and content behind disinformation and foreign interference.

They have stepped up their game, but there are still many gaps, including an over-reliance on AI to moderate content, or too big a focus on digital tricks like fake accounts and fake likes.

And instead of expanding their collaboration with experts and fact-checkers in smaller countries and different languages, I read in the news about lay-offs in trust and safety teams.

Finally, AI will pose further challenges to journalists and editorial content. This is an issue that I discuss regularly with the media sector, most recently this Wednesday with representatives of the Los Angeles Times. Independent media play a key role in democracy, bringing facts so that people can make informed choices – and votes.

This is even more crucial in these times of broader information disorder. So we, policymakers, have to work on solutions, starting by ensuring the protection of copyright – we have specific provisions on this in the European AI Act, which I will talk about in a moment.

We also need to fight those using AI to create fake news websites that look real, as in the now infamous Kremlin-sponsored Doppelganger case.

I am not here to blame social media for all the evil in this world. In a democracy, we all have our role to play. And so far, the best vaccine against the virus of disinformation is societal resilience. Simply put, we have to become better at distinguishing facts from fiction.

But the tech industry also has a duty to uphold democratic principles, here in the US and in the EU.

One conclusion from my 10 years of dealing with tech companies is that it took them time to realise that. This is why we helped them with a mix of regulatory and non-regulatory tools, such as the Digital Services Act and the anti-disinformation Code. With AI, we cannot afford to wait so long.

So, in the EU we proposed the AI Act – the first comprehensive legislation on AI, including generative AI.

It will enter into force in July. It not only fosters trust; it is also good for innovation.

First, because of scale. It means one set of rules across all 27 EU Member States.

Second, it gives the providers of AI systems a stable and predictable operating environment.

And third, the AI Act encourages companies to develop new products easily.

Research and development activities, for instance, do not fall under its scope. And it provides opportunities for real-world testing and regulatory sandboxes.

The AI Act is risk-based and requires companies to test risky products. And the incidents with TikTok Lite or the recent launch of Google’s search product show that this is not only good for consumers, but good for companies as well.

Ladies and gentlemen,

A debate like the one you are having today is not only needed, it is urgent. I don’t have all the answers. Nobody does. This is why we need collective efforts to ensure a competitive and innovative market, and to preserve democracy and, ultimately, our values.

We still have a chance to shape the AI revolution, so that it becomes a force for positive change, rather than another risk that brings us closer to the dystopian reality captured in the TV show ‘Black Mirror’.

Thank you for your attention. I wish you a fruitful debate.

Source – EU Commission

 
