Brussels, 9 April 2024
Introduction
Good morning. It is a true honour to address you here today. A true honour to walk in the steps of some of this world’s most illustrious thinkers. Along Fuld Hall, all the way to this room, where many of them gave lectures that changed the course of time.
Albert Einstein, John von Neumann, Hetty Goldman, among others… And of course, Julius Robert Oppenheimer. The very Oppenheimer who directed this institute, and set history on a new course with his contribution to nuclear science. And, these days, probably the world’s most famous theoretical physicist.
Oppenheimer owes a large part of his revived fame across the world to Christopher Nolan’s amazing talent as a movie director. Seven Oscars are enough to prove it. But I guess the truth is the other way around. Nolan’s movie owes a large part of its success to Oppenheimer himself – his complex, at times puzzling personality.
Julius Robert Oppenheimer was a scientist. But more, he was a humanist. One of the first men of science to have asserted that technology and humanity can only go together. That’s a very modern thought, at a not-so-modern time.
Before many others, Oppenheimer understood that some technologies do not simply add something to the world. They change it completely. I guess this is what my countryman Niels Bohr means when, in the movie, he tells Oppenheimer: “this is not a new weapon, it is a new world”.
The nuclear weapon did not just add to traditional warfare. It completely changed the stakes of traditional war. It triggered the need for international cooperation, because the cost of not cooperating would simply be too big for anyone to contemplate.
Now fast-forward eighty years. Today, digital technologies do not only add new ways for us to learn, to buy, to create, or keep in touch. Digital technologies change the world as we know it. And I see three ways this is happening, in particular:
- First – with the dominance of large digital platforms, technology is challenging democracy.
- Second – with the rise of General Purpose Artificial Intelligence, technology is challenging humanity.
- And third – with the global race for the technologies we need the most, technology is challenging our economic security. And shaping a new geopolitical world order.
Europe is on its way to answering all three of these challenges. Over the last decade, we shaped our own model of digitisation. A model based on a simple idea: tech to serve humans. No matter how fast technology evolves, we must make sure it serves us as humans, as opposed to us serving technology. Today, this model positions Europe at the forefront of the global efforts to govern tech. But we need a much broader range of democratic partners to do this.
This is what I’d like to talk to you about today. But before I do that, just one thing. In Europe, we regulate technology because we want people and businesses to embrace it. We need to bring down the old cliché that regulation goes against innovation. Quite the opposite. Laws exist to mitigate the risks, and open up markets that have been closed down. So we can make full use of technology. So businesses can freely innovate. And so we can mobilise the public and private investments needed to be at the cutting edge, knowing we can trust the technology.
Now this said, let me take you through my three lessons for today’s technology. Starting with online platforms.
1. Platform regulation
Dear friends, “This is not a new weapon, it is a new world”. A world where the fundaments of our democracy are challenged by the rise of large digital platforms.
Democracy thrives in open spaces. In agoras, universities, public houses, parliaments. In places where people can discuss, disagree, contradict each other, find compromise. And do it all over again.
This does not look so different from the infancy of the internet, back in the 1990s. The World Wide Web came with a promise: to be a free place, open to all. A source that would facilitate access to knowledge, increase trust, and challenge the grip of authoritarian states on information. Thirty years of internet later, that promise was never fulfilled. The picture looks radically different.
A few strong players started organising the internet for what would soon become billions of users worldwide. They became global matchmakers between providers and consumers of all kinds. Step by step, they took more space in our lives. With this space came power. And with power came control.
In just a few years, we saw open conversations closing down into a constellation of private chats. Our messaging groups. Our social media feeds. We saw content pushed to millions by sophisticated algorithms – not because it is true or relevant, but because it is emotionally viral.
Think of all the risks that this entails for our democracy. The risk of seeing our public debate privatised into scattered, polarised spaces. With no contradiction, no fact-checking. And the risk that we end up thinking the world is as large as what we see online. Or as small, rather. The risk not just that we can no longer share the facts, but that we lose our trust in everything and everyone who is not “one of us”.
We couldn’t let a few large platforms threaten democracy at large. So, we took action. We gave back to people part of the control they had lost to platforms, one initiative at a time.
Starting with the General Data Protection Regulation, which entered into force in 2018. It enacted the right to privacy.
We took this work further a few years later, with our Declaration on Digital Rights and Principles, which was proclaimed in 2022. We set out to make sure that, just as we’ve had rights for decades in the physical world, we should now have equivalent rights in the digital world. The right to be treated equally and fairly. The right to have access to reliable information, to freely express ourselves. And to have safe online journeys.
But giving rights to users is not enough. We must also give them the means to exercise these rights. That’s the purpose of the Digital Services Act, Europe’s first flagship law regulating online content.
Because we cannot accept being faced with insane amounts of illegal posts when scrolling on social media. We cannot accept that platforms become a place where abusers meet children. We cannot accept that harmful videos damage young women’s mental health. And we cannot accept that platforms’ algorithms are used by foreign adversaries to jeopardise our democracy, through disinformation campaigns. Never. But especially not in the midst of an election.
So we made platforms legally responsible for the content they help spread. Under the Digital Services Act, they must take down illegal content, and moderate harmful content – while preserving users’ freedom of speech. They must share how their recommender systems select the content that we see. They must declare political advertising. They must protect minors from harm. And a lot more.
Don’t get me wrong. At no point, and by no means, are we censoring users. It is quite the opposite. We are protecting users against illegal content. Content that our democracy, with all its legitimacy, has decided to deem illegal. So that users can feel confident enough to express themselves. With respect for others.
Dear friends, as I said, democracy thrives in open spaces. This means places where we are free to discuss. And free to choose. This leads me to another, this time economic, component of our democracy. I’m talking about vibrant, open and fair marketplaces. Because with every choice we make as consumers, we have a say. And being respected as a consumer in the marketplace rubs off on democracy.
Years of antitrust enforcement – including not one, not two, but three Google cases and another one ongoing, a Facebook case, a couple of Amazon cases, an Apple case and more – all this taught us that the same causes create the same effects. Whether offline or online, fear and greed lead big and powerful companies to want more power. With time, large digital platforms have outgrown their role as simple matchmakers connecting supply and demand. They started providing their own services, on the platforms that they control. They became the player and the referee.
Practices like self-preferencing started to lock users inside specific ecosystems. You search on Google for a flight ticket from Brussels to New York? The first thing you’ll see are options from Google Flights. You look for where to buy a new kitchen robot? Immediately, you’ll be served a list of preferred shops, put together by Google Shopping.
Progressively, a handful of players closed the gates of the digital marketplace. And made it the realm of the big few. This took away the chance for smaller players to make it. It also deprived users of their ability to choose among options. It drove prices up, and choice and innovation down.
So the Digital Markets Act came in, with its list of Dos and Don’ts. It forces dominant players to open the gates of the market. For instance, all six platforms designated under the DMA must adjust their search algorithms, so that rival offers receive the same prominence as their own.
The DMA came into force on 7 March. A few days later, we launched investigations into suspected non-compliance by three designated gatekeepers: Alphabet, Apple and Meta. We also launched investigations under the DSA. A few weeks ago, the European Commission opened a procedure to assess whether TikTok may be failing to protect minors. If proven that platforms failed to meet their obligations, sanctions will hit.
Rules are in place. We must now apply them. Because with legislation you change perception, but with implementation, you change behaviour.
Europe is not alone in this endeavour. All around the world, democracies have woken up to the urgent need for the many to regain control from the few. Here in this country too, I feel perception has changed drastically.
Three years ago, before we started the EU-U.S. Trade and Technology Council, I had to constantly contend with unfounded criticism that this was all about targeting big American tech firms. Last week at the Trade and Technology Council, we adopted a statement that shows how deeply aligned Europe and the US have become, on recognising the harms of platforms. And their responsibility to address them.
I am aware of the challenges of passing legislation in this country. But why should one accept that these platforms have to protect minors in Europe, but not here? Why should one accept that European researchers can access data from these platforms, but not other researchers?
And while this work is in motion, there is one thing I must be open about. I think we probably came to this problem too late. Today we’re doubling efforts to catch up with lost time, trying to reverse harms that have become entrenched. So there is one space where we don’t want to make the same mistake again, and that is of course artificial intelligence. This time, we acted early on.
2. Artificial Intelligence
Which leads me to our second lesson inspired by Oppenheimer today. “This isn’t a new weapon, it is a new world”. Nothing could resonate better with artificial intelligence. And for good reason. Probably never in history have we been confronted with a technology that has this much power, but no predefined purpose. Neither good, nor bad in itself. It all depends on how we, humans, shape it and use it.
Just like in Oppenheimer’s time, we are faced with what AI researchers call the “alignment problem”: when technology has the power to both serve and destroy us, how do we channel its development? How do we ensure this technology reflects the societies that we want to have, instead of amplifying the flaws, and injustices, of the ones we already have?
Dear friends, it took ChatGPT just 60 days to reach 100 million users. Instagram needed more than two years to reach that number. The rise of General Purpose AI models brought new benefits that we simply could not imagine. But it also brought a new consciousness of its possible risks.
Since then, much debate has arisen on whether this technology can bring existential risks to humanity. And this is certainly a debate worth having. But it should not distract us from the fact that the risks that we experience today – right now – are also existential. They are existential to humans.
Whether or not you can get a mortgage, access university, get a medical treatment that is adequate to you. These are existential decisions. And if AI is to play a part in those decisions, then we need to make sure it recognises us for who we are. As individuals, without the biases and prejudice that we have accumulated as humans.
In authoritarian parts of the world, we have seen AI developed and used for mass surveillance, for social credit systems, to oppress minorities, to censor and control information. We face the advent of AI-powered predictive policing. Humans targeted not for what they have done, but for what an algorithm considers they are likely to do.
Getting control of these existential risks is a priority for us humans. And the only way to make sure this technology delivers on its enormous promise. This is what we need to do, if we are to live up to the legacy of Doctor Oppenheimer.
I believe it requires three things.
First, we must put our own houses in order.
This is what led us in Brussels, in 2021, to draw the outline of what would become the first ever worldwide law on artificial intelligence. It was passed by the European Parliament one month ago. Here in the United States, thanks to your leadership, Doctor Nelson, the White House adopted its blueprint for an AI Bill of Rights in 2022. Last October the President signed his Executive Order on Safe, Secure and Trustworthy AI.
This is only the beginning of the journey. As we have just discussed, rights are only as valuable as our capacity to enforce them. So we must move quickly in developing the standards, methodologies and benchmarks, that will allow us to ensure AI is shaped and used safely.
Second, we need a shared democratic model of AI development.
AI is bigger than any of us. And as we move forward, we need to foster an international convergence for guardrails on AI. It is natural to start with those with whom we share the closest values. And this is what we achieved last October, thanks to the Japanese leadership of the Hiroshima Process. A befitting name that again reminds us of the man behind this lecture. The G7 code of conduct on AI is the most advanced product of international collaboration so far. It needs to be swiftly put in motion through a common monitoring and evaluation system.
Still, this is not enough. This is my third point: we need to invest in universal governance of AI. The only real guarantee of safe AI will take “everyone or no one”. At the end of the day, we will need to find ways for the whole world to come on board. Including those we fundamentally disagree with. Just as, in the midst of the Cold War, the world could come together to set limits on nuclear proliferation, it is not unthinkable that this can happen now. On 21 March, the United Nations General Assembly adopted its first resolution on AI. And later this year, the UN Secretary-General’s High-level Advisory Body on AI will release its recommendations. That will be the starting point for further discussion.
We know this will not be an easy task. Because these are times of systemic rivalry. The world is thrown into a fierce global tech race. A complex web, where commercial interests intertwine with ideological beliefs, and security concerns.
3. Economic security
Which leads me to the third and last Oppenheimer lesson. “This isn’t a new weapon, it is a new world”. This isn’t just about technologies. It is about how the race for the technologies we need the most, is changing the global economy.
Let’s take a step back. Beyond the digital transition, we’re undergoing another technological revolution. The stakes are just as existential, and this time, it is about our planet. Clean tech will drive our shift away from fossil fuels, and enable us to fight climate change.
One thing is clear: in a world powered by technology, those who lead are those who control the most critical technologies, and their supply chains. Chips, batteries, electric cars: our competitiveness will necessarily depend on our capacity to produce and deploy them.
Both the Covid pandemic and Russia’s brutal war in Ukraine have shown us our vulnerabilities. Europe and the US, each in their own way, depend on third countries for critical technologies, and the raw materials needed to produce them. And in this area, China has built up a strong position, not always playing fair.
China is for us simultaneously a partner, an economic competitor, and a systemic rival. And the last two dimensions are increasingly converging.
We saw the playbook for how China came to dominate the solar panel industry. First, attracting foreign investment into its large domestic market, usually requiring joint ventures. Second, acquiring the technology, and not always above board. Third, granting massive subsidies for domestic suppliers, while simultaneously and progressively closing the domestic market to foreign businesses. And fourth, exporting excess capacity to the rest of the world at low prices.
The result is that nowadays, less than 3% of the solar panels installed in the EU are produced in Europe. We see this playbook now deployed across all clean tech areas, legacy semiconductors, and beyond – as China doubles down on a supply-side support strategy, to address its economic downturn.
Our economies cannot absorb this. It is not only dangerous for our competitiveness. It also jeopardises our economic security. We have seen how one-sided dependencies can be used against us. And this is why Europe, just as the U.S., is reacting.
In October last year, the European Commission launched an anti-subsidy investigation into the imports of electric vehicles from China. If we determine that those electric cars have been illegally subsidised, we will impose remedies.
In the last few weeks, we have also launched investigations under our Foreign Subsidies Regulation. Every time we suspect that any foreign company has been unduly advantaged in a public tender, we dig further. We have investigated suspicious bids in a public tender for trains, in Bulgaria, resulting in a Chinese state-owned company withdrawing its bid. Just last week, we opened investigations into bids by Chinese companies that may have been unduly advantaged in a public tender for solar panels, in Romania.
Furthermore, I can announce that today, we are launching a new inquiry into Chinese suppliers of wind turbines. We are investigating the conditions for the development of wind parks in Spain, Greece, France, Romania and Bulgaria.
As you can hear, we’re making full use of the tools that we have. But I can’t help feeling that this is also playing whack-a-mole. We need more than a case-by-case approach. We need a systematic approach. And we need it before it is too late. We can’t afford to see what happened on solar panels happening again on electric vehicles, wind or essential chips.
Let me be clear: the investments that we put in our supply chains, the investigations that we run, or the new tools we have developed – those are not meant to constrain China’s success. They’re meant to restore fairness in our economic relations. Everyone is welcome to be successful. Everyone is welcome to trade with Europe. But they have to play by the rules.
As we further develop the strategy for clean technologies, we must reflect on the question of trustworthiness. These products become connected. And more and more, they are an essential part of our critical energy and transport infrastructure. So, we must make sure that we can trust them, and that they uphold our values.
I believe that like-minded partners, starting with the G7 countries, should develop a list of trustworthiness criteria for critical clean technologies. These can include environmental footprint, labour rights, cybersecurity and data security. Criteria that are objective and country-agnostic. These criteria could be deployed in different ways. As conditions for certain incentives. As grounds for certification before a product is used in certain sensitive areas. Or as non-price criteria in public procurement auctions, for instance. Those criteria would be developed among like-minded partners. But they would apply to all trustworthy producers across the world. That way, we can reach critical mass, and align our competitiveness with the values that we share.
This is what the Inflation Reduction Act missed, in my opinion. By tying the criteria to local production, instead of trustworthiness, the U.S. limited the potential scale for western producers. And it forced us to react by enabling matching subsidies. Which means that, essentially, each of us is using taxpayers’ money to attract or retain projects from each other. Instead of using it to give our companies an innovative or competitive edge in this global race.
As both the EU and US enter electoral periods, we see the uncertainty ahead. But one thing remains certain: in this geopolitical race for technology, having partners you can trust is a competitive advantage.
Conclusion
Dear friends, when Oppenheimer coordinated the creation of the Los Alamos facility, he brought together some of the world’s best scientists. But not only. He also brought a poet, a painter, and a philosopher. He further expanded this model here, at the Institute for Advanced Study.
First, because crossing disciplines improves chances of success. The excellence and the success of this place prove it a million times. But even more so, because Oppenheimer knew that we cannot dissociate technology from the world it was born in. The scientific discovery, from the impact it may have on humanity. How technology is made, from how it should be used.
And this is maybe where our role as policymakers kicks in. Scientists developed nuclear energy. But politics decided on its use. A weapon, or a source of energy.
After the war, around 1955, politics created the International Atomic Energy Agency, to promote the safe, secure and peaceful use of nuclear technologies. Which then created the conditions for the nuclear Non-Proliferation Treaty.
When it comes to digital, this is our 1955. And the policy choices we make today will shape how technology develops and how it is used, for decades to come.
“This isn’t a new weapon, it is a new world”. Indeed, just like nuclear decades ago, technology has the power to unfold a completely new world. But it is up to this world to decide what to make of technology. A weapon to diminish our humanity, and turn us against each other. Or, as I as a realist tend to see it, a formidable source of human progress.
Thank you.
Source – EU Commission