In A Byte: The Big Tech Lobby Watering Down the EU AI Act
Our journalist Manasa Narayanan in conversation with Bram Vranken, researcher and campaigner with Corporate Europe Observatory, a group working to expose the lobbying efforts in EU policy making.
They discuss Corporate Europe Observatory’s new report, which exposes how Big Tech has undermined the EU AI Act through its lobbying efforts. The EU’s AI Act is a new piece of legislation, currently in the works, that was brought forward to “make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.” But in the more than two years since it was proposed, the policymakers working on it have come under intense lobbying pressure from Big Tech trying to water down the regulations placed on AI systems, specifically on foundation models like ChatGPT. Read the report here.
An edited transcript of their conversation is below:
– – –
Hello everyone, welcome to In A Byte — brought to you by the Citizens — where we explore all things data, democracy and disinformation, in bite-sized explainers; broken down and simplified just for you.
Today I’m joined by Bram Vranken, who is a researcher and campaigner with Corporate Europe Observatory, a group working to expose the lobbying efforts in EU policy making.
Recently, they released an extensive research report on how Big Tech giants have been undermining the EU AI Act. In short, the EU’s AI Act is a legislative effort to rein in developments in artificial intelligence. But while Big Tech bros have been going around making headlines about the need to regulate AI, behind the scenes they have also been putting all their might into influencing the Act in order to water down the legislation.
Let’s hear more from Bram about the report and its findings to see what Big Tech has actually been up to. Hi Bram, thanks for joining me today.
I think let’s start with the basics. Can you give us a quick gist of the EU AI Act: what stage is it at, and what does it really aim to do?
Yeah, so at the moment the AI Act is really in the last stage of decision making, which is the EU trilogues. That is the stage where the different institutions, the Parliament, the member states and the EU Commission, try to find a compromise between the different positions they’ve taken and then come to a final text.
But of course the AI Act has a very long history, which goes back to 2019, when the Commission started considering: okay, there is this new technology and we need to regulate it because there are certain risks involved. And that first step was already quite problematic, because the Commission convened an expert group in which industry was heavily dominant. So there was a lot of criticism from within that expert group itself. Several members were very critical of the work being done, because there was a very strong focus on self-assessments and voluntary commitments pushed by industry. Industry members of the group also pushed back heavily against any red lines on certain technologies.
Then it moved on to the Commission, and they came up with a proposal, and the proposal was what is called risk-based. That means that some uses of AI will be prohibited. So for example, social scoring will be prohibited; it’s not allowed. But that’s really a small number of uses that is completely off the table. And then the next category is the high-risk category. That is, for example, when AI is being used for purposes of employment, because we know there is a lot of risk involved when you use an AI system to decide who should be hired or to select candidates. The AI might be biased; for example, women or people of colour might be discriminated against. So that’s a high-risk category. And in such a case, there are certain safety requirements to make sure that if an AI system is used in such contexts, it doesn’t discriminate.
But in the middle of this whole process, suddenly there was the launch of ChatGPT. And ChatGPT, there are so many names for it; it’s been called general purpose AI, or a foundation model. Actually, I don’t know what to call it anymore, but it’s just a system which can be used for a lot of purposes, ranging from low-risk to high-risk, and other applications will be built on top of this model. Which is a problem, because the AI Act had as its premise that we will look at the use of a specific system developed for a specific context, and now suddenly there was an AI system which could be used in so many contexts.
So immediately there was strong pushback from Big Tech on regulation of foundation models, of general purpose AI.
Let’s explore this a bit. Your report highlights that Big Tech has been engaging in intense lobbying efforts to undermine regulation of what they’re calling foundation models, like ChatGPT, as you’ve pointed out. And because of this risk-based system that’s been created, and because these models don’t fall into one category as such, Big Tech has pushed lawmakers not to categorise them as high-risk. So in the end, what happened was, instead of saying general purpose AI, they went for the terminology of foundation models, and that still isn’t necessarily subject to regulation?
So the European Parliament indeed dropped general purpose AI from the high-risk category, because that was initially on the table. But then it came up with a new category, which they called foundation models. And what the European Parliament wanted to do was impose some due diligence measures on Big Tech, on the developers of foundation models. Which is: you’re using all this data from the Internet to power your systems, so at least do a quality check of the data, so the AI system is not biased or doesn’t discriminate. Do a fundamental rights assessment if you develop such a system. So that’s, I would say, quite basic. It’s transparency measures. It’s: before you bring something to market, you need to figure out what the risks are and mitigate those risks. So I wouldn’t say these requirements are very heavy.
As your report highlights, this has been linked to Big Tech’s lobbying spend, and to how, at crucial stages when this was being discussed, that spend went up. Could you give us the headlines from your report on lobbying spend, and on how Big Tech specifically lobbied the EU Parliament to water down the legislation?
So the tech sector as a whole now spends around €113 million a year, and that’s up from €97 million two years ago, which is I think a 16% increase. So that’s quite a lot in just two years’ time. And the €97 million figure was already really high. Big Tech’s lobby spending is a big chunk of that number: for the big five, for Google, Amazon, Apple, Facebook, Microsoft, it’s close to €30 million. So they have massive resources. And I think what we have been able to show through our research is that those resources have only been going up. If you compare to ten years ago, these companies were still spending substantial sums of money, but it was more in the category of €400,000 or €500,000. And now, for example, Facebook is the biggest company in terms of lobby spending in the EU. So these numbers have gone up really rapidly, and to an extent we have rarely seen before.
And that is also reflected in the access they have to the institutions. For example, we looked at meetings with members of Parliament in the period 2021-2022, and 56% of those meetings were with industry. That’s already more than half, and it’s a lot, because the Parliament is still the most accessible institution for NGOs, citizens and other stakeholders. But we saw that in 2023, that number went even further up, to 66%. So that shows that Big Tech was able to massively ramp up its lobbying, way more than other actors or stakeholders were able to do, and in this way was able to capture policymaking. And I would say that the Parliament, although it gave in on certain crucial points, still pushed through with regulating foundation models despite the heavy lobbying presence.
But, as you’re saying, it’s not as strict as it could have been?
Exactly. And I think what the Parliament really did wrong was that they weakened the high-risk category massively. And I think that’s already a big loss.
Like you’ve pointed out, in the EU lawmaking process there are really three bodies: the Council, the Parliament and the Commission. We spoke about the Parliament, how Big Tech had great access to people in the Parliament and how there was great lobbying spend behind it. But their efforts did not stop there, because your report also points out how they had access to the EU Commission as well. In fact, according to your report, all of the top five lobbyists of the EU Commission are Big Tech companies, right?
Yeah. So what we see in the numbers is that once the European Parliament put its proposals on the table, attention from industry really shifted, and we can see a spike in the number of meetings starting from, I think, May onwards. The number of meetings on AI with high-level EU officials went up massively, and that push was really driven by industry lobbying.
So for example, we see the CEO of OpenAI coming to Europe. We see the CEOs of Google and Microsoft all coming to Europe, having these very high-level meetings with the EU commissioners, with Macron, with chancellors, with the Prime Minister of Spain. If you look at the meetings the Commission had, they were to a large extent with Big Tech: 86% of the EU Commission’s meetings were with industry. So that’s a very captured process. There were almost no NGOs having meetings with high-level EU officials, which is really problematic on such an important piece of legislation.
As you’ve said, OpenAI’s CEO Sam Altman had access to the Commission, and he also kind of threatened at one point that they would stop operating in the EU, and then retracted it…
I think he was being very ambiguous about it, threatening but not quite threatening. I don’t know, it’s difficult to know somebody’s intention, but to me it seemed intentional.
And even Google’s CEO… I mean, there has been a general trend of them pushing for voluntary commitments as opposed to what can be enforced by law, right?
Yeah, yeah.
So what now? Right now we are at a stage where the Council, the Parliament and the Commission have to come to some sort of consensus together and produce the final Act. What do you think is going to happen? And is there any hope of strengthening some of the measures, or is it already too late?
So it’s become a complete train wreck. Initially, everybody thought agreement was within reach and that it was not going to be a tough trilogue process. And in the end, it’s astonishing what has happened, because in October everybody thought that within a few months we would have an AI Act, and then suddenly you have France, Germany and Italy changing their position.
There was a compromise in the making on regulating foundation models, which had been watered down in the Parliament. But still, there were going to be some kind of requirements. And then these three member states completely changed course and aligned with Big Tech’s position, saying: no, we do not want any requirements on foundation models, none at all.
And the three member states have recently released a paper basically pushing for voluntary commitments. So in a way, things have come full circle. The Commission set out with an expert group in which industry members were calling for voluntary commitments, and now, at the end of the process, so many years later, the risk is that we will end up with voluntary commitments again.
Some people have been saying that the AI Act is really in danger now and that it might not even happen. It’s difficult to say. I think some kind of compromise will be reached, but it will be a very watered-down text.
Is there anything I have missed that you would like to add?
We mainly focused on Big Tech, which I think is good. But I think what really changed matters was that some European companies got involved. There is Mistral AI in France and Aleph Alpha in Germany. These are really small companies, but they have been very active, and really very effective, in pushing their member states, France and Germany, to block the negotiations.
Mistral AI is not a big company; it’s a few dozen people. But they have the ambition to build a European alternative to OpenAI, and for some reason they have really struck a chord with Macron. And in Germany, it’s the Minister of Economy, Habeck, pushing back against regulating foundation models, under the whole premise that you are going to kill European companies, that you are going to kill European innovation. But what they’re actually doing is giving Big Tech a free pass. What they’re actually going to do is put no obligations on the Big Tech companies, which are already developing these systems and already bringing them to market, in what is likely to become a very monopolised market dominated by the same companies as we have seen before.