The People’s AI Summit
The conversation around AI safety is urgently needed, but the UK government's AI Safety Summit fails to address the most immediate, dangerous risks of AI. So the Citizens is convening 'The People's AI Summit' with civil society, academics, experts and platform accountability advocates.
Here from an ad that looks like this? We’re calling out PM Sunak and the UK government for letting Twitter, Meta and other tech platforms control him and set the AI agenda. See our solutions below.
AI is the latest buzzword, and the tech industry is all hands on deck, putting big money and capital behind it. Rishi Sunak is all over it. The Big Tech baddies are all over it. Everyone sees an opportunity to profit from the AI frenzy, maintain their tech monopolies and distract us from the harms happening right now.
Next week, PM Sunak will host an “AI Safety Summit” bringing together world leaders to discuss the threat of “Frontier AI”. In reality, this isn’t an “AI safety summit” at all: It’s a gathering dominated by big tech companies warning of the future harms of AI – but the risks of AI are already here. From encoded bias and massive monopolies to dangerous election disinformation happening right now, the AI Safety Summit is not the conversation we need.
So, we want to invite you to join civil society, academics, experts and platform accountability advocates for The People’s Summit for AI Safety – An Urgent Conversation On AI’s Clear and Present Dangers, Monday at 3PM GMT/11AM EDT/8AM PDT. The conversation will feature new data and research on AI’s risks, demands from civil society and an urgent call to put PM Sunak’s “AI Safety Summit” in context.
Speakers are listed below. For media login, please RSVP to email@example.com. Interviews are available now for preview stories.
Wondering how to tune in?
If you are a member of the public…
You are invited to watch the livestream on our YouTube channel. Alternatively, you can tune into the stream on Twitter, via either the Citizens or the Real Facebook Oversight Board. The streams will be pinned to our Twitter channels when we go live on Monday. In the meantime, to keep up with our work and updates, subscribe to our newsletter.
If you are from the press…
As indicated above, please register your interest in joining our livestream by sending us a note at firstname.lastname@example.org. We will be in touch with details on how to log in. Questions are welcome.
STEPHANIE HARE (@hare_brain)
Researcher, broadcaster and author
Stephanie Hare is an outstanding researcher whose work focuses on technology, politics, history, and the world of work. Her latest book, Technology is Not Neutral: A Short Guide to Technology Ethics, exposes some of the growing ethical problems in privacy, ownership of data, information, and more. It was selected by Financial Times readers as one of the best books of 2022.
ALEX WINTER
Actor, Director of The YouTube Effect, leader of the AI writers’ strikes
Alex Winter is a director, writer and actor whose past work has centered on uncovering corruption, the online black market, and most recently, the information harms present on YouTube. Alex also played a leading role in the 2023 Hollywood writers’ strikes, which were organized in part to prevent AI from impinging on the work of artists.
AMBA KAK (@ambaonadventure)
Executive Director, AI Now Institute
Amba Kak is a leading expert in the field of artificial intelligence, with expertise spanning regulation, data privacy, competition, network neutrality and digital copyright. She organized the first global compendium on regulating biometrics and currently serves as the Executive Director of the AI Now Institute.
CARSTEN JUNG (@carsjung)
Senior Economist, Institute for Public Policy Research
Carsten Jung is a Senior Economist at the Institute for Public Policy Research where he specializes in macroeconomics and structural reform. In the past he’s worked extensively with the Bank of England, the University of Bayreuth, and the International Monetary Fund. He has since worked to draw attention to much-needed governance reforms for artificial intelligence.
CORI CRIDER (@cori_crider)
Lawyer, Investigator, Co-Founder, Foxglove
Cori Crider is a prominent lawyer and Co-Founder of Foxglove—an independent non-profit that aims to hold Big Tech accountable for its harms and inequalities. Formerly, she directed the national security team at Reprieve, which worked to defend the rights of marginalized people against powerful governments.
DEB RAJI (@rajiinio)
Computer Scientist, Activist
Deb Raji is a leading computer scientist, activist, and fellow with Mozilla. Her work focuses on algorithmic bias, AI accountability, and algorithmic auditing. Formerly, she was part of Google’s Ethical AI team and a fellow with the Partnership on AI and the AI Now Institute at New York University. She has been recognized by MIT Technology Review and Forbes for her accomplishments as a young innovator.
GALE ANNE HURD (@GunnerGale)
Producer, Founder of Valhalla Entertainment
Gale Anne Hurd is a well-known producer, having worked on such films as The Terminator, Aliens and The Abyss, as well as the series The Walking Dead. Most recently, she has produced The YouTube Effect, a documentary that explores the information harms present on YouTube.
SAFIYA NOBLE (@safiyanoble)
Founder, Center on Race & Digital Justice
Safiya Noble is a renowned expert in algorithmic bias and discrimination at the University of California, Los Angeles. She serves as the co-founder and co-director of the UCLA Center for Critical Internet Inquiry and the founder of the Center on Race and Digital Justice. She is the author of the best-selling book Algorithms of Oppression: How Search Engines Reinforce Racism.
SASHA COSTANZA-CHOCK (@schock)
Head of Research & Sensemaking at One Project
Sasha Costanza-Chock is a researcher and designer who works to support community-led processes that build shared power, dismantle the matrix of domination, and advance ecological survival. They are a nonbinary trans* femme. Sasha is known for their work on networked social movements, transformative media organizing, and design justice. Sasha is presently the Head of Research & Sensemaking at OneProject.org and Associate Professor at Northeastern University’s College of Arts, Media, & Design. Sasha is also a Faculty Associate with the Berkman-Klein Center for Internet & Society at Harvard University and a member of the Steering Committee of the Design Justice Network. They are the author of two books and numerous journal articles, book chapters, and other research publications. Sasha’s latest book, Design Justice: Community-Led Practices to Build the Worlds We Need, was published by the MIT Press in 2020.
Reports & Press Material
Ad Statement: Advocates Target AI Safety Summit w/ Digital Ads Calling Out Sunak’s Fealty to Big Tech Platforms
The Citizens are geotargeting Bletchley Park and attendees at the AI Safety Summit with a digital ad campaign, featuring AI-generated ads calling out PM Sunak for letting Twitter, Meta and tech platforms control him and set the AI agenda. Read the full statement here.
Cronyism: The AI Summit Edition – How the UK’s newly formed Frontier AI Taskforce, driving the AI Summit, is entrenched in Big Tech money and connections [New Investigation]
An investigation by the Citizens into the members making up the Frontier AI Taskforce — the group driving the summit agenda and responsible for the larger AI framework in the UK — reveals a number of troubling links. Our analysis shows the group is riddled with conflicts of interest involving Big Tech and government, and includes several adherents to the controversial ‘effective altruism’ philosophy. Further questions must be asked about why and how public money is being funnelled into private enterprise. Read the report here and the corresponding article in Byline Times here.
Ahead of 2024, artificial intelligence (AI) poses new threats, aiding and abetting the same harms that the Real Facebook Oversight Board (RFOB) has been calling attention to since its inception. We have watched as an unregulated, unchecked landscape has resulted in casualties in the U.S. and around the world.
As democracies face a reckoning in 2024 with elections in more than 70 countries, RFOB believes it is critical that lawmakers and regulators craft and enforce policies to meet the moment and safeguard the rule of law in our digital ecosystems. Read our policy recommendations here.
Artificial intelligence for public value creation: Introducing three policy pillars for the UK AI Summit [IPPR Research]
The eyes of the world will be on this increasingly powerful technology as the UK holds a global AI safety summit in November. However, early signs indicate the discussions will lack ambition.
This paper suggests that, rather than merely focussing on harms, governments should first outline a positive vision for how AI can help create public value. Second, it underlines the need to assess potential structural harms to the economy, such as subtle consumer deception and runaway market dominance by a small number of players that squeezes value-creating innovation by smaller firms in the real economy. Finally, it proposes the establishment of an Advanced AI Monitoring Hub, a technically specialised agency given oversight access to what is deemed ‘systemically important AI infrastructure’. Read the paper here.
People’s AI Series
Experts Speak Out about AI Safety Summit
“By ignoring the urgent harms of AI today, the ‘doomsday’ AI summit is doomed to fail. We face existential threats from AI in real time, with democracy itself at risk in 2024. Real AI safety means regulation now of runaway AI on social media before it’s too late.” – Maria Ressa, Nobel Laureate and member of the Real Facebook Oversight Board
“The AI Safety Summit looks years into the future, while ignoring the harms and damages of AI happening today. We need a real conversation about the harms of AI on democracy, privacy and the economy. And we reject the idea that Big Tech should be given a forum to report to world leaders progress against voluntary commitments. This is not how regulation works – and it’s not how we’ll achieve ‘AI Safety’.” – Clara Maguire, Executive Director of The Citizens
“Governments should stop listening to companies opting for self-regulation and make sure a wider representation of experts and people impacted by AI-driven disruption today are heard. AI is not a topic of the future, but is already causing problems in the present. We need to implement the full spectrum of democratic regulations, add new laws where needed, and ensure independent oversight of both.” – Marietje Schaake, member of the Real Facebook Oversight Board, former MEP and Stanford tech policy expert
“There is not a single regulator at this AI summit. Who is going to ensure this safety? Sunak just saying ‘be safe’? No, it is the job of regulators. Without real enforcers, this is just a talking shop.” – Cori Crider, Director of Foxglove
“Regulators and the public are largely in the dark about how AI is being deployed across the economy. But self-regulation didn’t work for social media companies, it didn’t work for the finance sector, and it won’t work for AI. We need to learn lessons from our past mistakes and create a strong supervisory hub for all things AI, right from the start.” – Carsten Jung, Senior Economist at IPPR
“We have seen, time and again, Big Tech’s failure to reform its activities unless forced through meaningful external oversight and regulation. The EU’s passage and implementation of the Digital Services Act as well as the robust competition policy proposals in the US can be models here – not an ‘IPCC’ style commission that allows AI companies to grade their own homework.” – Zamaan Qureshi, Policy Advisor for The Real Facebook Oversight Board
“We all want to live in a world where technology benefits all of us, and that includes AI – but so far the Government’s approach falls far short of taking us there. Those in power tend to be fixated on speculation about the future of AI, but the reality is AI is already here and is already causing harm. The AI Safety Summit must address the damage being caused by existing tech like facial recognition – and start working towards a rights-respecting approach for the future.” – Sam Grant, Advocacy Director at Liberty
Subscribe to read our journalism
If you’d like to be kept up to date with our investigations, ongoing campaigns and events, please sign up to our mailing list using the form below.