Newsletter Issue #4 - 14th Mar 2025
Democratic AI 2.0, the no-real-option AI & copyright debate, stop CSAM AI models, and more.
The Political Technologist (TPT) newsletter brings you the latest news, events, and opportunities in the field of political technology. TPT is run by a group of mid-career technologists searching for groundbreaking projects to affect meaningful change in the UK civic landscape. Want to contribute, or to have your news or projects featured? Reach out to us at thepoliticaltechnologist[at]proton[dot]me.
Editor’s welcome
Dear readers,
TGIF! We sincerely hope everyone is holding up well and not catching a cold, given that the weather god has decided to trick us with a fake spring tease before hitting London with a drop in temperature and icy rain.
The field of political technology is just as fickle as the weather: Rupert Lowe v Nigel Farage hit a new low, major media outlets stopped being sworn enemies for one day, and we finally decided to run a second democratic AI workshop.
One might argue that the last item is not in the same league as the others, but we beg to differ. Don’t miss out on the upcoming events, opportunities (scroll down to make money), and much more!
Best,
Yung-Hsuan on behalf of the editorial board
In the spotlight
Discussing complex issues in large groups is often messy and inefficient—voices get lost, biases emerge, and key points are missed. What if AI could help us have better democratic conversations while keeping humans at the center?
After our successful first event showcasing Google DeepMind's Habermas Machine, The Political Technologist returns with an expanded workshop exploring three powerful democratic AI tools designed to assist—not replace—human deliberation.
Join us at Newspeak House on Saturday, March 15 from 1:30pm to 4:30pm!
In this event, you'll take part in a deliberation exercise built around three tools:
Polis: Discover how this algorithmic platform effectively maps diverse opinion landscapes and ensures all voices are heard
Dembrane: Participate in AI-assisted collective sensemaking that captures insights automatically—no notetaking required!
Harmonica: Experience how AI can help your group find consensus through decentralized conversation
What You'll Take Away
Practical knowledge of three tools you can immediately implement in your organization/group
Techniques for more inclusive and efficient collective decision-making
Connections with others passionate about improving democratic processes
Ideas for applying these tools to policy development, community engagement, and organizational challenges
We sincerely invite our readers to partake in this workshop, even more so if you are a government official, policymaker, campaign organizer, community leader, or citizen assembly facilitator.
No technical knowledge required—just bring your smartphone!
On everyone’s lips
In this section, policy researcher Claddagh and AI auditor Yung-Hsuan bring you bite-sized chunks (no full meals!) of our choice of gossip from the world of politics and political technology right now. This week:
AI vs regulation, Reform LTD vs Rupert Lowe, Lee Anderson vs everyone
It’s been a big week for minor scraps in Westminster, as being at each other's throats seems to have become the place to be for SW1ers. Secretary of State for Science and Technology, Peter Kyle, kicked off the week at the Tech UK conference, studiously avoiding the special-relationship-shaped elephant in the room and instead delivering a pro-business, anti-regulation sermon on AI and innovation (using everyone's favourite bizarre banana analogy). The tech bros (and gals) in the Telford Theatre lapped it up. Back in Westminster, where backbench Labour MPs are becoming increasingly concerned by the government’s approach to tech regulation, it went down like a lead balloon. One MP has dubbed the government’s upcoming AI and copyright decisions a “final litmus test”—the last chance to convince nervous backbenchers that the party’s approach to tech regulation hasn’t been entirely captured by Faculty AI employees, the very ones who spent the pre-election period embedded in certain MPs’ offices.
Meanwhile, over at Reform UK Ltd, the latest punch-up has left Rupert Lowe MP out on his arse, suspended just days after publicly questioning whether “messiah-like” Nigel Farage is PM material. The party says it was forced to refer allegations of his harassment and ill-treatment of staff to the police, after Lowe refused to cooperate with an internal probe. Lowe, never one to take a hint, has gone full nuclear, accusing party chairman Zia Yusuf of orchestrating a stitch-up in a flurry of interviews and tweets. But the real subplot for political technologists? Elon Musk. This power struggle can be traced back to Musk’s endorsement of Lowe over Farage in January, after Farage dared to criticise Musk for backing beleaguered EDL racist Tommy Robinson (whose legal fees Musk is now reportedly paying). Now, the rumour mill is in overdrive, with Musk allegedly eyeing up funding for Lowe’s breakaway party, with figures as high as £100 million being bandied about. With four years between now and our next general election, could the tech-bro-backed White House horror show across the water offer us a glimpse into our own future? Or could it just be that egotistical infighting has led the hard right’s most successful political project of the day to implode, once again, only to be replaced by another gang of “civilisation savers” in a few months’ time? (Given that only 14% of the British public can correctly identify Rupert Lowe from a photo…)
With all this YouTube livestream sniping and gossip-driven backstabbing, we’ve started to miss the days when Westminster's brawls had a bit more meat on their bones. Luckily, at least one MP remembers and respects tradition: enter Lee Anderson, a man who never met a fight he didn’t like. While his party collapses around him, he’s decided to get back to basics, tangling with Westminster’s favourite human megaphone, Brexit protester Steve Bray.
AI & Copyright: the battle continues
The rapid rise of Generative AI presents a major challenge for creative workers, whose work is increasingly used to train AI models—often without their knowledge or compensation. In response to these challenges, the UK government is undertaking an AI and copyright consultation, to seek input from creatives, rights holders, and developers on their proposed legislative reforms.
From the perspective of developers, the debate feels pointless and frustrating—UK copyright law doesn’t extend beyond Britain’s borders, which means UK data is already being used in other jurisdictions to train models; sometimes with the explicit support of the jurisdiction’s legal framework, and at other times through existing fair use provisions. For them, tightening copyright law won’t help creatives—it will just mean that the UK remains a less competitive place for developers as a whole.
Speaking to creatives, however, paints a very different picture. In a series of roundtables and interviews, creatives from across the arts sector shared horror stories of their voice, image, and work being sold to AI developers without their knowledge and consent, and in some cases, of jobs already lost as a direct result (one voice actor heard her own voice on the radio for a job she had never done; her former employer, it turned out, had begun using an AI voice model to repurpose her old recordings). These aren’t really new stories about the impact of Generative AI on the creative industries, though. Rather, they are another compounding effect experienced by people in an industry that has been premised on unequal power relations and exploitative contracts for decades.
The fears creatives express about these new government proposals are an extension of existing ones. These workers are overwhelmingly freelancers and don’t feel they have the negotiating power to strike ‘AI use’ clauses out of their contracts even when the option is there. The turnaround on contracts is often so quick that, as a matter of industry standard, they aren’t even read; meanwhile, large rights holders often breach contracts because they believe a creative’s ability to challenge them is essentially negligible.
AI, generative or otherwise, will continue to change the experience of workers in all sectors over the coming decade. Each issue it raises will have small, complicated solutions, but an overarching approach to protecting workers must zoom out from these smaller questions. Instead, to ensure the survival of key industries, and the fair treatment of the workers within them, we must strengthen existing rights (to data ownership, fair treatment, and transparency), and their existing enforcement mechanisms (trade unions and membership bodies, labour market enforcement agencies) in a way that is sector agnostic, collaborative, and above all, urgent.
Children’s Online Safety in the Age of AI
With the announcement of the Crime and Policing Bill, the UK is set to become the first country to criminalise not just the possession of child sexual abuse material (CSAM) but AI models that are optimised to generate it. Furthermore, the bill is also the first in the world to make it illegal to possess ‘paedophile manuals’ that teach people how to use AI to sexually abuse children. This marks a significant step forward in ensuring children’s safety in digital environments, clamping down on the new technology that every venture capitalist is all too happy to jump on while ignoring its harms.
The Internet Watch Foundation (IWF), which has campaigned for years to tighten laws targeting child sexual abuse (CSA) activities online, noted that CSA reports quadrupled in 2024. Nor is this confined to the UK: evidence is mounting in many other countries, with the United States, Australia, and South Korea’s deepfake porn crisis extending into schools among the most harrowing examples.
The road to protecting children’s safety online remains long, but the UN Children’s Fund (UNICEF) is tracking a growing number of legislative instruments in multiple jurisdictions dedicated to addressing children’s best interests in the digital environment. The challenge moving forward is twofold: robust enforcement of policies and regulations, and cross-border collaboration to ensure no loopholes are created by differences in each jurisdiction’s approach.
Back from the field
This week, TPT contributor Ollie Bream McIntosh presented his ongoing research on ‘using AI for systems change’ at the Systemic Investing Summit, held in London from 11th-12th March.
He and his colleagues at Mútua Systems (a Brazilian AI start-up) and the TransCap Initiative (a Swiss think tank) have been exploring how the actors and networks that orchestrate the deployment of financial capital to accelerate ‘systems change’ for sustainability can use AI to scale and democratise the often long and complicated work of developing ‘systems intelligence’ about sustainability issues (i.e., mapping the forces, relationships, and feedback loops that shape a given system’s outcomes, and the specific actors within it, to locate leverage points and orientate intervention strategies). They presented a range of frames through which AI can be understood as a disruption in systems change contexts, as well as exciting case studies where AI is being used for these purposes. To follow their work, check out their recent blog post here.
On the horizon
Looking for the chance to find your tribe, your next adventure, or that dream job? We recommend the following events and opportunities for you to keep track of.
Events
The Dorset Salon (weekly writing club for technologists) | every Thursday
Behind the Startups: Three Generations of Disruption and Capital | 14 March
Democratic AI Series #2 - Tools in Action: Enhancing Group Deliberation | 15 March
AI UK 2025 by Alan Turing Institute | 17–18 March
Campaign Lab Hack Night | 24 March
The Future of AI (with Dr Gary Marcus) | 27 March
Zero to Coder: AI-Powered Coding Workshop | 29 March
Opportunities
2025/26 Newspeak House Political Technology fellowship programme | Applications open
Call for proposals: Making online spaces safer, more trustworthy and inclusive (Civitates) | Deadline: 14/03/2025
Learning Manager (Raspberry Pi Foundation) | Deadline: 17/03/2025
Wikimedia Research Fund (Wikimedia Foundation) | Submission form opens 20/03/2025
Digital Communications Officer (Good Law Project) | Deadline: 23/03/2025
2025/26 Civic Media Fellowship (Annenberg Innovation) | Deadline: 28/03/2025
Research Coordinator (Movement Research Unit) | Deadline: 30/03/2025
Program Manager (open to U.S. Citizen and Residents) (Processing Foundation) | Deadline: 30/03/2025
Senior Researcher - Political Economy (Max Planck) | Deadline: 03/04/2025
Bookmarked
From queued-up podcasts to saved articles and half-finished video series, our editorial board shares with you the bookmarked items we can’t wait to get to.
Quick Reads
[Article] Google Ad-Tech Users Can Target National Security ‘Decision Makers’ and People With Chronic Diseases
[Article] Rage Against The Algorithm
[Article] Re-envisioning AI safety through global majority perspectives
[Article] Pentagon abruptly ends all funding for social science research
[Article] Netflix is gobbling up World Literature. What could go wrong?
[Article] Stanford students used to chase jobs at Meta and Google. Now they want to work on war
Deep Dives
[Book] The Backstage of Democracy - India's Election Campaigns and the People Who Manage Them by Amogh Sharma
[Book] World Eaters: How Venture Capital is Cannibalizing the Economy by Catherine Bracy
[Book] Behind the Startup: How Venture Capital Shapes Work, Innovation, and Inequality by Benjamin Shestakofsky
Treasure Troves
[Website] Digital Watch Observatory: A global tech news tracking platform with expert analysis on key digital policies.
Any other recommendations of unmissable content? Share it with us via this form!
Ways to engage with us
Want to contribute to future newsletters? Or have some feedback for us? We’re always happy to hear from readers! Reach out to us at: thepoliticaltechnologist[at]proton[dot]me.
Many of our editors and guest contributors meet up at the home of the London College of Political Technology, or Newspeak House (not affiliated, we just like to advertise it). Hang out with us at:
Ration Club Dinners: Every Wednesday evening from 7 pm in Bethnal Green. Register here.
Stay Overnight: Get the most out of the political tech community by staying right at the centre of the scene.
Coffee Roulette: It’s a non-romantic and political-tech-focused Tinder thing. Sign up to be matched with someone with complementary knowledge, skills, and experience to share!
The Dorset Salon: A weekly (every Thursday) writing club for technologists that takes place in the Classroom at Newspeak House. Subscribe to the calendar!
Events: Newspeak House hosts a diverse range of events every week—check out its calendar here; you never know who you might bump into in the space (see below!).
TPT contributors David and Andrew, with American whistleblower
Chelsea Manning at Newspeak House last night (13th March)
What’s cooking?
From the kitchen of the legendary Wednesday Ration Club comes a curious smell… What’s on the menu for the next few dinners, and who are the master chefs?
What is Ration Club: Every Wednesday (almost), for the past 10 years, Newspeak House has been hosting Ration Club dinners where the community of political technologists comes together to eat, meet, and greet. It is open to anyone!
TPT editors and writers show up at Ration Club dinners almost every week. Come join us and share your stories!
Chef report: This week’s chief editor Yung-Hsuan didn’t send out the newsletter on time because he was busy frying Korean green onion pancakes (pajeon) with Jyo.
Next up: Policy researcher Claddagh is cooking up falafel on 19th March (unless they fall apart again - in which case it was always meant to be chickpea crumble); digital creative Tristan is infusing his dishes with nostalgia (which era is a secret) on the 26th March.
Join Ration Club dinners by signing up here.
Bring your cheffing A-game by signing up here.
Thank you for reading this issue.
You’ve reached the end, but the possibilities don’t stop here.