Newsletter Issue #1 - 16th Jan 2025
Democratic AI, UK AI Plan, Meta's new pledge of allegiance, and more.
Editor’s welcome
☕️ Good morning everyone,
You are reading the very first issue of The Political Technologist newsletter! We’re excited to bring you the latest trends and news in political technology, topped with a modicum of banter and some medium-spicy opinions. In this issue, we cover our doubts and hopes about Democratic AI tools, the UK’s AI Opportunities Plan, Zuckerberg coming out (as MAGA), gamers’ pleas to prolong the shelf-life of online games, and many exciting events and opportunities for the civic technology community.
It’s been a long time coming for us to finally put our thoughts on paper and make it happen, and we want to extend a big thank you to every contributor to our kick-off issue: Alex, Andreas (10 Downing Street), Casimir, Claddagh, Daniel (Campaign Lab), David, Hannah (Campaign Lab), Jyo, Mel, Ollie, and Tim (Campaign Lab).
Sincerely,
Yung-Hsuan
On behalf of the editorial board
P.S. We welcome any feedback, suggestions, or additional expressions of love in the form of messages and comments on Substack. Or better yet, send us an email at thepoliticaltechnologist@proton.me.
In the spotlight
An AI facilitator is coming to Newspeak House!
The Habermas Machine, created by Google DeepMind researchers, is a large language model (LLM) trained to foster group consensus on contentious topics. With the opinion landscape of democratic societies more polarised than ever, many researchers (a shout-out to Joshua Becker, our Newspeak House faculty member) have looked into whether and how AI might help us collectively tackle our complex problems.
The first event of the Democratic AI series is taking place this Saturday (18 January), and we’re expecting just over 100 people to take part in a live democratic deliberation exercise facilitated by the Habermas Machine itself.
It strikes us as interesting that, in just a few years, our opinion on the ways algorithmic and AI systems organise information for humans seems to have turned on its head: John Danaher used to warn about the “threat of algocracy”, where algorithmic systems constrain human actions; Harry C. Boyte remarked on how the credo of efficiency ushered in by AI systems undermined the values of genuine social interaction and a citizen politics organised around people and relationship-building; Mark Coeckelbergh cautioned against the erosion of democracy as algorithms propagate disinformation and trap people in epistemic bubbles. Granted, these critiques address different kinds of algorithmic systems: some would call them narrow AI, some don’t consider them AI at all, and still others see them as more rudimentary than state-of-the-art models. One major reason people are intrigued by LLMs’ potential but not by recommendation algorithms is that… LLMs can ‘speak’ human (natural) languages.
The question to ask, then, is: Is there genuinely a way to use algorithmic or AI systems to forge healthy and meaningful human relations? Is there a design that is simply pro-social, pro-democracy? And just because it now speaks a comprehensible language, does it really ‘possess’ the knowledge and expertise to facilitate human interactions the way a human moderator does?
The only way to answer is by experimenting with AI. And one great way to experiment with AI is by coming to our Democratic AI series. Stay tuned for our continued reporting on this topic!
On the horizon
Looking for the chance to find your tribe, your next adventure, or that dream job? We recommend the following events and opportunities for you to keep track of.
Events
16 January, 17:30 - 19:15 GMT | Campaigning in the Disinformation Age ('100 Campaigns' Podcast Recording)
18 January, 13:30 - 16:30 GMT | Democratic AI Series #1 - Exploring Deliberative Democracy with the Habermas Machine
23 January, 18:30 - 21:00 GMT | Building AI for Good - Showcase & Meetup #2
27 January, 19:00 - 22:00 GMT | Campaign Lab Winter Hack Night
29 January, 18:30 - 20:30 GMT | Code Green London January Meetup
29 January, 18:30 - 21:30 GMT | Science Futures: Mahmoud Ghanem on Cyberpunk Science
30 January, 18:30 - 22:00 GMT | Marked As Urgent: Tech Policy in 2025
31 January, 18:30 - 22:30 GMT | Art Made AI: Demos, Discussion, DJs
Career opportunities
Research Officer, Campaign Lab (deadline: 17 January)
Head of Operations, UK Day One (deadline: 19 January)
Governable Spacemakers Fellowship, Metagov (deadline: 20 January)
Digital Forensic Fellowship, Amnesty International (deadline: 23 January)
Summer research fellowship, Principles of Intelligent Behavior in Biological and Social Systems (deadline: 26 January)
Tarbell Fellowship, Tarbell Center for AI Journalism (deadline: 28 February)
Other opportunities
Call for papers, Hack Glasgow (deadline: 31 January)
Call for Focused Research Organizations in the UK via ARIA (deadline: 7 February)
Cooperative AI Summer School (deadline: 7 March)
On everyone’s lips
Trendy topics, juicy gossip, hot takes: in this section, our editorial board brings you bite-sized chunks (no full meals!) of our (slightly random) choice of three stories from the world of political technology right now. This week: UK AI strategy, Meta cosying up to Trump, and something for the gamers.
A miniature review of the UK AI Opportunities Plan
The UK AI Opportunities Action Plan has been the source of much praise and contention over the past few days; policy professional Claddagh brings you three quick takeaways:
The plan is overall good at answering the question it set out to answer: How do we go all in on growth? Unfortunately, any cohesive approach to AI at a government level needs to account for significantly more, such as safety and governance, regional infrastructural inequalities and investment, workers and workers' rights, and data privacy and ownership.
Crucially, adoption is absent: making the UK an ‘AI Maker, not an AI Taker’ requires removing barriers to adoption in both the public and private sectors. In the private sector, adoption needs to be de-risked through greater transparency; in the public sector, actors like the NHS that are reluctant to adopt AI are unlikely to shift gear without broader, concerted, funded efforts to encourage and enable safe AI adoption across the board.
It may be indicative of the government's wider approach to AI strategy that it chose to soft-launch the policy to Politico Pro subscribers two days before the public release, making it available to Corporate-Only subscribers a weekend in advance.
Zuckerberg Comes Out as MAGA
Governance researcher Ollie Bream McIntosh on Meta’s decision to replace fact-checkers with community notes:
As the leader of the free world [spits out coffee] mourns the passing of one of its most respected political leaders with the funeral of Jimmy Carter, a new ascendancy looms. President Trump’s second term in office promises the sort of whimsy and chicanery that only fools would predict. But much of the existing speculation about the content of the next Trump administration is becoming more real by the day, from the cultural and economic agenda of Project 2025, to the US stance on Palestine and Ukraine.
This week, questions about how far Big Tech would go to align itself with Trump were answered, too. With Elon Musk already (creepily) close at his side, Trump now apparently counts on the adulation of Mark Zuckerberg and the compliance of the vast Meta empire at his command. Two things stand out for us:
By bending the knee to Trumpism, these and other Silicon Valley giants are bargaining to enlist the US government in their regulation battles with national and international government bodies overseas. This is especially pertinent as the EU and UK step up their rhetoric on heavily penalising malpractice in the tech sector. If an explosive regulatory fight for the future of social media is just a matter of time, Zuck and co. want the global heavyweight of a Trump White House in their corner. Cue: another diplomatic standoff between global superpowers, where nobody really wins but certain people get richer.
In press engagements for Meta’s recent updates to what can and can’t be posted on their platforms, Zuckerberg argued that fact-checkers and their potential biases (towards facts, one presumes?) have been responsible for declining trust in news media and undermining healthy social dynamics online. This seems like an odd move. By removing the (admittedly very expensive, and often very exploitative) labour of screening what users post, and replacing semi-independent fact-checking and content moderation functions with ‘community notes’, he obviously risks exacerbating the very trend he is (suspiciously suddenly) concerned about. This is simply politics. He knows it. We know it. He knows that we know it. Etc. Expect more fuel on the fire of disinformation and hatred in 2025, and a more complicated resistance movement. (Also, does this mean we are now, like, post-post-truth?)
Stop Killing Games UK petition
Amateur game dev Casimir turns the spotlight on an interesting development in the gaming industry. The "Stop Killing Games" civil campaign re-launched its UK petition this week; the previous petition had reached the required number of signatures but was closed due to the dissolution of Parliament. The campaign, which argues from the standpoint of consumer rights but is primarily motivated by concerns about art preservation, seeks to require publishers of online video games to keep their games playable long after their official support ends. The petition is part of a broader international push, including public inquiries in a number of other countries and a European Citizens' Initiative.
Back from the field
Waving Bye Felicia to 2024, we hit the ground running this year with a string of great events! Here, our editorial board shares their reflections on a few memorable events from the last few weeks.
Campaign Lab x Hope Not Hate Hackathon | 5 January
Out with bad energy, in with some hope! Digital system researcher Jyotsna partook in Campaign Lab’s latest full-day Hack Day to tackle the rise of the UK far right.
At the Campaign Lab x Hope Not Hate Hackathon, 50+ participants showed up to counter the far right. Participants explored various themes, including developing tools to monitor by-elections, mapping community hubs like Facebook groups, and analysing candidate statements and campaign materials. Teams tackled technical and non-technical challenges, leveraging skills in Python, GIS, web scraping, NLP, and data visualisation. Notable projects included creating early warning systems for by-elections, mapping ward-level Facebook groups, monitoring political TikTok activity, and identifying hyperlocal issues to counter far-right organising.
If this feels like your cup of tea, join the bi-weekly Campaign Lab Hack Night, the next one happening on 27 January from 19:00 to 22:00 at Newspeak House!
AI x Crypto LDN | 9 January
Software engineer Mel attended the AI x Crypto LDN New Year meetup. Apart from enjoying the peri peri chicken and chips on offer, what else was on her plate?
The first thing that caught everyone’s eye was the stunning, immersive 270° screen visual display of abstract artwork as soon as you entered Aures London’s event space. It immediately transported people to another world! The audience then heard from PeriLabs, OpenGradient, and FLock.io, each presenting their vision for a decentralised AI future and their ongoing work.
PeriLabs is building a vertically integrated AI infrastructure for DePIN, a decentralised physical infrastructure network that enables individuals and organisations to collectively develop and operate wireless networks, energy grids, transportation systems, etc. Their whitepaper further discusses the use of Edge AI - running AI systems directly on edge devices or hardware that's physically close to where data is collected - rather than in centralised cloud servers. This is a really interesting and valuable offering, as many are rightfully concerned about the data they provide to LLM tools like Claude or ChatGPT being stored on centralised cloud servers.
OpenGradient is the first decentralised infrastructure platform for AI model hosting, secure execution, agentic reasoning, and application deployment. They allow developers to upload, deploy, and run inference on any AI model from anywhere within seconds!
FLock.io is a decentralised platform focused on aligning AI objectives with public ethics and societal aims. According to FLock.io, the danger of AI being controlled by centralised corporations is that their biases and values are amplified on a global scale. They decide who gains access to the models, and their value alignment often downgrades the performance of models. Under the status quo, the world’s largest corporations hold sway over the trajectory of AI development based on their own objectives, which do not necessarily align with the public interest. FLock.io therefore facilitates collaborative AI development by enabling developers to contribute models, data, or compute resources in a modular way.
Given the current progression of AI, it is crucial to have an awareness of how to make it accessible to everyone and ensure that users feel ‘safe’ when using it. It’s impressive how these Web3 companies are making a start on ensuring that the future progression of AI is not controlled by centralised corporations, and we can certainly hope that those who join them on this mission have the best interests of the public at heart.
Bookmarked
From queued-up podcasts for the gym to saved articles for bus rides and half-finished video series for mid-work procrastination: here, our editorial board shares some of the bookmarked items we can’t wait to get to.
Gym/ice-cream playlist
[Podcast] Computer says maybe: To be Seen and not Watched w/ Tawana Petty
[Podcast] Accidental Gods: Democracy Rising: Making 2025 the year we recover from Peak Polarisation with Audrey Tang
Bus-ride quick reads
[Article] Wildfire tracking application helping evacuations in California
[Article] TikTok Refugees: RedNote and Xiaohongshu Chinese Users
In-between your Pomodoro sessions
[Resource] Public Domain Image Archive
Weekend afternoon deep dives
[Videocast] Brookings Institution: In Conversation with FTC Chair Lina Khan
[Book] The Mechanic and the Luddite by Jathan Sadowski
What have you bookmarked recently that you can’t wait to consume? Share it with us via this form!
Ways to engage with us
This year’s Newspeak House cohort is exploring a range of possible services we could be offering to our peers and colleagues in the civic tech space. We’re always open to new ideas here, but for the time being, here's what we’ve got on offer:
Open coworking days: Our in-house techno DJ Paulina is hosting an open-house non-techno coworking day on 21 January. If you’d like to join some of the current and previous generations of fellows for a blend of quiet, focused work and accidental socialising, get in touch with us. Our code of conduct applies at the space.
Stay overnight: We have a guest room, and we’re always up for hosting people who wish to get the most out of the community. Book your Newspeak stay right now!
Coffee roulette: It’s a non-romantic and political-tech-focused Tinder thing. Sign up to be matched with someone with complementary knowledge, skills, and experience to share!
Stay tuned for more activities to come, including Word Lab, Sensing projects, etc.
What’s cooking?
From the kitchen of the legendary Wednesday Ration Club comes a curious smell… What’s on the menu for the next few dinners, and who are the master chefs?
Every Wednesday (almost), for the past 10 years, Newspeak House has been hosting Ration Club dinners where the community of political technologists come together to eat, meet, and greet. It takes place in the common lounge and terrace and is open to anyone who’d like to learn about the college, its work, and its people.
While most meals are cooked by fellowship candidates throughout the year, community members also often bring in their favourite dishes from time to time. Rumour has it that an MP once wore the Newspeak House apron.
As the editor was frantically writing this newsletter, Claddagh was serving İmam bayıldı, Turkish stuffed aubergines 🧅 🧄 🍆.
On the 22nd of January, we will have UX/UI designer Alex chef-ing for the first time 👀! And our aforementioned DJ and master chef Paulina is returning on the 29th for another of her banger meals. While her menu is still a secret, know that her past recipes included 🇰🇷 Korean Bibimbap and 🇮🇳 🇲🇽 Mexican-Indian fusion Tamales.
If you wish to join us for dinners, sign up here.
If you wish to volunteer as a chef, sign up here.
Thank you for reading this issue.
You’ve reached the end, but the possibilities don’t stop here.
Reach out to us via messages and comments on Substack or send us an email at thepoliticaltechnologist@proton.me. We welcome guest contributions and suggestions.