How can Greens shape the future of AI?

This weekend, Autumn Conference will see a motion debated on a 'precautionary regulatory framework' for AI.

Laptop computer

Image credit: Christin Hume (Unsplash License)

Tim Davies

Here’s a starter on how we can influence a key political issue

At the start of November, the UK government will host a select handful of foreign leaders and technology bosses at Bletchley Park for a ‘Global Summit on AI Safety’. The event is intended to galvanise action on the speculative existential risks from so-called frontier models: fast-developing AI systems that some people fear could be used by bad actors to create biosecurity risks, or that could gain autonomy and 'escape human control'.

As is par for the course for this government, the Summit is characterised by exclusion and closed-door dealings. Though its possible outcomes are not yet clear, it is a high-profile intervention intended to seize the agenda and shape the political narrative on this key issue.

A crucial task for the Greens will be contesting this space, resisting corporate capture of both the conversation and the policies on AI. 

This starts with insisting that the issues we need to confront on AI are much broader than remote risks, and with grounding them in the experiences of communities here and now: concerns about being hired, managed and fired by algorithm, or being profiled for access to loans or public services. It also means articulating positive and responsible ways for the public, private and voluntary sectors to make good decisions on the use of AI, accounting for the impacts of energy- and water-hungry AI systems on individuals, communities and the environment.

We will need to establish some core positions and messages as a democratic party. This weekend at Autumn Conference, the Green Party of England and Wales will debate a motion to establish a policy statement on AI.  

With amendments that foreground the opportunity for participatory and democratic control over decisions on AI development and use, the motion builds on the ‘Emergency Resolution on the Effective Regulation of AI Technology for Democracy, Sustainability and Social Good’ passed by the Global Greens Congress in Korea in June: the need for ‘responsible development, safe use and human control of AI’ through robust and legally enforceable regulatory frameworks, alongside action to minimise the environmental impacts of AI and a focus on justice for workers involved in AI development.

The motion coming to conference aims to do at least five things, based around a 'precautionary regulatory framework'.

(1) Centre the environment

The UK's AI Safety Summit is rumoured to include a meeting hosted by King Charles on how AI might support the environment and sustainability. Yet, claims about the potential impact of AI in addressing climate change should not be treated as an 'ethical offset' to allow the introduction of under-regulated, dangerous or energy-hungry AI systems. Instead, environmental sustainability needs to be the starting point for AI policy.

As the motion explains, "We will require the burgeoning uses of AI within the commercial sphere to put social and environmental priorities ahead of financial returns to shareholders in line with [green] economic policies."

(2) Centre those affected

The amended motion opens by noting ‘in particular the impacts of AI bias on minority communities’, and calls for regulation that will ‘encourage and facilitate for all individuals the freedom to live a worthwhile life, devoting time in whatever proportions they choose to the following ‘seven C’s’: Curiosity, Conservation, Challenge, Creativity, Community, Charity, Care’.

In order to give this approach force, the motion places deliberative and democratic decision-making, and the voice of those most affected by AI, at the centre of governance processes. 

This offers a powerful alternative to current practices of industry self-regulation, or expert-led processes that focus on abstract principles and codes of practice, rather than listening and responding to the voices of the communities that see the intended, and unintended, impacts of AI.

(3) Centre workers and creators

AI tools could be deployed in ways that enhance the quality of work, but too often the concern is that AI will displace jobs, increase unemployment, and erode working conditions and job quality.

Building on long-established Green Workers' Rights and Employment policies, and on the party's commitment to Universal Basic Income (UBI), the motion sets out an approach to ‘ensure workers’ rights and interests are respected when AI leads to significant changes in working conditions’, and to support investment in retraining and UBI to navigate the possible societal impacts of large-scale AI adoption.

Given the role of generative AI in co-opting the work of others, the motion emphasises the inalienable right of individuals to assert control over, and receive recognition for, their creative outputs, while noting the existing Green Party commitment to making publicly funded work available for wide re-use.

(4) Build on existing good governance

Notwithstanding recent furores over the management of patient data at a national level, or perhaps because of them, the health sector has developed a range of good data governance practices. Health is also a sector in which we have seen some of the most exciting and socially valuable advances in AI, for diagnostics, drug discovery and managing treatment. A combination of robust governance, high-quality data infrastructure and public investment may help explain this.

Instead of seeing good governance as antithetical to innovation, the motion calls for frameworks and learning from healthcare to be ‘shared into other spheres including education, local government, the judiciary, transport and utility management and infrastructure’, and highlights the importance of strategic funding of independent AI research.

(5) Keep humans in the loop at all levels

Data and AI should support, not replace, professional decision-making. The motion sets out a position to ‘regulate to ensure that wherever automated decision making significantly affects people's lives, this is done with care and humanity’, and to focus on the potential of AI ‘to augment, not replace, ... professionals’.

Through a focus on democratic decision-making around AI, a Green approach to AI governance can be both proactive and adaptive: recognising that the implementation and impacts of technologies are constantly unfolding. 

Contrary to hype-cycle-driven arguments that this is the moment when all the big decisions on AI governance must be made, an approach rooted in deliberative and democratic governance will continually anticipate, sense and respond to the impacts of AI, working out the right response at the right time from a robust set of values.

Policy statements as a starting point

If Motion C09 is passed, it will be just the starting point for GPEW policy development on artificial intelligence. The motion creates an entry in the Record of Policy Statements, but further and ongoing work is required to develop positions that enter into long-term Green Party policy. As this is a cross-cutting issue, that process will require input from data and digital specialists, and from members across the party, touching on health, housing, education, science, social welfare and more.

This is an area in flux, in which the Greens can and must influence the debate by offering a rounded, progressive and social justice-rooted approach to AI. 

Author: Tim Davies is Director of Research and Practice at Connected by Data, and a member of Stroud District Green Party. He contributed amendments to the motion in collaboration with the motion’s author Graham Tavener.