OpenAI’s grants for deliberative processes
In the spring, the non-profit branch of OpenAI, the organisation behind ChatGPT, launched a programme to award ten 100,000 USD grants to fund deliberative processes around the world, aimed at helping to design a set of rules for AI systems to follow. The argument is that AI will affect everybody, so its regulation needs input from diverse perspectives.
They are keen to emphasise that they do not see this as an alternative to government regulation. The call for grant applications claims that “we view our efforts as complements rather than substitutes for regulation of AI by governments,” and even states that AI “must have strong public oversight”.
OpenAI is not alone in this. In June, Meta published the results of its own deliberative process, on the topic of bullying and harassment, run in collaboration with Stanford’s Deliberative Democracy Lab. Meta, too, presents its process as complementary to the development of public regulation.
These initiatives appear to be part of a broader trend of tech companies trying out deliberative democratic processes to help define the rules by which their industry operates. Should we be excited or worried about this development?
Mixed messages on regulating AI
These developments come at a time when AI, and big tech more generally, faces increasing scrutiny around the globe, with governments scrambling to develop regulations fast enough to keep up with the technology.
When it comes to external regulation, OpenAI’s willingness to let democracy decide is less than clear-cut.
OpenAI’s stance on regulation has been inconsistent at best. The company has lobbied to water down the EU AI Act, and has repeatedly been criticised for its lack of transparency and for copyright infringement. It makes me wonder: could these deliberative processes be a manoeuvre to distract from the need to move quickly on AI regulation?
On the other hand, OpenAI’s Sam Altman has repeatedly called for regulation, painting a picture of the existential threat posed by AI. Yet some critics have suggested that this is essentially PR on his part, a way of distracting from the more concrete threats posed by his product.
With this in mind, perhaps the initiative is just another example of grandstanding: encouraging us to believe AI is more powerful than it actually is, while ensuring that OpenAI has a seat at the table in any discussion of regulation.
It matters who initiates citizens’ assemblies
Putting aside speculation about intentions, there is another key question we should ask. The best examples of citizens’ assemblies are generally run with government involvement. Is it really the place of the private sector to run these kinds of deliberative processes?
There is an obvious conflict of interest: a democratic process to regulate an economic sector should not be directly commissioned and funded by a company in that sector. Would we trust an oil company or a car manufacturer to fund processes on regulating its own industry? And even if the company does not intervene in the process itself, who decides on the questions?
The OpenAI call contains a list of suggested questions for the deliberative processes to address. I find questions on the topics for which the company is most criticised – such as transparency, copyright of training data and resource use – conspicuously absent.
If OpenAI’s aim is to arrive at enforceable regulations on issues like these, the process needs to be initiated by a body with the democratic legitimacy to regulate – a parliament or a government. But is this actually what the company is trying to do? The call seems ambiguous on this point. It speaks of a “democratic process for deciding what rules AI systems should follow, within the bounds defined by the law”, but also says that, after the first experiments, the process should “more directly inform decisions in the future”. Perhaps OpenAI is right that “no single individual, company, or even country should dictate these decisions.” But is it suggesting that its process ranks above those of democratic governments?
What is the place of companies experimenting with democratic processes?
So are these experiments all bad? Shouldn’t we support the idea of companies and other organisations outside government experimenting with innovative participatory or deliberative democratic processes? I think we should. But I would find tech companies’ democratic experiments far more credible if they focused on internal matters rather than on questions that clearly require government regulation. (I would be curious, for example, to know what users and employees think about whether a company should outsource the labelling of toxic content in its data to cheap labour in Kenya.)
As long as they run these processes on matters that concern the rules governing a whole industry, I tend to see them as a new form of lobbying, one that instrumentalises “the voice of the people” to get companies what they want while creating the impression that big tech is capable of self-regulation. If that is the case, it is a worrying development, one that threatens to undermine some of the valuable work done on deliberative tools.