06 July 2023
To help you decide whether to use a specific AI tool and to ensure a responsible and effective application of AI, we have put together a series of questions to ask yourself, the software providers, or the developers you are working with.
1. What problem are you aiming to solve?
The hype surrounding AI can end up blinding us to the fact that it is above all a tool. It has the potential to simplify our work processes, or even to improve them – but only if we’re clear about what it is we hope to achieve. It is vital to start with a specific problem or goal in mind, and only then to seek out an AI tool that meets your requirements. Investing in AI tools and only then trying to figure out what to use them for will only cause headaches.
2. What value does using an AI solution truly add?
In many cases, non-AI tools already exist that can do the job just as well or even better. Opting for an overly complicated solution is likely to cause more problems than it solves.
3. Is the AI tool already available, or is it still under development?
There are certainly perks to being ahead of the curve, getting your hands on cutting-edge technology before anyone else. But it can also come with a fair amount of frustration and expense. If you opt for tech that is still in development, be aware that it could be a long road ahead, and ensure that all relevant stakeholders (including your residents) are aware of potential delays.
4. What data was the model trained on?
The quality and relevance of the data a tool was trained on will determine how useful it ends up being, and may also introduce biases into the tool's outputs.
There’s also a legal reason to ask about the source of the data. OpenAI, the company behind ChatGPT, is already facing legal challenges over whose data was used to train its AI tools. When using a new tool in the service of democracy, it’s crucial to rule out any potential violations of individuals’ rights during the data collection and training processes.
5. Will you have the chance to engage in a dialogue with the developers?
This is especially important if you don’t have your own team of developers to help you make sense of the technical documentation provided. Reliable channels of communication with the developers improve transparency, and will help you develop a deeper understanding of the tool’s functionalities.
6. Do the developers understand your needs?
Participation tools aren’t the same as other software tools. They require a nuanced understanding of the political contexts they will be used in, and an approach tailored to the specific challenges participation poses. Ensuring that the developers understand the work you do and the role their tool will play will result in a vastly enhanced end product.
7. How credible is the provider of the AI tool?
If a provider makes exaggerated claims about how AI is going to single-handedly save democracy, you should see that as a big red flag. Seek out providers who are open about the specific capabilities and limitations of their AI solutions.
8. Are your residents comfortable with AI solutions?
This is a big one. Ultimately, none of the other questions will matter if your residents don’t trust AI tools. The mainstream press is currently full of far-fetched visions of dystopian AI futures. It doesn’t matter how unfounded these kinds of stories are. If your residents don’t trust AI, using it could damage your relationship with them. This is likely to improve as AI becomes commonplace in more and more areas of our lives – but if in doubt, it may be worth waiting.
To all the AI experts out there: Which questions would you add? We welcome your feedback. Get in touch here.