
AI in your own hands - smarter together: non-profit applications

20-01-2024

Companies build AI applications for profit. To truly benefit from AI and solve societal problems, we above all need to be smarter together.

Just after Christmas, The New York Times decided it did not want to give its articles away as gifts to tech companies. The newspaper is suing OpenAI, the developer of ChatGPT, for unlawful use of its articles. OpenAI used material from the newspaper - and from pretty much the rest of the internet - to train its language model. Without a large amount of written language to learn from, ChatGPT would not have existed. The New York Times considers the use of its pieces to train a language model to be copyright infringement. In some cases, ChatGPT even reproduces New York Times texts verbatim without citing the source.

Copyright is not ChatGPT's only ethical problem. Preventing ChatGPT from producing racist or otherwise hateful responses still requires a great deal of manual work - work that, Time revealed, is done by low-paid Kenyan workers. In addition, bias remains a complicated problem for all AI language systems.

Public initiatives

In the meantime, all kinds of AI applications, with and without language, are being developed that can help society move forward. Earlier in this series, we saw that AI could start supporting doctors in making diagnoses, that robots could help alleviate loneliness among the elderly, and that language models could help journalists tailor their pieces more closely to their target audience.

How do you deliver on this kind of promise of AI in an ethical way? Part of the solution is not to leave the development of AI systems to big tech companies such as OpenAI, but to take matters into our own hands, argues Selmar Smit, a researcher at TNO. He is the initiator of the GPT-NL project, in which TNO, together with the Netherlands Forensic Institute and ICT cooperative SURF, is trying to develop a Dutch language model in an ethical way. Smit hopes the model can be used, for example, to make government more accessible to citizens.

Maaike Harbers, lecturer in Artificial Intelligence & Society at Hogeschool Rotterdam, is in favour. 'We see that people at big tech companies are also working on responsible AI, but those companies are not very transparent about what they are doing. We don't know what choices are being made, or in whose interests. What we do know is that, at companies at least, the interest in making money weighs heavily. For responsible AI, I am excited about more public initiatives; GPT-NL is a great example.'

Netflix

In her own research, Harbers looks at how organisations can apply AI ethically. 'My research often focuses on a party that has an AI application, or plans to deploy one. I then look at how to approach that ethically. That starts with mapping out what the interests and ethical choices actually are.'

'One project I am working on is the DRAMA project (Designing Responsible AI for Media Applications, ed.),' Harbers continues. 'In it, we try to support media organisations in designing, developing and deploying AI applications responsibly. The deployment of AI within the media can be enormously broad, from automatically subtitling programmes to building a system that recommends something to watch or read.'

'When developing responsible AI, you constantly have to make ethical trade-offs. Take a recommendation system. Commercial platforms such as Netflix mainly want their recommendations to keep people watching as long as possible, but public media organisations are also legally required to inform and connect the Dutch public. That means making certain ethical choices. As a public organisation, you want to avoid simply serving people more of the same, but you also want to recommend something people will like.'

Requiring consent

The main ethical choice in developing GPT-NL is also the project's main challenge: 'We want all the material we use to develop GPT-NL to come from sources that have given permission for this. To have enough data for a well-functioning model, we are in talks with all kinds of organisations that produce a lot of text themselves,' Smit explains.

In the end, GPT-NL should form a basis on which companies and governments can build all kinds of applications, for example an assistant that explains what a letter from the Tax Office says. The drawbacks of the better-known AI models will not be entirely absent from GPT-NL either. Because of the way language models are currently built, for example, it will remain difficult even with GPT-NL to explain exactly how the model arrived at its answer.

Bias, too, cannot be ruled out, according to Smit. 'It is difficult to take this into account when creating a language model. We do reduce the chance of bias by training on high-quality texts. That is why we do not use forums that we know contain hateful and sexist posts. But we may not be able to keep bias out of the language model entirely. You may not be able to solve that in the language model itself; you then have to address it in the application you build around it.'

Teams

Just taking matters into your own hands does not solve every problem surrounding AI; choices still have to be made. To make AI truly more responsible, it is important to realise that many of the choices made in building and applying AI are not technical but ethical, says Somaya Ben Allouch. She is a lecturer in Digital Life at the Amsterdam University of Applied Sciences and professor by special appointment of Human-System Interaction for Health & Wellbeing at the University of Amsterdam.

'We must tell our current and future developers that they are not just building an algorithm: the decisions they make have an impact on people of flesh and blood. And not just developers - anyone who plays a role in developing an AI system, or in the way it is applied, must be aware that they bear responsibility in designing and then using AI.'

According to Ben Allouch, the teams that create AI applications should consist of more than developers alone. Designers, professionals from the field where the AI application will be used, and the people the AI solution is meant for can be brought into the design process much earlier. 'Then we can also realise AI's promise to do the right things.'

This article also appeared on the website of the University of Amsterdam.