Q&A: How do we ensure responsible AI technology?

Credit: Pixabay/CC0 Public Domain

The ChatGPT chatbot has been met with reactions ranging from great enthusiasm to deep concern. At DTU, Professor Brit Ross Winthereik encourages us to always approach new technology openly and analytically to assess how it may be used responsibly.

Why do you think people react so strongly to ChatGPT?

I think one reason is that ChatGPT interacts with us in a different way than the search engines we know: it personalizes its responses in impeccable sentences that can be mistaken for reasoning.

Some see it as controversial because it uses data that may be freely available on the Internet but is not intended for corporate profit-making. The company behind ChatGPT has broken a social contract by exploiting something in the commons for its own gain.

At the same time, data is collected in a different way than in the search engines we already know because, as something new, you give something of yourself in the form of your attachment to the ideas or thoughts you share, precisely because of the way ChatGPT interacts with you.

We also don't know whether it's designed to affect the user in specific ways. For example, it may tend to be racist and misogynistic, just like many other machine learning-based software systems, because such systems magnify the predominant elements in the data on which they are trained.
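As a loose illustration of that magnification effect, here is a minimal sketch (hypothetical, not from the interview) of a toy "model" that always predicts the most frequent label in its training data; a 90/10 skew in the data becomes a 100/0 skew in the output:

```python
# Minimal sketch, assuming a deliberately skewed toy dataset (hypothetical):
# a "model" that always predicts the most frequent label it was trained on.
from collections import Counter

# 90% of the toy training examples pair "engineer" with "he".
training_pairs = [("engineer", "he")] * 90 + [("engineer", "she")] * 10

counts = Counter(label for _, label in training_pairs)

def predict(word: str) -> str:
    # A toy argmax predictor: it returns the single most common label
    # and ignores the minority label entirely.
    return counts.most_common(1)[0][0]

# The 90/10 skew in the data becomes a 100/0 skew in the predictions.
print(predict("engineer"))  # always prints "he"
```

Real systems are vastly more complex, but the underlying pattern is the same: a majority tendency in the training data can become the only tendency in the output.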

Instead of focusing blindly on individual products, you encourage us to demand that technology in general be responsible. But how do you define 'responsible technology'?

Whether a technology is responsible depends entirely on the interaction between what it's created for and how it's used. Does it meet the goals set, or does it work completely differently in practice?

A manufacturer may aim to create responsible technology, but you cannot unilaterally label technologies as either 'responsible' or 'irresponsible'. This makes it very important to closely monitor their specific effects, so that we discover, for example, whether technologies have inappropriate consequences despite good intentions.

How do you assess whether a technology is responsible?

The first step is to describe the technology in context. How is it part of the bigger context of infrastructures and sets of cultural values? An irresponsible approach would be to say, "Oh, now there's a technology that's going to revolutionize the world, so we'd better react," and then rush to use it or ban it without examining it further.

Step number two is to analyze its effects: What happens in practice? Is the technology delivering on its promises? If not, what else does it do? And what requirements must be met for it to be a good technology?

The third step is to experiment with the technology. What is the limit of its capabilities? Does it exclude someone or something?

It's also important to assess whether the values that the technology represents align with the values of the organization, school, sector, or country in question. If not, you can say no and choose not to promote it or impose strict restrictions on its use.

In terms of public digital infrastructure—which is my field of research—it's about ensuring that the technology supports the society we want.

Does responsible technology equal necessary technology?

We can easily get all kinds of technologies to do things for us, but where do we want to go as humans? What do we want to train ourselves to do? These are the kinds of questions we also need to ask, in the context we're part of, to determine whether it's necessary and beneficial to embrace a particular technology.

As someone wrote in the Danish high school journal 'Gymnasieskolen', you don't bring a forklift with you to the fitness center to lift weights. You lift the weights yourself because you want to develop your muscles. The writer didn't believe that chatbots facilitate learning and therefore argued it would be foolish to use them in schools.

The debate has since become more balanced, as several places have decided to allow the use of ChatGPT in classes and in connection with the submission of assignments. In my opinion, the responsible approach to chatbots is to set clear guidelines for their use from the top and then systematically collect experience through dialogue with teachers, lecturers, and students.

It would also be a good idea to spread a basic understanding of technology by teaching that all technology has built-in preferences, values, standards, policies, and history.

Is regulation necessary to ensure responsible technology use?

I cannot think of any technologies at the infrastructure or societal level that are not subject to regulation. When we talk about digital technologies, the right to privacy is something that must be, and is, regulated.

Big Tech companies have proven notoriously difficult to regulate because they provide services free of charge in return for data; that's their business model. The EU is trying to rein this in with the GDPR and other regulation. Unfortunately, legislation always seems to lag behind reality, but that is probably its nature.

I advocate more research into the interaction between humans and automated systems to enable us to make better decisions at the societal level. It's important that our trust in the authorities is not eroded because we offhandedly decide that something is smart. We need to examine the practical effects thoroughly.

Citation: Q&A: How do we ensure responsible AI technology? (2023, June 1) retrieved 26 April 2024 from https://techxplore.com/news/2023-06-qa-responsible-ai-technology.html