Examining the potential benefits and dangers of AI


Generative artificial intelligence is rapidly advancing and soon will be ubiquitous in everyday life, making us more productive and helping to solve complex problems while simultaneously creating new legal and ethical issues, a University of Cincinnati professor said.

Jeffrey Shaffer, the Joseph S. Stern Professor of Practice and assistant professor-educator in UC's Carl H. Lindner College of Business, sees AI as a tool that will transform lives, perhaps even more so than the internet did. He's given presentations on AI and will teach a class about it in the fall, embracing the evolving technology in his life and in the classroom.

"Students are going to graduate, work for a company and be expected to know how to use this stuff," said Shaffer, who also teaches business analytics. "It's going to be a tool that is ubiquitous."

Shaffer sees great potential in AI. ChatGPT, one of the best-known generative AI tools, has passed a medical licensing exam and a bar exam, and it is quickly improving in numerous areas.

Overall, Shaffer is optimistic about the potential for AI to improve lives, but he also warned that it can be used in nefarious ways.

"It's not all good," he said. "I mean, there are dangers."

AI's growing prominence

AI has already become ingrained in everyday life. Search engines use AI to scan the internet and return results for queries. Online retailers use AI to suggest products shoppers might like. Streaming services use AI to recommend content users might want to watch.

"AI has been around for a long time," Shaffer said. "Artificial intelligence, that term goes back all the way to the 1950s. People had these ideas of what this was. And then, the thought was, can computers mimic the human brain?"

Generative artificial intelligence, the type of AI behind tools like ChatGPT that can produce various kinds of content, including text, imagery, audio and data, is a relatively new technology. The foundational paper, "Attention Is All You Need," was published in 2017 and introduced transformer models, which learn meaning by tracking relationships in sequential data. OpenAI launched ChatGPT in November 2022.
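To make the idea of "tracking relationships" a bit more concrete, below is a minimal, illustrative sketch of the scaled dot-product attention operation that the "Attention Is All You Need" paper introduced. It is written in Python with NumPy purely for illustration; it is not any production model's implementation, and the toy matrix sizes are made up.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal sketch of the attention step from "Attention Is All You Need".

    Each row of Q (queries) is compared against every row of K (keys);
    the resulting weights decide how much of each row of V (values)
    flows into the output. This is the sense in which a transformer
    "tracks relationships" between positions in a sequence.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # weighted mix of the values

# Toy example: 4 token positions, 8-dimensional embeddings (made-up numbers)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(X, X, X)  # self-attention: tokens attend to each other
print(out.shape)  # (4, 8)
```

In a full transformer this operation is repeated across many attention heads and layers, with learned projections producing the queries, keys and values.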

While other forms of AI can fetch content that already exists, generative AI can produce new content.

In its current form, though, AI still has limits. Shaffer likened generative AI tools to the internet in general: the internet holds vast amounts of information, but users have to evaluate the accuracy of what they find.

"It's collaborative right now with AI," he said. "You can't just let AI off and run and finish the project for you, but that could change. At least for now, you're working collaboratively more with AI, and I think that'll be the trend in the near future."

Potential benefits

People already are seeing AI at work. When people type a text message or email, their phones and computers use predictive text to anticipate what word will come next.
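As a rough illustration only, and not how any particular phone keyboard is actually implemented, next-word suggestion can be approximated by counting which word most often follows another in past text. The tiny corpus below is made up.

```python
from collections import Counter, defaultdict

# Tiny, made-up corpus standing in for a user's past messages
corpus = "see you soon . see you later . talk to you soon".split()

# Count which word follows each word (a simple bigram model)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(word):
    """Suggest the word most often seen after `word`."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("you"))  # 'soon' -- seen twice after "you", vs. once for "later"
```

Real predictive text relies on far more sophisticated language models, but the underlying task, guessing the next word from context, is the same.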

The next phase, Shaffer said, is AI embedded in the daily workflow: built into computer programs to help draft entire messages instead of just a few words at a time.

"It's going to be built into our workflow more and more," he said. "Gmail, Outlook, PowerPoint, Excel. It's going to be in the flow. And then these assistants or agents are going to do work for us. Imagine it goes through your inbox every day before you do, deletes all the spam, makes draft responses for all of your emails and looks at your calendar. This is already possible."

AI can now "join" meetings and provide summaries of meetings and assign action items.

Before committing to watching a long video or reading a paper, AI can provide a short recap, enabling people to determine whether they want to invest the time to view it in its entirety.

When reading a book, people can't remember every detail, but AI could supplement human memory and quickly find desired passages.

"AI is simply better and faster at recall and can generate new content," Shaffer said. "There's no way that you could memorize a large text and recall it as fast as a database system."

With menial and mundane tasks taking up less time, people will be able to accomplish more.

"If we have the same number of programmers, we should be able to program 30% more," Shaffer said. "We should be able to get through the code faster. We should be able to write it faster. Comment faster. Debug it faster."

AI also could help break down language barriers. AI already can translate languages, and Shaffer said it can even translate conversations in real time. In a recent presentation, he demonstrated AI-created videos in which he appears to be speaking Mandarin, Hindi and German.

Potential dangers

For all the good AI could offer, Shaffer cautioned that this rapidly advancing technology will present legal and ethical problems that will need to be solved.

Deepfake videos can impersonate a person and make them appear to say something they never actually said or do something they didn't actually do.

"AI can now imitate your voice," Shaffer said. "There are examples of all sorts of celebrities. That's now an issue. You could take 20, 30 seconds of somebody's voice, drop it into the system. People will be able to imitate your family's voice, a celebrity's voice. You won't know if that voice is real or not."

Scammers already pretend to be a family member in distress to trick people. The scams could be even more realistic with deepfake technologies.

People have used AI to create deepfake pornography of real people, celebrities and non-celebrities alike, which can cause emotional harm to the person being impersonated.

AI also can create images of people who don't exist.

"We're going to have to consider the governance and ethics of these AI systems," Shaffer said, "and the diversity and bias in the training data and the guardrails."

Just in the past week, Google paused image generation of people in its new Gemini system because of diversity issues in the generated output.

AI also could be used to create new ways to harm people.

Researchers used AI to suggest 40,000 new potential chemical weapons in six hours. They shared their findings to warn others of the potential for harm if a bad actor did something similar.

"AI is probably the only solution to this problem," Shaffer said. "These researchers were using AI to find vaccines but, instead, realized they could create viruses. I think we will see this AI fight in many areas. AI will be used by bad actors and AI will be used to help fight against bad actors."

AI also could take on an enhanced role in warfare, potentially both saving and costing lives. The Defense Advanced Research Projects Agency, a U.S. Defense Department agency, reported in 2020 that an AI algorithm beat a human pilot in all five rounds of simulated aerial combat.

"The reason the AI system was a better pilot was it would do things that normal pilots wouldn't do. There was an instinct in a pilot of, "This is too dangerous," and they're trained to do things in a certain way," Shaffer said. "If the fighter pilot makes a mistake, he dies. If the AI makes a mistake, the AI doesn't die. That cost function is very different, a $1 billion airplane instead of a pilot's life. AI can make different decisions in that dogfight."

Drastic improvements

In the little more than a year since ChatGPT was released to the public, it and other generative AI tools have drastically improved, Shaffer said.

OpenAI released its new GPT-4 model. There are now fewer of what are called hallucinations, instances in which AI makes up information and presents it as fact. Generative AI is better able to create images that don't feature people with extra fingers or text that looks like gibberish.

"The tools are advancing quickly. We are able to do things today that we couldn't do a year ago," Shaffer said while demonstrating AI's ability to create an image of an A&W root beer stand with accurate A&W and Coca-Cola logos. "They are much more accurate now."

Shaffer also expects AI models to improve as they're trained for specific purposes. Large corporations already are spending millions of dollars to create AI tools to complete specific tasks such as analyzing contracts.

And with the technology advancing rapidly, AI is becoming accessible to everyone, not just the largest organizations.

"When these models first came out, we thought that only the big companies were going to be able to do this," Shaffer said. "It takes huge [graphics processing units] to train these models. Now people have open-source models on this platform called Hugging Face. And they've proven that you can get pretty accurate with small models."

While there is opposition to using AI, Shaffer expects it to gain more acceptance in the years to come. He thinks more people will view doing tasks that AI could complete as a waste of time.

In his classroom, the ability to do things faster and better with AI will lead to higher expectations for his students' work.

"If you can't ban it from the classroom, what do you do?" Shaffer asked. "My response is, you expect more from the students. Ask them to do more because now they should be capable of doing more.

"Make the project harder, make it bigger, expect them to do more of it or create something better because now they have AI to assist them in doing it."

