Opinion: We're talking about AI a lot right now, and it's not a moment too soon


When OpenAI unchained the "beast" that is ChatGPT back in November 2022, competition between the tech companies involved in AI accelerated dramatically.

Market competition determines the price of goods and services, their quality and the speed of innovation—which has been remarkable in the AI industry. However, some experts believe we are deploying the most powerful technology in the world far too quickly.

This could hamper our ability to detect serious problems before they cause damage, with profound implications for society, particularly because we cannot anticipate the capabilities of a technology that may eventually be able to train itself.

But AI is nothing new. While ChatGPT may have taken many people by surprise, the seeds of the current commotion over this technology were sown years ago.

Is AI new?

The origins of modern AI can be traced back to the 1950s, when Alan Turing proposed his famous test of machine intelligence.

The limited computing resources and data available at the time hindered growth and adoption. But breakthroughs in machine learning, neural networks and data availability fueled a resurgence of AI around the early 2000s. That prompted many industries to embrace AI: the finance and telecommunications sectors, for example, used it for fraud detection and data analytics.

An explosion of data, the development of cloud computing and the availability of huge computing resources all later facilitated the development of AI algorithms. This significantly shaped what could be done with the technology, for example in image and video recognition and targeted advertising.

Why is AI getting so much attention now? AI has long been used in social media, to recommend relevant posts, articles, videos, and ads. The technology ethicist Tristan Harris says social media is broadly humanity's "first contact" with AI.

And humanity has learned that AI-driven algorithms on social media can spread disinformation and misinformation, polarizing opinion and fostering online echo chambers. Campaigns spent money on targeting voters online in both the 2016 US presidential election and the UK Brexit vote.

Both events raised public awareness of AI and of how the technology could be used to manipulate political outcomes. These high-profile incidents set in motion concerns about the capabilities of evolving technologies.

However, in 2017, a new class of AI emerged. This technology is known as a transformer. It's a type of neural network which processes language and then uses that understanding to produce its own text and hold conversations.

TED talk by journalist Carole Cadwalladr on the topic of AI.

This breakthrough facilitated the creation of large language models such as ChatGPT, which can understand and generate text which resembles that written by humans. Transformer-based models such as OpenAI's GPT (Generative Pre-trained Transformer) have demonstrated impressive capabilities in generating coherent and relevant text.

The difference with transformers is that, as they absorb new information, they learn from it. This potentially allows them to gain new capabilities that engineers did not program into them.
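For readers curious about what "processing language" means mechanically, the operation at the heart of a transformer, called attention, can be sketched in a few lines. This is a toy illustration only, not any production model: the vectors below are random stand-ins for learned word embeddings, and the sketch assumes only the numpy library.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each position in a sequence blends
    information from every other position, weighted by similarity."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # token-to-token relevance
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # weighted blend of values

# Three "tokens", each a 4-dimensional vector (random stand-ins
# for the embeddings a real model would learn from data).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))

# Self-attention: queries, keys and values all come from the same input.
out = attention(x, x, x)
print(out.shape)  # (3, 4): same shape, but each row now "attends" to all rows
```

Stacking many such layers, and training their weights on vast amounts of text, is what lets transformer models pick up patterns their engineers never explicitly programmed.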

Bigger issue

The computing power now available and the capabilities of the latest AI models mean that as-yet unresolved concerns around the impact of social media on society, especially on younger generations, will only grow.

Lucy Batley, the boss of Traction Industries, a private-sector company which helps businesses integrate AI into their operations, says that the type of analysis that social media companies can carry out on our personal data—and the detail they can extract—is "going to be automated and accelerated to a point where big tech moguls will potentially know more about us than we consciously do about ourselves."

Meanwhile, quantum computing, which has experienced major breakthroughs in recent years, may far surpass the performance of conventional computers on particular tasks. Batley believes this would "allow the development of much more capable AI systems to probe multiple aspects of our lives."

The situation for "big tech" and the countries leading in AI can be likened to what game theorists call the "prisoner's dilemma." In this scenario, two parties must each decide whether to cooperate to solve a problem or to betray the other. Betrayal often yields the higher individual reward, so each party faces a tough choice between an outcome where only it gains and one with the potential for mutual benefit.

Let's take a scenario where we have two competing AI companies. They need to decide whether they should cooperate by sharing their research on cutting-edge technology or keep their research secret. If both companies collaborate, they could make significant advancements together. However, if Company A shares while Company B doesn't, Company A probably loses its competitive edge.
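The dilemma in this scenario can be made concrete with a payoff matrix. The numbers below are purely illustrative, not drawn from any real market: they just encode the incentives described above, with a higher number meaning a better outcome for that firm.

```python
# Hypothetical payoffs (Company A, Company B) for each pair of choices.
# "share" = publish research openly; "keep" = keep research secret.
PAYOFFS = {
    ("share", "share"): (3, 3),  # mutual cooperation: both advance together
    ("share", "keep"):  (0, 5),  # A shares, B doesn't: A loses its edge
    ("keep",  "share"): (5, 0),  # B shares, A doesn't: B loses its edge
    ("keep",  "keep"):  (1, 1),  # mutual secrecy: slow progress for both
}

def best_response(opponent_move):
    """Return the move that maximizes a firm's own payoff,
    holding its rival's move fixed."""
    return max(("share", "keep"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

print(best_response("share"))  # -> keep
print(best_response("keep"))   # -> keep
```

Whatever the rival chooses, secrecy is each firm's best response, even though mutual sharing (3, 3) would leave both better off than mutual secrecy (1, 1). That tension is the dilemma: individually rational choices produce a collectively worse outcome.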

This is not too dissimilar from the current situation that the US finds itself in. The US is trying to accelerate AI to beat foreign competition. As such, policymakers have been slow to discuss AI regulation, which would help protect society from harms caused by use of the technology.

Uncharted territory

The potential for AI to create societal problems must be addressed. We have a duty to understand these risks, and we need a collective focus to avoid repeating the mistakes previously made with social media. We were too late to regulate social media: by the time that conversation entered the public domain, social platforms had already entangled themselves with the media, elections, businesses and users' lives.

The first major global summit on AI safety is planned for later this year, in the UK. This is an opportunity for policymakers and world leaders to consider the immediate and future risks of AI and how these risks can be mitigated via a globally coordinated approach. This is also a chance to invite a broader range of voices from society to discuss this significant issue, resulting in a more diverse array of perspectives on a complex matter that will affect everyone.

AI has huge potential to increase the quality of life on Earth, but we all have a duty to help encourage the development of responsible AI systems. We must also collectively push for brands to operate with ethical guidelines within regulatory frameworks. The best time to influence a medium is at the very start of its journey.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: Opinion: We're talking about AI a lot right now, and it's not a moment too soon (2023, August 24) retrieved 7 December 2023 from
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
