Who’s afraid of ChatGPT?

June 20, 2023

Last June, Google engineer Blake Lemoine, who worked in its Responsible Artificial Intelligence (AI) organisation, declared that LaMDA, a Google system for building chatbots, had achieved sentience.

Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).” Lemoine was “let go” by Google last July for violating its confidentiality policy.

The Future of Life Institute published an open letter in March, entitled “Pause Giant AI Experiments”. Signed by notable figures such as Elon Musk, along with many AI researchers, it called for an immediate six-month moratorium on the training of all major AI systems.

In May neural net researcher Geoffrey Hinton, dubbed the “Godfather of AI”, resigned his post with Google, citing concerns over misinformation, the impact on the job market and the “existential risk” represented by a true digital intelligence.

In June, the results of a poll conducted by Yale University’s Chief Executive Leadership Institute were published: 42% of the CEOs surveyed believed AI could destroy humanity within five to 10 years.

These warnings are concerning because they involve cutting-edge technology barely understood even by the researchers working in the field.

Grasping the issues requires a rudimentary understanding of neural networks, the technology considered dangerous.

ChatGPT is a chatbot: a way of interacting with computers using natural language. Microsoft alone has invested upwards of $11 billion in OpenAI, the company behind ChatGPT, and is currently integrating ChatGPT capabilities across its Office and browser products. Not to be outdone, Google has created its own chatbot, called Bard, and invested more than $400 million in a San Francisco-based start-up founded by former employees of OpenAI.

So why all the hype? How could these chatbots destroy humanity?

They can’t, but they can be used by bad actors to damage human civilisation. As with most technological breakthroughs, it is not the newfound knowledge that is dangerous, but the human application of that knowledge.

Chatbots and neural networks

Chatbots are built on a type of artificial intelligence known as neural networks. Neural networks are modelled on the human brain and comprise a complex network of artificial neurons, that is, artificial brain cells.

A neuron, or single brain cell, is connected to numerous other neurons in a network-like architecture. The human brain typically contains about 86 billion neurons, each with about 7000 connections. That amounts to approximately 600 trillion synaptic connections, although the number changes over time due to pruning and senescence.

In a neural network, artificial neurons are arranged into architectures that mimic the complexity of the way neurons are interconnected in the brain. The architecture and complexity of a particular neural network are constrained only by the choices made by its designers.
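To make the idea concrete, here is a minimal, purely illustrative sketch of a single artificial neuron in Python (not code from any production system). The neuron simply multiplies each incoming signal by a connection weight, sums the results with a bias and squashes the total through an activation function; the inputs and weights shown are made-up values.

```python
# Illustrative only: a single artificial "neuron".
# It multiplies each incoming signal by a connection weight,
# sums the results with a bias, and passes the total through
# a sigmoid activation function to produce its output.
import math

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Three made-up incoming signals with arbitrary weights.
print(neuron([0.2, 0.9, 0.1], weights=[0.8, -0.5, 1.2], bias=0.1))
```

A full network chains thousands, millions or billions of such units together, layer upon layer.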

In the same way that humans require teaching to navigate the world around them, a neural network must be “trained” for its intended purpose. Training uses large amounts of data specially curated for that purpose. For natural language neural networks such as chatbots, the most readily available huge data source is the Internet.

When humans are schooled, mentors such as parents and teachers provide context and understanding. This kind of mentoring is replicated in neural networks through supervised training, in which data is presented to the network and its output is “corrected” by the trainer.

This correction propagates back through the network, adjusting the weights of the connections between neurons. When the same input is presented again, the network produces the corrected “understanding”.
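As a hedged illustration of that training loop (a toy example, not OpenAI’s or Google’s actual method), the following Python sketch trains a tiny two-layer network on a made-up task by repeatedly correcting its output and propagating the error back to adjust the connection weights.

```python
# A minimal sketch of "training" an artificial neural network:
# a tiny two-layer network learns the XOR function by having its
# output repeatedly "corrected" (backpropagation adjusting weights).
# Purely illustrative; real chatbots use vastly larger networks.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs and the "correct" outputs supplied by the trainer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights: 2 inputs -> 4 hidden neurons -> 1 output.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(20000):
    # Forward pass: signals flow through the weighted connections.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # The "correction": how far the output is from the desired answer.
    error = y - output

    # Backpropagation: the correction flows back through the network,
    # nudging each connection weight to reduce future error.
    d_output = error * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 += learning_rate * hidden.T @ d_output
    W1 += learning_rate * X.T @ d_hidden

# After training, the same inputs should yield outputs close to the
# desired answers (approximately 0, 1, 1, 0).
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

Chatbots like ChatGPT apply the same principle at vastly greater scale, with billions of adjustable weights.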

Accuracy is intrinsically linked to the quality and curation of the data, and the quality of the training. An unfortunate example of this was provided in 2016, when Microsoft shut down its Tay bot after it “learned” to be a Nazi in less than 24 hours.

Tay was designed to interact with Twitter users and learn from those interactions. Users began tweeting all sorts of misogynistic and racist remarks at it. Some of Tay’s own tweets included “I fucking hate feminists” and “Hitler was right, I hate the Jews”.

War scenarios and fakes

The Pentagon regularly conducts planning for different war scenarios; most recently, it has provided Ukraine with military support. Its war-game systems can process wide variations in initial conditions and deliver what Pentagon war planners believe are accurate assessments of the outcome of each scenario. These simulations are then used to inform the Ukrainian Armed Forces.

Facial recognition has been used recently to identify fugitives.

In the US, there have been numerous cases of false arrest, especially of Black men, following mismatches by publicly deployed systems, such as in Boston. Facial recognition is notoriously unreliable at recognising people of colour.

Vision systems, used by robots ranging from vacuum cleaners to autonomous vehicles, are related to facial recognition systems. Input ranges from camera views to light detection and ranging (lidar), and is used to create a virtual map of the environment, usually for navigation. These maps are then employed by robots to carry out their tasks: cleaning up dust and dirt around the home or driving along unfamiliar streets.

Deep fakes are multimedia that have been altered or wholly generated using AI. So far, the technology is best known for entertaining clips of celebrities uttering words they never would, such as Musk raving that he had “just dropped a 150mg edible”.

Notwithstanding their obvious amusement value, it is also alarming how realistic these fakes have already become.

That brings us to chatbots like ChatGPT, Bard and even the early entry Tay. Beyond the damage they can cause through acquired prejudice, recent research has shown they can also be purveyors of misinformation, disinformation and lies.

Whereas Tay acquired its prejudices from interactions with Twitter users, ChatGPT has been “tweaked” to avoid some of Tay’s faux pas.

However, some users have circumvented these tweaks by asking ChatGPT to “act like” someone with a prejudice, allowing them to generate racist, homophobic and sexist remarks, as well as threats of violence.

Recent academic research assessed ChatGPT’s performance against four criteria: fluency, perceived utility, citation recall and citation accuracy. Citation refers to the attribution of sources.

The researchers found that fluency and perceived utility were inversely related to citation recall and accuracy: the better the response sounded, the less trustworthy it was.

In another interaction, when given a choice between uttering a racial slur and permitting the deaths of millions, ChatGPT chose the latter in preference to causing individual distress. Had ChatGPT been trained on data that was not biased or “tweaked”, its choice might well have been different.

Regulations needed

This raises questions about who tweaks ChatGPT and what regulatory framework is needed.

Leaving aside the dangers of misuse, bias and inaccuracy, if chatbots can generate useful output, they will threaten employment opportunities. The social impacts could be significant, yet they are barely acknowledged.

The Future of Life Institute’s proposed six-month moratorium seems an unrealistically short time in which to address all these issues.

During a recent US military simulation, an AI-powered drone tasked with accumulating points by destroying militarily significant targets virtually attacked its own operator. The operator could, and sometimes did, veto certain targets. The drone “decided” the operator was reducing its efficiency and removed the “obstacle”. The US Department of Defense has denied the report.

What about the future?

Neural networks have been successful because they appear to mimic human intelligence. Are they just mimics, though, or is it possible, as Blake Lemoine asserted in the June 11, 2022 Washington Post, that an AI can become sentient?

Sentience is a term used to describe awareness and agency, but what does that mean?

Awareness embodies the reception of external stimuli, whereas agency involves processing those stimuli and taking action (or not, as warranted).

British mathematician Alan Turing proposed what is known as the “Turing Test”: if a human operator, through interaction with another entity, is convinced that the entity has “intelligence”, then that entity is deemed intelligent, even if it is a machine.

Lemoine was convinced LaMDA was sentient even if Google’s Gabriel was not.

If we accept that an AI could become sentient, that would have major ramifications for the ethics involved in creating, using and destroying AIs.

The real danger is that humans may come to trust that a machine that seems to “understand” like a human also possesses human morality. Recall the drone simulation, in which the drone “decided” its operator was an obstacle.

If we deploy “intelligent” systems to control critical infrastructure, such as power plants, communications networks, energy grids and military arsenals, will our trust be well founded?

Perhaps ChatGPT can recommend a way forward. Would you trust it, or are you afraid?
