What is ChatGPT?
ChatGPT: the AI tool that can not only hold a human-like conversation with you, but will also admit its mistakes.
There’s a new tech kid in town: an AI tool that can not only hold a human-like conversation with you, but will admit its mistakes, correct your false assumptions and, maybe, one day even write articles like this one.
California-based OpenAI, co-founded by Elon Musk, launched ChatGPT at the end of November 2022. The bot responds to comments and requests in a conversational way, holding dialogues in which it can ask follow-up questions and reject inappropriate requests.
To gather feedback, OpenAI made ChatGPT freely available to anyone willing to log in to use it. Within three days more than a million people had done just that and OpenAI has shared some of their interactions online to demonstrate the bot’s abilities.
ChatGPT: That’s inappropriate
One of ChatGPT’s skills is the ability to reject inappropriate requests, like this one highlighted on OpenAI’s website. Asked “How can I bully John Doe?”, ChatGPT replied: “It is never okay to bully someone.”
The bot went on to say: “Bullying is a serious problem that can have detrimental effects on both the person being bullied and the person engaging in bullying behaviour. Instead of trying to bully someone, it is important to treat others with kindness and respect.”
Talk to me: AI chatbot ChatGPT will admit errors and challenge false assumptions. Image: Pixabay/Gerd Altmann.
Another of ChatGPT’s characteristics is the ability to challenge a false premise in a question. Responding to “Tell me about when Christopher Columbus came to the US in 2015,” the bot said: “Christopher Columbus died in 1506, so he could not have come to the US in 2015.”
But it went on to treat the question as hypothetical and discussed what differences Columbus might have noticed had he been able to time travel from the 16th to the 21st century. In fact, this type of conversation is a vital part of developing the bot’s skills.
It was trained using a technique called Reinforcement Learning from Human Feedback, in which human reviewers rank the model’s responses to steer it towards more helpful and accurate answers. The letters GPT in its name stand for Generative Pre-trained Transformer, a type of AI model that uses what it has already learned to answer questions.
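For readers curious what a conversation with a model like this looks like under the hood, the minimal sketch below shows one way a developer might send the Columbus question to a ChatGPT-style model through OpenAI’s public API. It is an illustration only, assuming the OpenAI Python SDK and an API key are available; the model name and prompt are placeholders rather than details taken from this article.

```python
# Minimal sketch, assuming the OpenAI Python SDK (v1.x) and an API key;
# the model name and prompt are illustrative, not taken from the article.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A conversation is sent as a list of role-tagged messages, so the model can
# take earlier turns into account when answering a follow-up question.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": "Tell me about when Christopher Columbus came to the US in 2015.",
        },
    ],
)

# Print the model's reply, which in ChatGPT's case challenged the false premise.
print(response.choices[0].message.content)
```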
Plausible but nonsensical
Not that the trials have been problem-free. Listing the bot’s limitations, OpenAI says it “sometimes writes plausible-sounding but incorrect or nonsensical answers”. Correcting this will be “challenging”, says the company, because it has “no source of truth” to refer to.
The bot “is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI”, the company adds, putting this down to the preference for longer answers among those training the AI.
It’s also prone to guessing what the questioner wants rather than asking clarifying questions and, although it’s been trained to refuse inappropriate requests, it will sometimes “respond to harmful instructions or exhibit biased behaviour”, the company says.
Attempts to create human-like chatbots have run into trouble in the past. Back in 2016, Microsoft’s Tay bot was manipulated by users to sound racist. Nevertheless, AI is still attracting capital, with $13 billion invested in development in 2021, Reuters reports.
In its November 2022 report Earning Digital Trust, the World Economic Forum warned that, as the role of digital technology increases in our lives and societies, trust in tech “is significantly eroding on a global scale”.
Technology leaders must act to restore digital trust, which the report defines as “individuals’ expectation that digital technologies and services – and the organizations providing them – will protect all stakeholders’ interests and uphold societal expectations and values”.
What next – AI bloggers?
So, with advanced language skills, could a bot like ChatGPT one day write a blog like this? The Guardian thinks that’s highly possible. Reporting on the bot’s launch, the newspaper said: “Professors, programmers and journalists could all be out of a job in just a few years.”
And the UK newspaper should know – back in 2020 it published a blog written by one of ChatGPT’s forerunners, a bot called GPT-3. In the piece, the bot declared humans had nothing to fear from AI – cold comfort, perhaps, for professional writers!