
What Is ChatGPT And How Can You Use It?


OpenAI introduced a long-form question-answering AI called ChatGPT that answers complex questions conversationally. 

It’s a revolutionary technology because it’s trained to learn what humans mean when they ask a question. 

Many users are awed by its ability to provide human-quality responses, inspiring the feeling that it may eventually disrupt how humans interact with computers and change how information is retrieved. 

What Is ChatGPT?

ChatGPT is a large language model chatbot developed by OpenAI based on GPT-3.5. It has a remarkable ability to interact in conversational dialogue form and provide responses that can appear surprisingly human. 

Large language models perform the task of predicting the next word in a series of words. 
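To make that concrete, here is a minimal sketch of next-word prediction using the small, openly available GPT-2 model through the Hugging Face transformers library. This illustrates the general LLM task only; ChatGPT itself is not publicly downloadable.

    # Minimal sketch: ask an open model (GPT-2) for its most likely next words.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

    # Probability distribution over the single token that would come next.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top_probs, top_ids = next_token_probs.topk(5)

    for prob, token_id in zip(top_probs, top_ids):
        print(f"{tokenizer.decode(token_id)!r}  p={prob:.3f}")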

Reinforcement Learning from Human Feedback (RLHF) is an additional layer of training that uses human feedback to help ChatGPT learn to follow directions and generate responses that are satisfactory to humans. 

Who Built ChatGPT?

ChatGPT was created by San Francisco-based artificial intelligence company OpenAI. OpenAI Inc. is the non-profit parent company of the for-profit OpenAI LP. 

The CEO is Sam Altman, who was previously the president of Y Combinator. 

Microsoft is a partner and has invested $1 billion in the company. The two companies jointly developed the Azure AI Platform. 

OpenAI is also well known for DALL·E, a deep-learning model that generates images from text instructions called prompts. 

Large Language Models

ChatGPT is a large language model (LLM). LLMs are trained with massive amounts of data to accurately predict what word comes next in a sentence. 

Researchers discovered that increasing the amount of training data expanded what the language models could do. 

According to Stanford University: 

“GPT-3 has 175 billion parameters and was trained on 570 gigabytes of text. For comparison, its predecessor, GPT-2, was over 100 times smaller at 1.5 billion parameters. 

This increase in scale drastically changes the behavior of the model — GPT-3 is able to perform tasks it was not explicitly trained on, like translating sentences from English to French, with few to no training examples. 

This behavior was mostly absent in GPT-2. Furthermore, for some tasks, GPT-3 outperforms models that were explicitly trained to solve those tasks, although in other tasks it falls short.” 

LLMs predict the next word in a series of words, and then the sentences that follow – kind of like autocomplete, but at a mind-bending scale. 

This ability allows them to write paragraphs and entire pages of content. 
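As a rough sketch of how that works, the same predict-one-token step can be run in a loop to produce longer passages. Again, the open GPT-2 model stands in for ChatGPT here, purely for illustration.

    # Sketch: looping next-token prediction lets the model write whole passages.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    inputs = tokenizer("Large language models are", return_tensors="pt")

    # generate() repeats the predict-next-token step, here with sampling.
    output_ids = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))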

But LLMs are limited in that they don’t always understand exactly what a human wants. 

And that’s where ChatGPT improves on the state of the art, with the aforementioned Reinforcement Learning from Human Feedback (RLHF) training. 

How Was ChatGPT Trained?

GPT-3.5 was trained on massive amounts of code and text from the internet, including sources like Reddit discussions, to help ChatGPT learn dialogue and attain a human style of responding. 

ChatGPT was also trained using human feedback (a technique called Reinforcement Learning from Human Feedback) so that the AI learned what humans expected when they asked a question. Training the LLM this way is revolutionary because it goes beyond simply training it to predict the next word. 
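At a high level, this training follows a three-step recipe: supervised fine-tuning on human-written responses, reward-model training on human preference rankings, and reinforcement learning against that reward. The sketch below outlines those steps as placeholder stubs – a schematic of the pipeline described in the research discussed next, not working training code.

    # Schematic of the three RLHF steps. Every function is a placeholder
    # stub for illustration, not a real training API.

    def supervised_finetune(base_model, demonstrations):
        """Step 1: fine-tune on human-written example responses."""
        return base_model  # placeholder

    def train_reward_model(model, human_comparisons):
        """Step 2: learn to score outputs from human preference rankings."""
        return lambda prompt, response: 0.0  # placeholder scoring function

    def rl_finetune(model, reward_fn):
        """Step 3: optimize the model against the reward with RL."""
        return model  # placeholder

    base = "gpt-3.5"  # stand-in identifier, purely illustrative
    sft_model = supervised_finetune(base, demonstrations=[...])
    reward_fn = train_reward_model(sft_model, human_comparisons=[...])
    chat_model = rl_finetune(sft_model, reward_fn)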

A March 2022 research paper titled Training Language Models to Follow Instructions with Human Feedback explains why this is a breakthrough approach: 

“This work is motivated by our aim to increase the positive impact of large language models by training them to do what a given set of humans want them to do. 

By default, language models optimize the next word prediction objective, which is only a proxy for what we want these models to do. 

Our results indicate that our techniques hold promise for making language models more helpful, truthful, and harmless. 

Making language models bigger does not inherently make them better at following a user’s intent. 

For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. 

In other words, these models are not aligned with their users.” 

The engineers who built ChatGPT hired contractors (called labelers) to rate the outputs of the two systems, GPT-3 and the new InstructGPT (a “sibling model” of ChatGPT). 

Based on the ratings, the researchers came to the following conclusions: 

“Labelers significantly prefer InstructGPT outputs over outputs from GPT-3. 

InstructGPT models show improvements in truthfulness over GPT-3. 

InstructGPT shows small improvements in toxicity over GPT-3, but not bias.” 

The research paper concluded that the results for InstructGPT were positive, but it also noted that there was room for improvement. 

“Overall, our results indicate that fine-tuning large language models using human preferences significantly improves their behavior on a wide range of tasks, though much work remains to be done to improve their safety and reliability.” 

What sets ChatGPT apart from a simple chatbot is that it was specifically trained to understand the human intent in a question and provide helpful, truthful, and harmless answers. 

Because of that training, ChatGPT may challenge certain questions and discard parts of the question that don’t make sense. 

Another research paper related to ChatGPT shows how the researchers trained the AI to predict what humans preferred. 

The researchers noticed that the metrics used to rate the outputs of natural language processing AI resulted in machines that scored well on the metrics but didn’t align with what humans expected. 

The following is how the researchers explained the problem: 

“Many machine learning applications optimize simple metrics which are only rough proxies for what the designer intends. This can lead to problems, such as YouTube recommendations promoting click-bait.” 

Their solution was to create an AI that could output answers optimized for what humans preferred. 

To do that, they trained the AI using datasets of human comparisons between different answers so that the machine became better at predicting what humans judged to be satisfactory answers. 
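In sketch form, the usual way to turn those comparisons into a training signal is a pairwise loss that teaches a reward model to score the human-preferred answer above the rejected one. The scores below are hypothetical stand-ins for real reward-model outputs.

    # Sketch of the pairwise preference loss behind reward-model training:
    # the model should give a higher scalar score to the answer humans preferred.
    import torch
    import torch.nn.functional as F

    # Hypothetical reward-model scores for three comparison pairs.
    score_preferred = torch.tensor([1.8, 0.4, 2.1])
    score_rejected = torch.tensor([0.9, 0.7, 1.0])

    # Maximize the log-odds that the preferred answer outranks the rejected one.
    loss = -F.logsigmoid(score_preferred - score_rejected).mean()
    print(loss)  # lower loss = closer agreement with human judgments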

The February 2022 research paper, titled Learning to Summarize from Human Feedback, shares that training was done by summarizing Reddit posts, and that the approach was also tested on summarizing news. 

The researchers write: 

“In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences. 

We collect a large, high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary, and use that model as a reward function to fine-tune a summarization policy using reinforcement learning.”
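
A simple way to picture how a learned reward model steers output – a hedged sketch, not the paper's full reinforcement-learning setup – is best-of-n sampling: generate several candidate summaries and keep the one the reward model scores highest. Both functions below are hypothetical placeholders.

    # Sketch: best-of-n selection with a learned reward model.
    # Both functions are hypothetical stand-ins, not a real API.

    def generate_candidates(prompt: str, n: int) -> list[str]:
        """Stand-in for sampling n candidate summaries from a language model."""
        return [f"candidate summary {i} for: {prompt}" for i in range(n)]

    def reward_model(prompt: str, summary: str) -> float:
        """Stand-in for a model trained on human comparisons; returns a score."""
        return float(len(summary) % 7)  # dummy score, for illustration only

    prompt = "Summarize: a long Reddit post about learning to cook."
    candidates = generate_candidates(prompt, n=4)

    # Keep the candidate the reward model predicts humans would prefer.
    best = max(candidates, key=lambda s: reward_model(prompt, s))
    print(best)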