ChatGPT’s Model simply explained

What makes ChatGPT seem so smart and sensitive?

The real “brains” behind OpenAI’s GPT (Generative Pre-trained Transformer) models is what’s known as the “Transformer architecture”. This architecture lets GPT models rank and rate the context and dependencies between words in a sentence. Those rankings are the basic building blocks with which its ‘software’ is able to ‘compute’ aspects of human understanding, such as sentiment or mood, in a way that lets the computer assemble its response (or responses).

The Transformer architecture consists of an encoder (which receives the input, in the form of a query) and a decoder. Together, the encoder and decoder allow the model to weigh the relative importance of different parts of the input sequence when it processes a query.

It then uses what is called an “Attention Mechanism”, which, as the name suggests, lets the model “pay attention” to certain parts of the data over others, giving the parts it rates as more important more weight, and focusing on those parts of the input when predicting what to do next.

So it is the Attention Mechanism in the Transformer that lets the model focus on specific words or phrases that are more relevant to the sentiment or mood of the input.

In short, it is the Transformer architecture, the ‘brains’ and “neural network” of the operation, that uses these mechanisms to weigh the importance of different parts of the input sequence, and to rank, rate and process them.

How does GPT accurately detect mood and sentiment?

The transformer architecture is a neural network architecture that uses self-attention mechanisms to weigh the importance of different parts of the input sequence when processing it.
ChatGPT Processing
  • Step 1 – The encoder takes the input sequence and creates sets of “key” and “value” pairs, which are used by the decoder to make predictions.
  • Step 2 – The keys and values are created by passing the input through a series of layers, each composed of a multi-head self-attention mechanism and a fully connected feed-forward network.
  • Step 3 – The attention mechanism allows the model to attend to different parts of the input sequence at different positions, which helps the model understand the context and dependencies between words in the input.
  • Step 4 – The decoder takes the output of the encoder and uses it to make predictions about the next token in the sequence.

NOTE – The keys are used to determine which parts of the input the model should attend to when making predictions, while the values are used to determine what information should be used when making the prediction.

The attention mechanism weighs the importance of each key/value pair in the input when making the prediction.
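
To make the key/value weighting concrete, here is a minimal sketch of scaled dot-product attention in Python with NumPy. It is purely illustrative (not OpenAI’s code, and the shapes and random inputs are placeholders): queries are scored against keys, the scores become weights, and the weights decide how much of each value flows into the output.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    # Score every query against every key, then take a weighted mix of the values.
    d_k = keys.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)
    weights = softmax(scores)            # each row sums to 1: the "importance" of each key/value pair
    return weights @ values, weights

rng = np.random.default_rng(0)
seq_len, d_k = 5, 8                      # 5 tokens, 8-dimensional query/key/value vectors
q = rng.normal(size=(seq_len, d_k))
k = rng.normal(size=(seq_len, d_k))
v = rng.normal(size=(seq_len, d_k))

output, weights = attention(q, k, v)
print(weights.round(2))                  # the attention weights described above
```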

The transformer architecture uses attention mechanisms to process input sequences of varying lengths efficiently and in parallel, and to capture long-range dependencies that previous architectures such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) struggled with. This makes it particularly well-suited for tasks such as natural language processing, where input sequences can be very long.

This allows the model to effectively capture the context and dependencies between words in a sentence, which is essential for understanding the meaning of the input and is what gives it the appearance of generating human-like text.

But GPT models also use pre-training and fine-tuning (as well as the transformer architecture) to accurately detect mood and sentiment in complex natural-language input sequences.

Pre-Training vs. Fine-Tuning explained

The model is pre-trained on a large dataset of text, which allows it to learn general patterns in language and develop a good understanding of the context and meaning of words. The model learns to identify patterns and relationships in the text on its own, based on the structure of the data.

There is also the element of fine-tuning on a smaller dataset labeled with sentiment or mood, which allows the model to learn the specific patterns associated with those labels. This fine-tuning process is supervised learning: the model learns to classify text according to the labeled data.
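
As a rough illustration of the difference, here is a toy sketch in Python with PyTorch (not OpenAI’s actual training code; the model, dimensions and data are invented placeholders). The same underlying network is simply optimised against two different objectives: next-token prediction on unlabeled text during pre-training, and sentiment classification on labeled text during fine-tuning.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model, num_labels = 1000, 64, 2

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)    # pre-training head
        self.cls_head = nn.Linear(d_model, num_labels)   # fine-tuning head

    def forward(self, tokens):
        return self.encoder(self.embed(tokens))          # hidden states

model = TinyLM()
tokens = torch.randint(0, vocab_size, (8, 16))           # toy batch of token ids

# Pre-training: unsupervised next-token prediction on unlabeled text.
hidden = model(tokens)
lm_logits = model.lm_head(hidden[:, :-1])
lm_loss = F.cross_entropy(lm_logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))

# Fine-tuning: supervised classification on text labeled with sentiment.
labels = torch.randint(0, num_labels, (8,))              # e.g. 0 = negative, 1 = positive
cls_logits = model.cls_head(hidden[:, -1])               # use the final position's state
cls_loss = F.cross_entropy(cls_logits, labels)

print(float(lm_loss), float(cls_loss))
```

In practice the pre-trained weights are reused and trained further against the new, labeled objective, which is what lets the fine-tuned model benefit from everything it learned during pre-training.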

The pre-training data is usually sourced from the web, and it can be a combination of different types of text such as news articles, books, and other forms of written content.

Supervised vs. Unsupervised learning

The pre-training process for GPT models is done using a technique called unsupervised learning. This means that the models are trained on a large dataset of text without any human-provided labels or annotations.

The fine-tuning process, by contrast, is supervised learning, as the model learns to classify text according to the labeled data.

The data is usually cleaned and preprocessed by humans to remove duplicates, irrelevant content and other noise, but the training itself is not done by humans.
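
As a simple illustration of that cleaning step, here is a minimal sketch in plain Python (the documents, patterns and rules are invented examples; real pre-processing pipelines are far more elaborate) that strips obvious noise and removes exact duplicates before training.

```python
import re

raw_documents = [
    "Breaking news: markets rally today.",
    "Breaking news: markets rally today.",    # exact duplicate
    "<div>Buy now!!! Click here</div>",       # HTML / spam noise
    "A well-written article about transformers and attention.",
]

def clean(doc: str) -> str:
    doc = re.sub(r"<[^>]+>", " ", doc)        # strip HTML tags
    return re.sub(r"\s+", " ", doc).strip()   # normalise whitespace

seen, corpus = set(), []
for doc in map(clean, raw_documents):
    if doc and doc.lower() not in seen:       # drop exact duplicates
        seen.add(doc.lower())
        corpus.append(doc)

print(corpus)
```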

How does GPT’s key/value weighting method differ from other AIs’?

So the key/value weighting method (see above) used in OpenAI’s GPT models is similar to the attention mechanisms used in other architectures, such as IBM’s Watson, but there are some differences in the way the method is implemented.

The Transformer architecture creates a set of “keys” and “values” for each position in the input sequence. What differs is how the keys are used to determine which parts of the input the model should attend to when making predictions, and how the values are used to determine what information feeds into the prediction.

In GPT models, the keys and values are created by passing the input through a series of layers, each composed of a multi-head self-attention mechanism and a fully connected feed-forward network. What does this mean? Well, the multi-head self-attention mechanism lets the model attend to different parts of the input sequence at different positions, which in turn helps the model understand the context and dependencies between words in the input.
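
To picture what “a series of layers, each composed of a multi-head self-attention mechanism and a fully connected feed-forward network” looks like, here is a minimal sketch in Python with PyTorch. The dimensions, layer count and normalisation choices are illustrative assumptions, not GPT’s actual configuration.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One layer: multi-head self-attention followed by a feed-forward network."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # Each attention head can focus on different positions in the sequence.
        attended, _ = self.attn(x, x, x)
        x = self.norm1(x + attended)
        return self.norm2(x + self.ff(x))

x = torch.randn(2, 10, 64)                        # (batch, sequence length, model width)
stack = nn.Sequential(*[TransformerBlock() for _ in range(2)])
print(stack(x).shape)                             # torch.Size([2, 10, 64])
```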

Other architectures, such as IBM’s Watson, use similar attention mechanisms but implement them differently: Watson, for example, uses an LSTM (Long Short-Term Memory) network with an attention mechanism, which is a type of Recurrent Neural Network (RNN) architecture. RNNs are good at handling sequential data like language, but they struggle with long sequences; the transformer architecture overcomes this limitation through its self-attention mechanism.

So, in essence, the key/value weighting method used in GPT models is similar to the attention mechanisms used in other architectures such as IBM’s Watson, but it differs in how it uses these mechanisms to generate the appearance of ‘intelligence’.

The key/value weighting method in GPT models allows the model to weigh the importance of different parts of the input sequence when processing it, which helps the model understand the context and dependencies between words in the input.

The table below compares the two (roughly)

ChatGPT and Watson compared
