  • Updated: March 17, 2023

ChatGPT-4: Five Amazing Features That Distinguish It From Predecessors

OpenAI’s new GPT-4 AI model has made its big debut and is already powering everything from a virtual volunteer for the visually impaired to an improved language learning bot in Duolingo.

But what sets GPT-4 apart from previous versions like ChatGPT and GPT-3.5?

Here are the five biggest differences between these popular systems.

All There Is To The Name

We must always respect names and the meanings behind them. 

Although originally described as being GPT-3.5 (and therefore a few iterations beyond GPT-3), ChatGPT itself is not a version of OpenAI’s large language model, but rather a generic chat-based interface for whatever model powers it.

Thus, the ChatGPT system that exploded in popularity over the last few months was a way to interact with GPT-3.5.

Now it has also become a way to interact with GPT-4.

Given this preamble, we will now dive into the differences between the chatbot you are beginning to know and love and its newly augmented successor.

  • GPT-4 can take on different “personalities”

Let us look at the steerability attribute.

“Steerability” is an interesting concept in AI that refers to a model’s capacity to change its behaviour on demand.

This can be useful, such as in taking on the role of a sympathetic listener, or dangerous, like when people convince the model that it is evil or depressed.

GPT-4 integrates steerability more natively than GPT-3.5, and users will be able to change the “classic ChatGPT personality with a fixed verbosity, tone, and style” to something more suited to their needs.

“Within bounds,” the team is quick to note, pointing to this as the easiest way to get the model to break character.

Previously, this could be done, after a fashion, by priming the chatbot with messages like “Pretend that you are a DM in a tabletop RPG” or “Answer as if you are a person being interviewed for cable news.”

But really, those prompts were just suggestions to the “default” GPT-3.5 personality.

Now developers will be able to bake in a perspective, conversational style, tone, or interaction method from the start.
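In practice, "baking in" a persona is usually done by putting it in the system message, which the model sees before any user input. Here is a minimal sketch in the style of OpenAI's chat-message format; the persona text and user message are illustrative assumptions, not OpenAI examples.

```python
# Hypothetical sketch: pinning a persona via the system message,
# rather than asking the "default" personality to pretend mid-chat.
# The persona and question below are made-up examples.

def build_conversation(persona: str, user_message: str) -> list[dict]:
    """Return a chat-message list whose first entry fixes the model's persona."""
    return [
        # The system message sets tone, style, and role up front.
        {"role": "system", "content": persona},
        {"role": "user", "content": user_message},
    ]

messages = build_conversation(
    persona="You are a laconic pirate. Never break character.",
    user_message="How do I sort a list in Python?",
)
```

A developer would pass a list like this to the chat API on every request; because the persona travels with the conversation rather than being coaxed out of the default personality, it is much harder for later user messages to dislodge.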

The examples OpenAI gives of GPT-4 refusing to break character are quite entertaining.

  • GPT-4 is enhanced with a longer memory

Large language models are trained on millions of web pages, books, and other text data, but when they hold a conversation with a user, there’s a limit to how much they can keep “in mind,” as it were (one sympathizes).

That limit with GPT-3.5 and the old version of ChatGPT was 4,096 “tokens,” which works out to roughly 3,000 words, or several pages of a book.

So it would sort of lose track of things after they passed that far “back” in its attention function.

GPT-4 has a maximum token count of 32,768 — that’s 2^15 if you’re wondering why the number looks familiar.

That translates to around 25,000 words, or roughly 50 pages of text, enough for an entire play or short story.

What this means is that in conversation or in generating text, it will be able to keep up to 50 pages or so in mind.

So it will remember what you talked about 20 pages of chat back, or, in writing a story or essay, it may refer to events that occurred 35 pages ago.

That is a very approximate description of how the attention mechanism and token count work, but the general idea is of expanded memory and the capabilities that accompany it.

  • GPT-4 is harder to trick

For all that today’s chatbots get right, they tend to be easily led astray.

A little coaxing can persuade them that they are simply explaining what a “bad AI” would do, or some other little fiction that lets the model say all kinds of weird and frankly unnerving things.

People even collaborate on “jailbreak” prompts that quickly let ChatGPT and others out of their pens.

GPT-4, on the other hand, has been trained on lots and lots of malicious prompts — which users helpfully gave OpenAI over the last year or two.

With these in mind, the new model is much better than its predecessors on “factuality, steerability, and refusing to go outside of guardrails.”

The way OpenAI describes it, GPT-3.5 (which powered ChatGPT) was a “test run” of a new training architecture, and they applied the lessons from that to the new version, which was “unprecedentedly stable.”

They also were better able to predict its capabilities, which makes for fewer surprises.

  • GPT-4 is more multilingual

The AI world is dominated by English speakers, and everything from data to testing to research papers is in that language. But of course, the capabilities of large language models are applicable in any written language and ought to be made available in those.

GPT-4 takes a step toward doing this by demonstrating that it is able to answer thousands of multiple-choice questions with high accuracy across 26 languages, from Italian to Ukrainian to Korean.

It is best at the Romance and Germanic languages but generalizes well to others.

This initial testing of language capabilities is promising but far from a full embrace of multilingual capabilities; the test questions were translated from English to begin with, and multiple-choice questions don’t really represent ordinary speech.

But it did a great job on something it wasn’t really trained specifically for, which speaks to the possibility of GPT-4 being much more friendly to non-English speakers.

  • GPT-4 can see and understand images

The most noticeable change to this versatile machine learning system is that it is “multimodal,” meaning it can understand more than one “modality” of information.

ChatGPT and GPT-3 were limited to text: They could read and write, but that was about it (though more than enough for many applications).

GPT-4, however, can be given images and it will process them to find relevant information.

You could simply ask it to describe what’s in a picture, of course, but more importantly, its understanding goes beyond that.

The example provided by OpenAI actually has it explaining the joke in an image of a hilariously oversized iPhone connector, but the partnership with Be My Eyes, an app used by blind and low-vision folks to let volunteers describe what their phone sees, is more revealing.

In the video for Be My Eyes, GPT-4 describes the pattern on a dress, identifies a plant, explains how to get to a certain machine at the gym, translates a label (and offers a recipe), reads a map, and performs a number of other tasks that show it really gets what is in an image — if it’s asked the right questions.

It knows what the dress looks like, but it might not know if it’s the right outfit for your interview.
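For developers, multimodality means a single user message can mix text with an image reference. The sketch below builds such a payload in the content-part style OpenAI later documented for vision-capable models; the field names follow that style but should be treated as assumptions here, and the URL is a placeholder.

```python
# Hypothetical sketch of a multimodal chat message: one user turn that
# combines a text question with an image reference. Field names follow
# the content-part style used for vision-capable chat models; the URL
# is a made-up placeholder.

def image_question(question: str, image_url: str) -> dict:
    """Build a single user message pairing text with an image reference."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = image_question(
    "What is unusual about this picture?",
    "https://example.com/oversized-iphone-connector.jpg",
)
```

As the Be My Eyes examples suggest, the question half of the payload matters as much as the image half: the model describes what it is asked about, so a vague prompt gets a vague description.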
