GPT-4 Turbo is the biggest update since ChatGPT’s launch

Key Takeaways:

– OpenAI has unveiled updates to its large language models (LLMs), including the release of GPT-4 Turbo.
– GPT-4 Turbo is more powerful and cheaper than its predecessors and has been trained on information up until April 2023.
– It has a significantly larger context window of 128,000 tokens, allowing for more in-depth conversations.
– The larger context window helps LLMs stay on topic and reduces the risk of producing unhinged responses.
– GPT-4 Turbo is cheaper to run for developers, with reduced costs for input and output tokens.
– It follows instructions more carefully, can respond in requested formats such as XML or JSON, and supports images and text-to-speech.
– OpenAI has introduced GPTs, custom versions of ChatGPT that can be made for specific purposes without coding knowledge.
– GPTs are available for ChatGPT Plus subscribers and enterprise users.
– OpenAI will take legal responsibility if customers are sued for copyright infringement.
– GPT-4 Turbo has a dark side, with the potential for drawbacks similar to other LLMs, but on a larger scale.

OpenAI has just unveiled the latest updates to its large language models (LLMs) during its first developer conference, and the most notable improvement is the release of GPT-4 Turbo, which is currently in preview. GPT-4 Turbo comes as an update to the existing GPT-4, bringing with it a greatly increased context window and access to much newer knowledge. Here’s everything you need to know about GPT-4 Turbo.

OpenAI claims that the AI model will be more powerful while simultaneously being cheaper than its predecessors. Unlike the previous versions, it’s been trained on information dating up to April 2023. That’s a hefty update on its own, since GPT-4’s knowledge previously maxed out in September 2021. I just tested this myself, and indeed, using GPT-4 allows ChatGPT to draw on events that happened up until April 2023, so that update is already live.

GPT-4 Turbo has a significantly larger context window than the previous versions. The context window is essentially the amount of text the model takes into consideration before it generates any reply. GPT-4 Turbo now has a context window of 128,000 tokens (a token being the unit of text or code that LLMs read), which, as OpenAI reveals in its blog post, is the equivalent of around 300 pages of text.

That’s an entire novel that you could potentially feed to ChatGPT over the course of a single conversation, and a much greater context window than the previous versions had (8,000 and 32,000 tokens).
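
To put that number in context, you can count a document’s tokens locally before sending it. Below is a minimal sketch that assumes OpenAI’s open-source tiktoken library and its cl100k_base encoding (the tokenizer used by the GPT-4 family); the file name is just a placeholder.

```python
# Minimal sketch: check whether a document fits in GPT-4 Turbo's
# 128,000-token context window, using tiktoken (pip install tiktoken).
import tiktoken

CONTEXT_WINDOW = 128_000  # tokens, per OpenAI's announcement

# cl100k_base is the encoding used by the GPT-4 family of models.
encoding = tiktoken.get_encoding("cl100k_base")

with open("novel.txt", encoding="utf-8") as f:  # placeholder file name
    text = f.read()

token_count = len(encoding.encode(text))
print(f"Document length: {token_count:,} tokens")

if token_count <= CONTEXT_WINDOW:
    print("This would fit in a single GPT-4 Turbo prompt.")
else:
    print("Too long; it would need to be split or summarized first.")
```

In practice you would also leave headroom for the system prompt and the model’s reply, since those count against the same window.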

Context windows are important for LLMs because they help them stay on topic. If you interact with large language models, you’ll find that they may go off topic if the conversation goes on for too long. This can produce some pretty unhinged and unnerving responses, such as that time when Bing Chat told us that it wanted to be human. GPT-4 Turbo, if all goes well, should keep the insanity at bay for a much longer time than the current model.

GPT-4 Turbo is also going to be cheaper to run for developers, with input priced at $0.01 per 1,000 tokens (1,000 tokens works out to roughly 750 words) and output priced at $0.03 per 1,000 tokens. OpenAI says that input tokens are now three times cheaper, and output tokens twice as cheap, compared to GPT-4.
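
As a back-of-the-envelope illustration of those rates (not official billing logic), here is the arithmetic for a single request, written as a small Python helper:

```python
# Rough cost estimate at the announced GPT-4 Turbo rates.
INPUT_RATE = 0.01 / 1_000   # dollars per input token
OUTPUT_RATE = 0.03 / 1_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Approximate cost in dollars for one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: feeding in a ~300-page document (about 128,000 tokens)
# and getting back a ~1,000-token summary.
print(f"${estimate_cost(128_000, 1_000):.2f}")  # roughly $1.31
```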

The company also says that GPT-4 Turbo does a better job of following instructions carefully, and it can be told to respond in a requested format, such as XML or valid JSON. GPT-4 Turbo will also support images and text-to-speech, and it still offers DALL-E 3 integration.
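
For developers, the formatting claim maps onto the API roughly as follows. This is a minimal sketch assuming the v1 OpenAI Python SDK, the gpt-4-1106-preview identifier used for the GPT-4 Turbo preview, and the JSON response mode announced at the conference; treat the details as illustrative, since the preview may change.

```python
# Minimal sketch: asking GPT-4 Turbo for a reply in valid JSON
# via the OpenAI Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview identifier
    response_format={"type": "json_object"},  # constrain output to valid JSON
    messages=[
        {"role": "system", "content": "You are an assistant that answers in JSON."},
        {"role": "user", "content": "List three uses for a 128,000-token context window."},
    ],
)

print(response.choices[0].message.content)
```

Note that JSON mode only guarantees syntactically valid JSON; the prompt still has to describe the fields you want back.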

This wasn’t the only big reveal for OpenAI, which also introduced GPTs, custom versions of ChatGPT that anyone can make for their own specific purpose with no knowledge of coding. These GPTs can be made for personal or company use, but can also be distributed to others. OpenAI says that GPTs are available today for ChatGPT Plus subscribers and enterprise users.

Lastly, in light of constant copyright concerns, OpenAI joins Google and Microsoft in saying that it will take legal responsibility if its customers are sued for copyright infringement.

With the enormous context window, the new copyright shield, and an improved ability to follow instructions, GPT-4 Turbo might turn out to be both a blessing and a curse. ChatGPT is fairly good at not doing things it shouldn’t do, but even still, it has a dark side. This new version, while infinitely more capable, may also come with the same drawbacks as other LLMs, except this time, it’ll be on steroids.
