We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. – OpenAI
GPT-4 vs GPT-3.5 – Differences
While the difference between GPT-3.5 and GPT-4 in everyday use may seem subtle, OpenAI assures us:
GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5. – OpenAI
Multimodality
The new model is multimodal, i.e. it can handle more than one type of data. At launch, GPT-4 accepts both image and text inputs and emits text outputs, whereas GPT-3.5 worked with text only. In my opinion, multimodality is also responsible for the increase in the number of parameters.
To make multimodality easier to understand, here is what it looked like in practice: a user sketched on a piece of paper what he wanted a website to look like, took a picture of it, sent the photo via Discord to a bot connected to GPT-4 through the API, and in response received finished, working code for that website.
Still, as far as training data is concerned, GPT-4's knowledge is limited to events up to September 2021. So we are not dealing with a model that has significantly more knowledge, but with one that operates on that knowledge better.
ChatGPT Responses
GPT-4 is able to process much longer prompts (the queries you direct to it) and generate longer responses. Unfortunately, it still “hallucinates” – i.e. it can make things up – which, according to SEO and marketing specialists, is one of its biggest drawbacks.
While still a real issue, GPT-4 significantly reduces hallucinations relative to previous models (which have themselves been improving with each iteration). GPT-4 scores 40% higher than our latest GPT-3.5 on our internal adversarial factuality evaluations. – OpenAI
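For developers, the practical difference between the two models often comes down to the model name passed to OpenAI's chat API. The sketch below is only illustrative: it builds a request body in OpenAI's published chat message format without making any network call, so no API key is needed; the prompt text is made up for the example.

```python
import json

def build_chat_request(model: str, user_prompt: str,
                       system_prompt: str = "You are a helpful assistant.") -> dict:
    """Build a chat-completion request body in OpenAI's chat message format."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

# Switching between models is just a matter of changing the "model" field.
gpt35_request = build_chat_request("gpt-3.5-turbo", "Summarise this article in one sentence.")
gpt4_request = build_chat_request("gpt-4", "Summarise this article in one sentence.")

print(json.dumps(gpt4_request, indent=2))
```

In real use, this payload would be sent to the chat completions endpoint; everything else about the request stays the same when you move from GPT-3.5 to GPT-4.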
GPT-4 and GPT-3.5 were tested both on academic and professional exams designed for humans – for which the models were not specifically trained – and on benchmarks designed for machine learning models.
OpenAI presents the test results in a graph:
As you can see, GPT-4 performed significantly better than its predecessor in many categories. OpenAI makes the exact results available in its technical report.
GPT-4’s capabilities in various languages were also tested. These are the preliminary conclusions:
Access to GPT-4
Access to GPT-4 is not open. OpenAI has opened sign-ups for a waitlist, with priority given to developers who can contribute the most to the model's development and to researchers working on the social impact of AI.
Subscribers to ChatGPT Plus – the paid version – can already use GPT-4.
OpenAI has also open-sourced OpenAI Evals, a framework that allows any user to report model shortcomings and help guide further improvements.
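To give a sense of how reporting shortcomings works in practice, here is a minimal sketch of an eval test case, assuming the JSONL sample format used by the open-source OpenAI Evals repository (a list of chat messages under "input" and an expected answer under "ideal"). The file name and the question are made up for illustration.

```python
import json

# Hypothetical factual test case; the "input"/"ideal" shape follows the
# JSONL sample format used by the open-source OpenAI Evals repository.
samples = [
    {
        "input": [
            {"role": "system", "content": "Answer with a single word."},
            {"role": "user", "content": "What is the capital of France?"},
        ],
        "ideal": "Paris",
    },
]

# Write one JSON object per line, as JSONL requires.
with open("my_eval_samples.jsonl", "w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

# Read the file back to verify each line parses as valid JSON.
with open("my_eval_samples.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]

print(loaded[0]["ideal"])
```

A file like this would then be registered with the Evals framework, which runs the model against each sample and scores its answers against the "ideal" values.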
GPT-4 Materials
GPT-4 is a very fresh topic. I recommend watching the livestream of the GPT-4 presentation, as well as signing up for the waitlist for access.
Summary
Like its predecessor, GPT-4 is a constantly evolving technology, so it's worth following related news to stay up to date!
I encourage you to keep an eye on our blog, where new information about technologies related to SEO and Google Ads, including, of course, AI, is constantly appearing.