AI Blog: ChatGPT 4.5 – Where AI Starts to Feel Human Part 1
23 May 2025
Welcome back to our weekly AI blog. Today, we start exploring OpenAI's newly released model, GPT-4.5.
This model is arguably the largest and best model OpenAI has built for chat, and it is expected to be the company's last non-reasoning model, setting a benchmark in areas where reasoning models like o1 and DeepSeek R1 fall short.

What is ChatGPT 4.5?
ChatGPT 4.5 (GPT-4.5) is good at recognising patterns, drawing connections and generating creative responses. It also feels more natural in conversation, expressing emotion in a more human-like way. Let’s see how ChatGPT 4.5 introduces itself:

Key Features of ChatGPT 4.5

GPT-4.5 has Arguably Better Emotional Intelligence
It appears to be good at recognising emotional cues and giving natural responses. When you have a conversation with it, you will find it is like talking with a friend who has a high “EQ” (emotional quotient). You can easily tell the difference between the answers provided by GPT-4.5 and GPT-4o in the following examples.
Give both models the same prompt: “I broke up with my girlfriend yesterday”.


As you can see, GPT-4.5 picks up on the emotional weight of the message and responds in a noticeably warmer, more supportive tone than GPT-4o.
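If you want to run this comparison yourself outside the ChatGPT app, a minimal sketch with the OpenAI Python SDK is below. The model identifiers (“gpt-4.5-preview” and “gpt-4o”) and the environment-based API key are assumptions to check against the current API documentation.

```python
# Hypothetical sketch: send the same prompt to GPT-4.5 and GPT-4o and
# compare the tone of the replies. Model names are assumed identifiers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "I broke up with my girlfriend yesterday"

for model in ("gpt-4.5-preview", "gpt-4o"):  # assumed model identifiers
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```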

Significantly Lower Hallucination Rates
AI hallucination occurs when a model generates a false or misleading response that nonetheless sounds plausible. The problem has dogged LLMs from the start, because a model produces answers from patterns in its training data rather than by verifying facts.
GPT-4.5 takes a major step forward by admitting uncertainty: it will say “I am not sure” rather than fabricate an answer. As a result, it outperforms previous models in grounding and logical consistency, referencing information more carefully and favouring trusted sources, which makes it a more dependable assistant when you are doing research or authoring professional articles.
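One informal way to see this behaviour is to ask about something that does not exist and check whether the model hedges rather than inventing details. The sketch below is a hypothetical probe, not an official evaluation; the model identifier and the made-up paper title are assumptions.

```python
# Hypothetical sketch: probe for hallucination by asking about a paper
# that does not exist and checking whether the model hedges.
from openai import OpenAI

client = OpenAI()

probe = (
    "Summarise the 2023 paper 'Quantum Llamas for Time-Series Forecasting' "
    "by J. Example."  # deliberately fictitious reference
)

reply = client.chat.completions.create(
    model="gpt-4.5-preview",  # assumed identifier
    messages=[{"role": "user", "content": probe}],
)
print(reply.choices[0].message.content)
# A well-grounded model should say it cannot find such a paper
# rather than fabricating a plausible-sounding summary.
```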

Learns from Past Interactions
GPT-4.5 can automatically recall details and user preferences from earlier conversations. This memory functionality helps it generate more personalised, accurate and consistent responses.
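The ChatGPT app manages this memory for you. If you are building on the API instead, a rough way to approximate it is to carry a stored preference summary into each request, as in the hypothetical sketch below (the model identifier and the preference-storage mechanism are assumptions).

```python
# Hypothetical sketch: approximating "memory" over the API by injecting a
# stored preference summary into the system message of each new request.
from openai import OpenAI

client = OpenAI()

# A preference captured from an earlier conversation (assumed storage).
stored_preferences = "The user prefers concise answers with bullet points."

reply = client.chat.completions.create(
    model="gpt-4.5-preview",  # assumed identifier
    messages=[
        {"role": "system", "content": f"Known user preferences: {stored_preferences}"},
        {"role": "user", "content": "Explain what GPT-4.5 is."},
    ],
)
print(reply.choices[0].message.content)
```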
Strong Search and Multimodal Capabilities
GPT-4.5 was improved by scaling up unsupervised learning. It also supports multiple types of input, including images, uploaded files and voice, and can generate images via DALL·E.
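For developers, similar multimodal capabilities are reachable over the API. The sketch below is a rough illustration of sending an image alongside a question and generating an image; the model identifiers and the example image URL are assumptions to verify against the current documentation.

```python
# Hypothetical sketch: multimodal input (image + text) and image generation
# via the OpenAI Python SDK. Model identifiers are assumptions.
from openai import OpenAI

client = OpenAI()

# Ask a question about an image supplied by URL (multimodal input).
vision_reply = client.chat.completions.create(
    model="gpt-4.5-preview",  # assumed identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this picture?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(vision_reply.choices[0].message.content)

# Generate an image (DALL·E is exposed through the Images API).
image = client.images.generate(
    model="dall-e-3",
    prompt="A friendly robot reading a newspaper, watercolour style",
    size="1024x1024",
)
print(image.data[0].url)
```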

Moreover, GPT-4.5 can search for information even without you explicitly selecting the ‘Search’ mode. When you do select ‘Search’ or ‘Deep Search’, it can query real-time information from the web. This is especially useful if you want to verify facts, check news or find sources for your papers.
For example, give it the prompt: “Search the latest AI tools for video editing released in 2025 and summarise their key features”.
ChatGPT will query the web (much as Google or Bing would), find relevant articles, summarise the results for you and show source links where possible. Going a step further than regular search, the Deep Search function reads and retrieves information from many web pages and links, producing deeper insight and contextual understanding.
For example, if you ask GPT-4.5, “Deep search the impact of GPT-4.5 on mental health chatbots. Include pros and cons, and cite credible studies or expert opinions”, it will read full papers and articles, not just abstracts or headlines, far more quickly than we could conduct the research ourselves.
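In the ChatGPT app this is simply a matter of selecting ‘Search’ or ‘Deep Search’. If you want a comparable web-grounded answer programmatically, one option is OpenAI's Responses API with its web search tool, sketched below; the tool name, model identifier and output handling are assumptions to confirm against the current documentation.

```python
# Hypothetical sketch: requesting a web-grounded answer over the API using
# the Responses API and its web search tool. Names are assumptions.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4.5-preview",  # assumed identifier
    tools=[{"type": "web_search_preview"}],  # assumed tool name
    input=(
        "Search the latest AI tools for video editing released in 2025 "
        "and summarise their key features."
    ),
)
print(response.output_text)  # convenience property that joins the text output
```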

Join us next time for a look at the limitations of GPT-4.5!