ChatGPT Limitations: The Long-Tail Case

In this post, I would like to add one observation about Google versus ChatGPT. We must understand that ChatGPT is a language model, not a knowledge model or a conceptual model: it captures relationships between words, not relationships between entities or between concepts.

ChatGPT is a generative model that incrementally outputs the most likely next word given the sequence of words so far, starting from the prompt (the question) provided by the user. As a result, ChatGPT's answers are always very clear texts, well written and well structured, but there is no guarantee that they are conceptually correct.
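This word-by-word process can be illustrated with a toy sketch. The bigram table and word choices below are entirely hypothetical stand-ins for the billions of learned parameters in a real model; the point is only the mechanism: each step picks the most probable next word, so the output is fluent by construction, whether or not it is factually correct.

```python
# Hypothetical bigram table standing in for a trained language model:
# for each word, the probability of each possible next word.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
    "dog": {"ran": 1.0},
    "ran": {"away": 1.0},
}

def most_likely_next(word):
    """Return the single most probable next word, or None at a dead end."""
    choices = BIGRAMS.get(word)
    if not choices:
        return None
    return max(choices, key=choices.get)

def generate(prompt, max_words=5):
    """Greedily extend the prompt one word at a time.

    Note that fluency (probability), not factual truth, drives every step --
    which is exactly why such a model can produce well-written wrong answers.
    """
    words = prompt.split()
    for _ in range(max_words):
        nxt = most_likely_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"
```

Real models use far richer context than the previous word alone, but the generation loop has the same shape: sample or pick the next token, append it, repeat.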

To see the shortcomings of this model immediately, it is enough to focus on the Long Tail. Any question you ask about an obscure subject is likely to get an incorrect answer. In some cases the model admits its ignorance and declines to respond, but in many other cases it produces an entirely wrong answer. You can run your own experiments by asking about subjects for which little material is available.

In this Long Tail case, Google Search still gives far better answers than ChatGPT, because the search engine returns the most relevant existing pages rather than "inventing answers" from a probabilistic language model.

About Hayim Makabee

Veteran software developer, enthusiastic programmer, author of a book on Object-Oriented Programming, co-founder and CEO at KashKlik, an innovative Influencer Marketing platform.
This entry was posted in Data Science, Machine Learning. Bookmark the permalink.
