ChatGPT is a Glorified Summariser - Here's Why

Unless you've been living under a rock, you'll have noticed people all over Twitter and other social media talking about ChatGPT, the new AI chatbot OpenAI released a few weeks ago. In the short time it's been out, millions of people have used it! (You can play with it here.)

People are raving about it. You can create a complex meal plan in seconds - something so useful it makes you wonder whether nutritionists will become the new travel agents. You can also do amusing things like have it write a song about Elon Musk in the style of Bob Dylan. And there are a million other useful and not-so-useful things you can do with it; just spend a few minutes browsing the ChatGPT subreddit.

While I 100% agree that ChatGPT is useful and can act as a kind of virtual assistant (I would happily pay up to $50 a month for it), I have an issue with the people overhyping it and calling it "scary" or "close to AGI" (Artificial General Intelligence) - Elon Musk included.

There's one simple reason ChatGPT is far from AGI: it's only good at answering questions.

What's the problem with that, you might ask? Don't the most difficult problems in life come down to being able to answer difficult questions? Well, not really.

Anyone who has solved difficult problems will at some point have come to the realisation that the usefulness of an answer is limited by the quality of the question. In most cases, identifying the right question to ask is 50% or more of the problem.

Think about the example I used above: I could use ChatGPT to develop a complex meal plan, but the plan is only useful if I articulate every constraint and objective I have. Maybe I'm pescatarian, or I'm allergic to nuts. Am I trying to build muscle or burn fat? The meal plan it generates is useless unless I remember to tell ChatGPT all of my constraints and goals. Importantly, the less experienced I am in a problem space, the less aware I am of the constraints I should be thinking about in the first place. A professional dietitian or nutritionist, on the other hand, will know what questions to ask me. Similarly, a doctor will ask you the relevant questions before diagnosing you. ChatGPT is likely to misdiagnose you or give you the wrong answer, because you haven't thought about every relevant constraint when framing the question.
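To make that concrete, here's a minimal sketch in Python. The build_meal_plan_prompt helper and the sample constraints are hypothetical - nothing ChatGPT actually exposes - but they show why the answer is bounded by whatever you remember to put in the prompt:

```python
def build_meal_plan_prompt(goal, constraints):
    """Assemble a meal-plan request from whatever the user remembered to state.

    The model never sees a constraint that isn't in this list: if I forget
    my nut allergy here, no amount of model quality can put it back.
    """
    lines = [f"Create a 7-day meal plan. My goal: {goal}."]
    lines += [f"Constraint: {c}." for c in constraints]
    return " ".join(lines)

# A novice's prompt: missing the constraints they didn't know mattered.
print(build_meal_plan_prompt("build muscle", []))

# The same question, framed the way an expert would frame it.
print(build_meal_plan_prompt(
    "build muscle",
    ["pescatarian", "nut allergy", "roughly 2,800 kcal/day"],
))
```

The second prompt only exists because an expert - not the model - knew which constraints to surface.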

You don't know what you don't know.

Think about that for a minute.

If we don't know what we don't know, we will never know whether we're asking the right questions and including the relevant constraints. Critically, ChatGPT also doesn't know what we don't know. That's why, in my opinion, it's a glorified summariser: it takes the input we've provided, processes it and spits out an answer. This is similar to what Google does, except even Google has more context, because it understands your search history, your location, your social graph and so on. Again, ChatGPT is still useful because it saves time - it saves me from doing a Google search, consolidating the information and then summarising the key points from my search results. But let's call it what it is: it isn't AGI, it isn't scary - it's really an automated searcher and summariser. A very smooth one, I have to say.
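If you want that claim spelled out in code, here's a caricature of the workflow ChatGPT collapses for me. To be clear, this is not how ChatGPT works internally - it's a language model, not a search pipeline - and every function below is a hypothetical stand-in:

```python
def search(question):
    # Stand-in for retrieving candidate passages for a question.
    corpus = {
        "what is agi": ["AGI is hypothetical human-level AI.",
                        "No system today is AGI."],
    }
    return corpus.get(question.lower().rstrip("?"), ["No results found."])

def consolidate(passages):
    # Stand-in for merging and de-duplicating the retrieved passages.
    return " ".join(dict.fromkeys(passages))

def summarise(notes):
    # Stand-in for compressing the consolidated notes into an answer.
    return notes if len(notes) < 120 else notes[:117] + "..."

def glorified_summariser(question):
    # Input in, answer out; no step ever questions the question itself.
    return summarise(consolidate(search(question)))

print(glorified_summariser("What is AGI?"))
```

Notice what's missing: nothing in that pipeline ever pushes back on the question.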

When will it become scary? When it starts to help us figure out what we don't know. When it prompts us for further constraints or poses intelligent questions back to us. This is what the best doctors or specialists in the world would do if you went to them with a problem. They'd know exactly what questions to ask and know where to dig deeper. It's a back-and-forth conversation where they're trying to extract the constraints and edge cases so they can come up with the optimal solution. Perhaps this is what V2 of ChatGPT will be like. When an AI can do this, it will start to replace people completely. That would be scary.
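Here's a rough sketch of the shape that back-and-forth could take. The clarifying questions and the answer_with function are hypothetical placeholders, not a real system - the point is the pattern: elicit constraints first, answer second:

```python
CLARIFYING_QUESTIONS = [
    "Any dietary restrictions (e.g. pescatarian, allergies)?",
    "What's the goal: building muscle or burning fat?",
    "Roughly how many calories a day are you aiming for?",
]

def answer_with(problem, constraints):
    # Stand-in for generating a solution once the constraints are known.
    return f"Plan for '{problem}' given: {'; '.join(constraints)}"

def consult(problem):
    # The expert pattern: extract the constraints before answering.
    constraints = []
    for question in CLARIFYING_QUESTIONS:
        reply = input(question + " ")
        if reply.strip():
            constraints.append(reply.strip())
    return answer_with(problem, constraints)

if __name__ == "__main__":
    print(consult("a weekly meal plan"))
```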
