ChatGPT: A Reflection on OpenAI’s New Artificial Intelligence

There is a lot of talk these days about ChatGPT, the AI engine made freely available by OpenAI. For those unfamiliar: OpenAI has been developing artificial intelligence for various purposes for years. What makes this software innovative is its ability to generate content and correlate information drawn from the vast library of texts on which the engine has been “trained”, producing responses that are often hard to distinguish from those of a human being.

What ChatGPT Can Do

Where a search engine like Google limits itself to returning what it considers the most relevant results from the internet, ChatGPT can give you a complete, detailed and reasoned answer. It writes texts, analyses and summaries. It draws connections between philosophers, solves mathematical problems, and writes scripts and programs, then modifies them at your every request.

The Misunderstanding Surrounding This Technology

What I want to focus on is the enormous misunderstanding that, in my opinion, surrounds this technology. I read articles, even in prestigious newspapers, about how the new AI doesn’t know mathematics or gives obviously wrong answers. Given that the engine’s creators have clearly explained that errors are possible, I wonder why nobody stops to reflect on why such errors occur.

The common expectation seems to be to treat this engine as a perfect oracle, capable of always giving correct answers. Yet people don’t realise that it is precisely in these erroneous responses, in my opinion, that its closest approach to intelligence lies. ChatGPT builds its answers from what it has learned and how it has interpreted it, exactly as we would. Does this mean it’s perfect? Absolutely not. It simply means it comes close to intelligence defined as the ability to learn, remember, correlate and use information.

Critical Thinking: The Real Frontier

What we want from this technology is reliable and correct answers. To get them, as we would ask of any conscientious human being, we would want critical thinking applied: the ability to verify a result against other sources considered reliable, producing a final outcome that is proven and reasoned. This is still missing, and it is why companies like Google, which have already built similar AI systems (see the LaMDA project), have so far held back from bringing their products to market: they cannot afford to offer tools that would sometimes give their customers obviously wrong results.

In this I see a twofold contradiction. On one hand, we consumers have the wrong expectations: an AI can make mistakes, just as any evolved intelligence would. That is not a serious problem: you just need to recognise its error and teach it, exactly as you would with a colleague. On the other hand, these companies seem reluctant to commercialise fallible tools that already have incredible utility today, provided that those who use them understand that, at least until they improve further, they should be used for what they are: tools that enormously reduce workload at the cost of some verification.

My Practical Experience

I’ve been using ChatGPT over the past month for research and analysis of various kinds. I noticed that it gives different answers depending on how questions are posed, but not only that: based on our feedback it corrects itself and revises its own responses. Fallibility and imprecision are therefore part of its nature, and this is precisely its beauty, provided we can improve it and see the results of that improvement reflected in its answers.
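To give a concrete sense of that feedback loop, here is a minimal sketch of such an exchange through OpenAI’s official Python library. The model name, the prompts and the correction are illustrative assumptions of mine, not a transcript of my actual sessions.

```python
# Illustrative sketch only: the model name and prompts are hypothetical.
# Requires the official "openai" package and an OPENAI_API_KEY variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user",
     "content": "Summarise Kant's categorical imperative in two sentences."}
]
first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(first.choices[0].message.content)

# Feed the answer back with a correction, exactly as with a colleague:
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})
messages.append({"role": "user",
                 "content": "You conflated it with the golden rule; "
                            "please distinguish the two."})
revised = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(revised.choices[0].message.content)
```

The point is not the specific question but the pattern: the second answer is conditioned on the first one and on the correction, which is what lets the engine revise itself.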

I tried having it write scripts of various kinds for my needs, asking for corrections and additions always in natural language, and ultimately always obtaining a good result. Perfect? No. Worth checking? Yes. But they were good drafts to start from, and they saved me a fair amount of time.
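As an illustration of the kind of draft I mean (a reconstruction, not output I am quoting verbatim), a request like “write me a Python script that copies new files from one folder into a backup folder” might come back as something like this:

```python
# Hypothetical example of a ChatGPT-style draft; the task and the code
# are my own illustration, not a verbatim result from my sessions.
import shutil
from pathlib import Path


def backup(src: str, dst: str) -> None:
    """Copy files from src into dst, skipping any already present."""
    src_dir, dst_dir = Path(src), Path(dst)
    dst_dir.mkdir(parents=True, exist_ok=True)
    for item in src_dir.iterdir():
        target = dst_dir / item.name
        if item.is_file() and not target.exists():
            shutil.copy2(item, target)


if __name__ == "__main__":
    backup("documents", "documents_backup")
```

A draft like this runs, but it still deserves the check I mentioned: it silently ignores subdirectories, for instance, which is exactly the kind of limit you only catch on review.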

As with any tool, the first thing to keep clearly in mind is where its limits lie and what they are. Once those are clear, it can be used in a reasoned and responsible way, and the benefits will follow.