Reading the news these days, apart from coverage of the several wars currently afflicting the world, the most discussed topic is Artificial Intelligence and how it is going to change our lives and national economies.
The UK government, for instance, intends to invest over £2 billion between 2026–2030 to accelerate AI development, featuring a £500m “Sovereign AI Fund” to boost domestic startups and reduce reliance on overseas technology. This strategy focuses on increasing computing power, creating new AI research labs, and driving breakthroughs in sectors like healthcare, science, and public services.
Many analysts still think that AI could be a financial bubble that burns trillions of dollars with little or nothing in return.
Amid all this, I wanted to understand first-hand whether AI is worth dedicating time and money to, so I started exploring ChatGPT and Claude to see how they work and, more importantly, how reliable they can be. Until then, my only experience with AI tools was the AI-based Zoom Companion, which – among other features – summarises video calls and sends a report a few minutes after the call ends. I can say I was pretty satisfied with the results when the calls were conducted in English, much less so when they were held in Italian.
Going back to the reliability of AI services, I wanted to put them to the test to understand their added value.
ChatGPT – it seems quite good at retrieving information and summarising documents in several tones (e.g. generic, informative, professional, educational, addressed to boards of directors, etc.).
Conversely, it is not that good when logic is required. I asked it to generate two diagrams, one of the linear economy and one of the circular economy. The first was generated well, while the second contained some mistakes and was therefore not reliable.
It is true that both systems display disclaimers stating that generated output may contain mistakes, but I don't know how many people take this into account.
Claude – it is a bit confusing to work out which of its offerings (Sonnet, Code, etc.) is the right one for my purposes.
It works quite well, especially for deep searches and generating related graphs, but the free tier is quickly exhausted and payment is immediately requested to continue. Plans range from $17 to $100 and more per month on an annual contract, charged in full upfront; paying monthly is more expensive.
Although it is described as easy to use, one needs to read a lot of guidelines before being able to exploit its full potential. Besides, some prior assessment is advisable before spending a lot of money upfront and perhaps not using the service properly.
Last but not least, a year ago I was invited to use Perplexity, an AI-based browser, now integrated with Comet and built on Claude.
The invitation allowed me to use it for an entire year; then they asked for a credit card number to continue. It now charges $20/month, and when I asked it to do something more demanding, it told me to buy more "tokens" to proceed, which I haven't done. Actually, I am not even sure I will keep paying the monthly fee.
Going through the news in the Financial Times, I see that the topic of AI tokens is often discussed, and I understand it could become a source of revenue for some companies (mainly Chinese ones, it seems).
Taking into account the huge amount of energy needed to run AI machines, I am not even sure how sustainable this globally celebrated digital transformation really is.
As a matter of fact, I don't know whether AI will change or transform the world. Certainly, those behind it know very well how to draw money out of our pockets.
N.B. To be consistent with this editorial, the header picture has been generated by ChatGPT (but not the text, as usual).
Stefano Mainero
EPN Consulting and EPN Consulting Research and Innovation Founder & CEO
Article written by human beings without any use of AI. © EPN Consulting Ltd. 2026
Previous EPN Consulting Newsletters are available here.