It’s all gone bananas!

Also known as just another day in AI

Time for a catch-up!

Well, I skipped my newsletter last week because I was so angry about the news of Meta’s "GenAI: Content Risk Standards" and the ethical and moral breaches associated with them – if you missed my rant, check out my LinkedIn post.

Shockingly, but perhaps not surprisingly, the subject already seems to have faded away, and now all anyone wants to talk about is bananas – or more specifically Nano Banana, Google’s new AI image tool.

The tool was released earlier in the week via a clever viral marketing campaign. People were raving about how good it was (I’ve tried it, and it is impressive), but no company was claiming it as theirs. Then, when the hype had built to fever pitch, Google officially “launched” it. It’s free (at least for now), but more importantly it looks good enough to have signed the death warrant of Photoshop and any other paid photo-editing tools.

Anyway, back to the newsletter. As a result of last week’s rant/meltdown, this week I’m covering the last 2 weeks of news, so apologies if you’ve already read about some of this stuff elsewhere.

Elon Musk has been in the news a lot over the last few weeks, primarily because he is back to his old habits of suing people/companies and making splashy announcements. Without going into the detail (because I don’t see it as relevant), he has sued both Microsoft and OpenAI, whilst also trying to get Meta to partner with him to buy OpenAI. He also announced a new AI software venture called MacroHard, named as a parody of Microsoft, which he says will produce software tools for free. Finally, at the other end of reality, he suggested via social media that AI might influence the human limbic system, which is responsible for motivation and emotion, in order to stimulate higher birth rates.

However, for me, the big (but not surprising) news was the outcome of an MIT assessment of major AI projects. It found that 95% of them failed to provide any Return on Investment (RoI), at a cost of some $30–40 billion USD!!! I’m not surprised, as most people seem to be implementing AI without any clear strategy, simply chasing the next “shiny new thing”. Interestingly, small businesses seem to be having more success, primarily, I believe, because they start by focussing on fixing small, simple things first. It’s obvious that their limited budgets force them to properly evaluate things before spending lots of money.

All of this, plus recent comments by people like Sam Altman that AI may be a “bubble”, has led to a dampening of AI-related stocks. However, that seems to have stabilised over recent days, helped by Nvidia reporting $46.7 billion in revenue for its Q2 ending 27th July 2025, up 6% on the previous quarter and 56% year-over-year.

However, as regular readers know, I’ve been saying for more than 6 months that the numbers don’t add up, despite news that OpenAI has generated $2 billion USD via its phone app so far.

Anyway, let’s dive right in on the other news from the last 2 weeks …

Big Sharks vs. small fish

I’ll start with something small, or more precisely 2 small things. European AI startup Multiverse Computing recently released two extremely small AI models, so tiny they are named after the brains of a chicken and a fly. The company claims these are the smallest models in the world that still maintain high performance, capable of handling chat and speech recognition, and one of them even has reasoning capabilities.

These localised small language models (SLMs) are designed to run on IoT (Internet of Things) devices or laptops. Whilst not as powerful as the well-known LLMs, and limited in terms of their context windows, these are still significant developments. For me they pave the way to more localised SLMs, which means the ability to develop topic/niche-focussed AI tools that are inherently more secure, and also a lot cheaper to run.
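To see why such tiny models can run on a laptop or IoT device, a quick back-of-envelope calculation of weight memory helps. Note the parameter counts and quantisation levels below are illustrative assumptions of mine, not Multiverse’s published figures:

```python
def weight_memory_mb(params_millions: float, bits_per_weight: int = 16) -> float:
    """Approximate memory needed to hold model weights alone, in megabytes."""
    total_bytes = params_millions * 1_000_000 * bits_per_weight / 8
    return total_bytes / 1_000_000

# A hypothetical ~100-million-parameter SLM quantised to 4 bits per weight:
print(weight_memory_mb(100, 4))      # ~50 MB of weights – laptop/IoT territory

# Versus a hypothetical 70-billion-parameter LLM at 16-bit precision:
print(weight_memory_mb(70_000, 16))  # ~140,000 MB (140 GB) – data-centre territory
```

The gap of three-plus orders of magnitude is the whole story: weights that fit in tens of megabytes can sit entirely in the memory of a cheap edge device, which is what makes local, niche-focussed deployment plausible.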

Life would not be normal without yet more news from OpenAI and Sam Altman, so it came as no surprise when OpenAI CFO Sarah Friar disclosed an ambitious roadmap to build trillion-dollar-scale data centres to support OpenAI’s models like GPT‑5, and potentially to serve as infrastructure for enterprise customers.

Meanwhile, Sam Altman was rumoured to be working on a new brain–computer interface startup called Merge Labs, with a potential valuation of around $850 million. According to the Financial Times, funding could come from OpenAI’s ventures arm, although talks are still early and nothing is agreed yet. He’s said to be teaming up with Alex Blania, who runs Tools for Humanity, the group behind the eye-scanning ID system that verifies if someone’s human (Blade Runner anyone??). This would again put Altman in direct competition with Elon Musk and his Neuralink company, which is developing brain implants to help people with severe paralysis control devices using only their thoughts. Both projects touch on the idea of the “singularity”, which is the point where humans and technology merge (Star Trek’s Borg anyone??).

In other infrastructure news, Google announced that it is investing an additional $9 billion in AI and cloud infrastructure in Virginia, USA, through 2026, increasing its capacity to support escalating AI workloads.

In the world of AI hardware, Dell has lifted its fiscal 2026 revenue and profit projections due to strong demand for AI-optimised servers. AI server revenue is now expected to hit $20 billion, up from $15 billion.

In China things are also moving ahead, with the country announcing the aim of tripling its domestic chip production, primarily to counter US export restrictions. An example of this was the release of DeepSeek 3, whose launch announcement highlighted that it was optimised for "soon-to-be-released next-generation domestic chips". From a geo-political perspective, Trump’s tariffs and export restrictions seem only to be empowering Chinese domestic AI development rather than limiting it, but only time will tell.

In terms of software development and deployment, there were two events that I found significant. The first was Microsoft launching two fully developed internal AI models: MAI‑1‑preview (text-based) and MAI‑Voice‑1 (audio generation), signalling a strategic push to rival offerings like ChatGPT and Gemini. This is also connected to the somewhat strained relationship Microsoft has with OpenAI, of which it is a strategic partner.

The second event was news that Apple is reportedly considering integrating Google’s Gemini AI into a revamped Siri. The news sparked a 3.2% rise in Alphabet’s stock and a modest uptick for Apple. Given the challenges Apple has clearly faced developing its own internal solution, this seems to be a sensible decision.

Legislation, policy and other news

Apart from the Meta issue, which I will not go over again here, there were relatively few news items of any real significance in relation to legislation or policy.

However, an IBM report highlighted the growing risk in hospitals of employees using unsanctioned AI tools ("shadow AI"). The report indicated that 20% of healthcare organisations have experienced such incidents, adding about $670K to breach costs per event. I’ve talked about the risks associated with “shadow AI” before, and the key issue here is the lack of clear internal AI policies within companies and organisations, which is likely to become increasingly expensive unless addressed soon.

Finally, on the theme of poor planning and the lack of a clear implementation strategy by businesses, it was reported that Australia’s Commonwealth Bank attempted to replace 45 call centre employees with AI voice bots. The experiment backfired, leading to customer dissatisfaction and increased call volumes, and the bank ended up reversing the decision and offering to rehire the affected staff.

That’s all for now from me at the end of yet another crazy week … so stay informed, stay critical, and wherever possible - stay ahead.

Regards

Tom Carter