The clock is ticking ...

Excited or worried?

Time for a catch-up!

This week seemed to be the quiet before the storm, as we edge closer towards several significant events.

The first is the coming into force of the EU AI Act on 2nd August, which was heralded by the release this week of the EU’s new voluntary code of practice for general-purpose AI. No sooner was it released than Meta refused to sign up to it, claiming that the code creates legal confusion and goes beyond the AI Act. I’m sure all of those lawyers specialising in IT and associated issues will be rubbing their hands with glee at the prospect of years of expensive litigation over the nuances of the legislation, its possible interpretation, and the placing of commas.

The other big cross-cutting policy document this week was President Trump’s 28-page “AI Action Plan” for “winning the AI race”.

This includes over 90 federal “recommendations” focussed on expanding AI infrastructure, increasing private-sector innovation, and exporting American AI for global influence. Whilst the plan outlines a variety of recommendations to help the US maintain its global leadership position, the administration is clearly still working some things out, as it doesn’t specify exactly how it will achieve global AI dominance.

The other event being teased this week is the imminent release of GPT-5, but again no official date has been announced. So, let’s dive right in …

Big Sharks vs. small fish

We also had a bit more detail this week about some things that were announced in the last few weeks. The most important was AWS revealing Amazon Bedrock AgentCore, a comprehensive, effectively “off-the-shelf” suite of tools for secure, scalable agentic AI. AWS also launched an AI Agents & Tools section in AWS Marketplace and introduced S3 Vectors, which provides optimised storage for large vector datasets. This probably doesn’t mean much to most people, but vector databases are key to the development of AI agents, and this marks a major shift toward practical, enterprise-ready autonomous AI.
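For the curious, the idea behind a vector database is simple enough to sketch in a few lines. Documents are turned into numeric “embedding” vectors, and a query retrieves the stored item whose vector points in the most similar direction. The snippet below is a purely illustrative toy in plain Python — the tiny hand-made vectors and document names are invented for the example, and this is not the AgentCore or S3 Vectors API, which operates on millions of model-generated embeddings.

```python
import math

# Toy "embeddings": in practice these come from an embedding model,
# and a vector store (such as S3 Vectors) would hold millions of them.
documents = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "account login":  [0.0, 0.2, 0.9],
}

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query_vector, store):
    # Return the stored document whose vector is most similar to the query.
    return max(store, key=lambda name: cosine_similarity(query_vector, store[name]))

# A query vector close to the "refund policy" embedding retrieves that document.
print(nearest([0.85, 0.15, 0.05], documents))  # → refund policy
```

An AI agent does essentially this at scale to recall relevant context before acting — which is why fast, cheap vector storage matters so much for agentic workloads.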

OpenAI were also busy this week, announcing a strategic project with the UK government, with support from Nvidia, HPE, Dell, and Intel, designed to bolster AI security research and expand national compute infrastructure. The UK government has committed to a £1.34 billion investment with the aim of increasing public AI compute capacity by 20 times over the next five years.

OpenAI will support applications in healthcare, education, legal services, and defence, and seek to grow its UK workforce, leading to an estimated productivity boost of £47 billion annually.

However, this is not a binding deal, but rather a shared roadmap for future AI work. As mentioned in recent newsletters, this matches the growing trend for countries to develop their own national AI capabilities.

OpenAI also announced that ChatGPT now sees over 2.5 billion prompts a day. This is still a long way short of Google’s 14 billion daily searches, but it again signals the impact of LLMs on the search market.

OpenAI also announced a target of one million GPUs, which would make its AI compute infrastructure the largest in the world. The company is trying to overcome global GPU shortages, which previously delayed the release of GPT-4.5.

Of course, all of this requires computing infrastructure and power to run it. So, it was no surprise that OpenAI also signed a deal with Oracle, valued at $30 billion a year. This long-term contract locks up huge blocks of Oracle’s GPU racks and 100% renewable power.

The deal nearly doubles Oracle Cloud’s AI revenue overnight and signals that OpenAI’s demand will outstrip what Azure alone can supply. Meanwhile, Oracle gains model-training prestige whilst OpenAI hedges against single-vendor risk.

Google also raised its 2025 capex forecast to $85 billion, up $10 billion, with two-thirds earmarked for servers and data centre expansion. This reflects growing demand for AI infrastructure, aided by favourable tax legislation.

It’s worth noting that the growing number of AI data centres in the US is prompting state-level concerns over rising energy costs, increased water consumption, and environmental impact. These AI-driven facilities are projected to consume 10% of national electricity by 2027, raising questions about power infrastructure and sustainability.

However, some of the newer players in the market were also making the news, with Swedish AI startup Lovable raising $200 million just eight months after launch, putting its “vibe-coding” platform at a $1.8 billion valuation.

The tool lets non-developers create full apps and sites via natural-language prompts and already counts 2.3 million active users. This follows the success of similar tools such as Base44, Figma and Cursor. This in turn is likely to impact low-code vendors, who are almost certain to lose business to these platforms.

In fact, this week AI-coding startup Cursor snapped up Koala, an enterprise code assistant that already runs inside regulated banks. The combined toolset offers on-site installation, air-gap options, and SOC 2 logging. Again, this probably doesn’t mean much to most people, but it means you can develop and build secure apps behind a firewall. For anyone dealing with a regulated industry, this is significant.

Legislation, policy and other news

In the world of policy, the British Standards Institution (BSI) announced that it will launch a new AI auditing standard on July 31, 2025, designed to regulate the booming AI assurance industry. It aims to ensure independent, transparent evaluations and build trust in AI systems, which is especially critical ahead of broader international regulations.

In terms of employment, Sam Altman predicted that entire job categories such as customer service could be eliminated as AI surpasses human capabilities. He also cautioned against AI-powered voice cloning being weaponised for fraud, urging that human oversight be maintained in critical areas like healthcare. Meanwhile, Aravind Srinivas, Perplexity’s CEO, identified customer support and paralegal jobs as most at risk from AI automation, while encouraging workers to shift toward roles requiring creativity and emotional intelligence.

Their predictions don’t sound unbelievable when you discover that AT&T had been struggling to manage 40 million+ customer service calls per day. However, by developing an AI model supported by ChatGPT to address customer issues, they reduced operational costs by 65%, and the time spent on call analysis dropped from 15 hours to 5 hours per day.

So, let’s see how things develop over the coming weeks, but in the meantime … stay informed, stay critical, and wherever possible - stay ahead.

Regards

Tom Carter