At long last!!

But was the wait worth it?

Time for a catch-up!

Well, after significant delays, OpenAI finally released GPT-5 this week. I haven’t yet had time to test it, but I will do so this coming week and share my thoughts either in the next newsletter or, more likely, a LinkedIn post.

By all accounts (and most benchmarks) it has made significant progress, although Grok still scores higher on one of the key indices. However, if the hype and feedback so far are to be believed, then this marks a substantial leap in AI capabilities across coding, reasoning, creative expression, health, maths, and multimodal tasks (again!!).

Interestingly, it’s not one single model but a range of them. It uses a unified routing system to decide when to think deeply versus respond quickly, choosing the most appropriate model accordingly. OpenAI also claims notable improvements in factuality, safety, and honesty, along with reduced sycophancy. Does it take us any closer to the “Holy Grail” of Artificial General Intelligence? Unlikely, to be honest.

However, there have also been other model releases and updates this week, along with a scandal concerning the sharing of personal and other sensitive data. So, with all of that in mind, let’s dive into the rest of this week’s news …

Big Sharks vs. small fish

Interestingly, the GPT‑5 launch has clearly rattled OpenAI leadership, with CEO Sam Altman expressing serious concern about the model, describing its capabilities as unnerving and likening its potential impact to that of the Manhattan Project.

However, it does not seem to have worried investors, as OpenAI raised $8.3 billion in new funding. The SoftBank-led round values OpenAI at approximately $300 billion and has pushed its 2025 fundraising target to $40 billion.

This all comes amid negotiations with Microsoft over long-term structure changes. Despite that, it hasn’t stopped Microsoft from integrating OpenAI’s newly released open-weight model, gpt-oss-20b, into Windows via the Windows AI Foundry platform.

This is significant as the model can now run locally on personal computers, with a macOS version coming soon. gpt-oss-20b is a lightweight model built for local execution, including coding and tool integration, but it requires at least 16GB of VRAM, making it usable only on high-end NVIDIA or AMD Radeon GPUs.
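That 16GB figure is easy to sanity-check with back-of-the-envelope arithmetic: weight memory scales with parameter count times bytes per weight, plus runtime overhead. A minimal sketch (the 1.2 overhead factor is my own assumption for KV cache and buffers, not a published OpenAI figure):

```python
def fits_in_vram(params_billion: float, bits_per_weight: int,
                 vram_gb: float, overhead_factor: float = 1.2) -> bool:
    """Rough check of whether a model's weights fit in a GPU's VRAM.

    Weight memory ~= parameter count * bytes per weight; the overhead
    factor (an assumption here) covers the KV cache and runtime buffers.
    """
    weight_gb = params_billion * (bits_per_weight / 8)  # 1e9 params * bytes / 1e9
    return weight_gb * overhead_factor <= vram_gb

# A ~20B-parameter model quantised to 4 bits needs roughly
# 20e9 * 0.5 bytes ~= 10 GB of weights, so 16 GB is plausible ...
print(fits_in_vram(20, 4, 16))   # → True
# ... but the same model at 16-bit precision (~40 GB) is not.
print(fits_in_vram(20, 16, 16))  # → False
```

This also explains why only higher-end consumer GPUs qualify: quantisation brings the weights within reach, but the headroom for context and activations still matters.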

Microsoft optimised the model for local inference, with plans to expand device support soon. This marks the first time an OpenAI model is natively supported on Windows, and also the first time Amazon is offering OpenAI models through its cloud services. It reflects a major shift in AI strategy towards more localised, privacy-conscious, and offline-capable AI tools. What does that all mean? Well, it will allow businesses to run local LLMs, which are faster, cheaper, and more secure.

However, this week has not all been plain sailing for OpenAI, with Anthropic revoking OpenAI’s access to its Claude models. This all boils down to the main players starting to guard their models, data, and commercial advantages.

OpenAI had been accessing Claude through Scale AI’s “chatbot arena” benchmarking platform, which uses multiple models in blind tests for evaluation. Anthropic claims that OpenAI exploited this partnership to gather training data for its own models, violating informal norms around fair use in model evaluation.

Microsoft were busy elsewhere, with their Windows 11 update bringing six AI-powered features, such as Copilot Vision, AI agent-based settings control, and object-select tools. These will initially be available only on Copilot+ devices.

They also unveiled “Project Ire”, which autonomously analyses software to detect threats. In recent trials it correctly identified 90% of flagged malicious files, although it captured only 25% of malware overall. Deeper integration into Defender is planned next.

In similar news, Hewlett Packard showcased "SASE copilot" tools to streamline network security workflows across its Aruba and Juniper platforms using AI-based decision support at Black Hat 2025.

All of this as new analysis reveals that approximately 45% of code generated by large language models contains security vulnerabilities, highlighting major risks as AI tools become more integrated into software development workflows.

In other release news ElevenLabs released Eleven Music, a platform for generating full songs, both instrumental and vocal, via text prompts. It secured licensing deals with Merlin and Kobalt to train on select artists, with safeguards against impersonation or copyrighted lyrics.

Google DeepMind also introduced Genie 3, a next-generation simulation model for robotics. It is designed to train robots in lifelike virtual settings such as warehouses, representing a new step toward general-purpose agentic systems.

However, Google’s search functions were again in the news, with Google arguing that the click volume from its search engine to websites has been “relatively stable” year-over-year, and that average click quality (a measure of how long a user stays on a website) had actually increased.

This was linked to an announcement that Google’s AI Mode in Search had surpassed 100M users in the U.S. and India alone. Responding to numerous reports and studies showing that AI search features, like Google's AI Overviews, are stopping people from clicking links and visiting websites, Google’s Head of Search, Liz Reid, stated that she believed these third-party studies “inaccurately suggest dramatic declines” in website visits because they’re often “based on flawed methodologies or isolated examples.” 

This comes just weeks after Pew Research published a report showing that people are “less likely” to click on links if they’re served an AI-generated summary at the top of the search page. Similarly, a study by Similarweb found that the rate of “no clicks” from search results has risen from 56% (before Google AI Overviews launched in May last year) to 69% in May this year.

Google has not provided any data to back up this argument; instead, it feels like they’re trying to refocus publishers’ attention on click quality rather than click volume.

Legislation, policy and other news

In legal related news, YouTube announced that starting 13th August in the USA they will use AI to estimate users’ ages and automatically restrict access for under-18s. Protections include blocking sensitive content, limiting targeted ads, and applying privacy reminders with verification available via ID. There is no indication as yet concerning rolling this out to other markets.

As mentioned in the introduction, there was also a “scandal” involving OpenAI chats being shared on the internet. People had shared links to their conversations without realising the settings were on “public”, resulting in their enquiries becoming publicly searchable. These have been found to contain sensitive personal and business information, and despite efforts to remove them, as they say, “the internet never forgets”. This isn’t the first time this has happened, with similar occurrences involving postings from Grok and Anthropic’s Claude. Embarrassing, but also a reminder of the risks of both the internet and AI.

In less positive news, Dario Amodei, the CEO of Anthropic, projected that AI could eliminate up to 50% of entry-level white-collar jobs within five years and push unemployment as high as 20%. He stressed the need for urgent policy planning and public awareness.

Interestingly, a Netskope report found a 50% rise in the use of unauthorised generative AI tools (“shadow AI”) in workplaces. Over half of AI activity now occurs outside approved channels, exposing organisations and businesses to both data loss and regulatory risk.

That’s all for now from me at the end of yet another hectic week … so stay informed, stay critical, and wherever possible - stay ahead.

Regards

Tom Carter