Tectonic shift!

Hold on, it’s going to get bumpy.

Time for a catch-up!

OK, this week has officially been insane on so many different levels, resulting in probably my longest newsletter so far.

I’ve been writing this newsletter since 3rd February 2025, and whilst there’s been a lot happening every week, the pace of change and the strategic shifts so far have paled into insignificance compared to this last month, and even the last seven days.

Not only have there been some major new software releases, but more importantly, in my view, the big players are staking out their positions for the road ahead.

Most people this week have been distracted by the various memes floating around about the “money-go-round” scheme involving OpenAI, Oracle and Nvidia. The short explanation is that OpenAI contracted Oracle for $300 billion over 10 years to provide servers and infrastructure. As reported last week, this bumped Oracle’s stock value by over 20%, adding hundreds of billions of dollars in value. Oracle then used that increased share value to buy shares in Nvidia, who in turn bought shares in OpenAI … thus completing the circle and providing OpenAI with some of the money to pay Oracle.

As I’ve been mentioning, this underlines the growing importance of AI infrastructure but also highlights why I think the whole thing looks increasingly like a bubble. That doesn’t mean that AI is useless or a fraud, but rather that I personally think that investing in some parts of the AI business model is extremely dangerous at the moment. If the bubble bursts, then it is likely to leave a small number of big companies, who will then dominate the marketplace.

Anyway, let’s dive right into the other news from last week …

Big sharks vs. small fish

I’ve spoken before about how the main players (OpenAI, Meta, Google, xAI, etc.) are all trying to build up AI ecosystems. Like with Microsoft and Google in the office software environment, the idea is to capture the customer with one product and then keep them by providing everything else they need.

The best example of this is OpenAI. When ChatGPT launched, start-ups were created by the hundred to provide the tools and services that people wanted but OpenAI couldn’t deliver, like external memory capacity, scheduling and diary tools, and the like. However, once the most popular tools reached a certain critical mass, OpenAI duplicated the functionality internally and provided it for free as part of their subscription, thus effectively killing the competition in one move.

The most recent example is their planned move to match jobseekers with vacancies starting in 2026, which will have a massive negative impact on the business models of LinkedIn and other job agencies. However, when announcing that, they explained that it would primarily focus on people with AI qualifications provided by the OpenAI University.

Fast forward two weeks and OpenAI have announced that access to all 83 of their online courses is now free, thus drawing people away from other online learning platforms, and further into the OpenAI ecosystem. However, OpenAI didn’t stop there, also releasing hundreds of “Prompt Packs” covering all types of scenarios, and then announcing that they are providing everyone at Oxford University free access to ChatGPT Edu.

Finally, they also announced the release of something called ChatGPT Pulse to Pro users on mobile. This allows ChatGPT to proactively do research on topics, in order to deliver personalised updates based on your chats, feedback, and connected apps like your calendar. This will doubtless be rolled out to other paid-plan users, again integrating ChatGPT into your day-to-day life, and bringing you further into their ecosystem.

Going back to the issue of companies investing in each other, this week Bloomberg also reported that Intel had approached Apple about a potential investment and closer partnership. The talks are early-stage and might not lead to anything, but the mere fact that Intel is having these conversations shows just how far the company has fallen.

In fact, Intel’s survival is linked to many other moving parts. As was reported back in August, the US Government took a 9.9% stake in the company for $8.9 billion. The deal was structured to replace previously promised CHIPS Act grants, essentially forcing Intel to give up ownership in exchange for money it was supposed to receive anyway.

Then, a few weeks ago, Japan’s SoftBank invested $2 billion, making it the 5th largest investor, only for Nvidia to step in last week and buy $5 billion in shares, which, given that the two compete in many sectors, made the move all the stranger. However, Nvidia’s move boosted Intel’s stock by 32%, perhaps convincing the markets that Intel will survive. This was further supported by the announcement of a joint Intel and Nvidia development programme, under which they will co-design and manufacture custom CPUs for the data centre and PC markets, built around Nvidia’s NVLink architecture.

There were serious doubts about Intel surviving, given the fact it lost 60% of its stock value in 2024, marking its worst year on record. The company's revenue declined 20.2% in 2022, 14% in 2023, and continued dropping in 2024. Intel posted a $2.9 billion loss in the second quarter of 2025, compared to a $1.6 billion loss the year before.

Given that Intel was seen as the driving force behind the home computing boom, the last few years have seen a dramatic reversal of fortunes, as it lost its market-leading position to Nvidia, AMD and TSMC. Apple’s decision on whether or not to invest may be influenced by the current administration, which is keen for the US to have both capacity and resilience as it seeks dominance in the AI battle with China. However, like Kodak and many other established brands, Intel may eventually disappear, surviving only as another example of missed business opportunities in future business studies textbooks.

On the topic of Chinese AI developments, Alibaba has released its most powerful model yet, Qwen3-Max, with over one trillion parameters. It outperformed Claude, DeepSeek and even ChatGPT 5 in several benchmark tests.

The model is built for both coding and independent decision-making, having been trained on 36 trillion tokens, and it can act as an autonomous agent, needing fewer human instructions.

This marks a big step in the global AI race, with Alibaba’s investment of over $53 billion showing how seriously it is competing, pushing technology toward stronger reasoning, coding, and independent agents.

We then move on to Google, who clearly did not want to be left out of the news this week. They started by launching Mixboard, an AI-powered infinite canvas that lets users create and edit images with simple text commands. Users can mock up decor ideas, party plans, or creative collages with quick prompts, and images can be edited, combined, or regenerated using natural language.

For now, it is only available in a public beta version in the USA and runs on the Gemini 2.5 Flash model, but it will be available globally soon, probably followed by lots of creative companies going out of business as people create their own materials.

However, in perhaps the most significant piece of news this week (that has crept under the radar), Google has released a Data Commons Model Context Protocol (MCP) Server, which will give AI developers and tech companies direct access to one of the largest collections of public, real-world datasets.

Google’s new MCP server connects AI models and agents to Data Commons, a knowledge bank with billions of data points spanning economics, demographics, health, and the environment.

This is huge, as it will allow AI developers and tech companies to ground their models in data from authoritative sources, which should help to reduce hallucinations, improve reliability, and increase the quality of AI outputs. By opening up access to this colossal, structured public dataset, Google is also setting a new bar for transparency and accuracy in AI.
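For the technically curious: MCP is built on JSON-RPC 2.0, so an agent talking to a server like this is essentially exchanging small JSON messages. Here is a minimal sketch of what such a request looks like on the wire. The method name (`tools/call`) comes from the MCP specification, but the tool name and arguments below are hypothetical placeholders, not Google’s actual Data Commons schema.

```python
import json

def build_mcp_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialise a JSON-RPC 2.0 request asking an MCP server to run a tool."""
    request = {
        "jsonrpc": "2.0",          # JSON-RPC protocol version, required by MCP
        "id": request_id,           # lets the client match the reply to this request
        "method": "tools/call",    # standard MCP method for invoking a server tool
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Example: an agent asking a (hypothetical) statistics tool for a data point.
payload = build_mcp_tool_call(
    "get_observation",  # hypothetical tool name, for illustration only
    {"place": "country/GBR", "variable": "Count_Person"},
)
print(payload)
```

In practice a developer would use an MCP client library rather than hand-rolling the JSON, but the point is that any model or agent speaking this simple protocol can pull authoritative figures from Data Commons instead of guessing.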

As if that wasn’t enough, Google also made their state-of-the-art robotics embodied reasoning model, Gemini Robotics-ER 1.5, available to all developers. This model specialises in capabilities critical for robotics, including visual and spatial understanding, task planning, and progress estimation. It can also natively call tools, like Google Search to find information, and can call a vision-language-action model (VLA) or any other third-party user-defined functions to execute the task.

Personally, I suspect that the latter step was designed to effectively make Gemini Robotics the “industry standard”, thus pushing aside competition from the likes of Meta and xAI, whilst again linking people back into the Google ecosystem.

Finally in this section, xAI released Grok 4 Fast, a model built to deliver strong reasoning at a lower cost. It performs close to Grok 4 whilst using 40% fewer tokens, making it quicker to run, and according to independent checks it was 98% cheaper than Grok 4 for the same benchmark results. It also supports a 2M-token context window, advanced web and X search, and can use tools like code execution and browsing.
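Those two figures combine in an interesting way. A quick back-of-the-envelope check (my own arithmetic, not xAI’s published pricing) shows that if a benchmark run costs tokens times price-per-token, then “40% fewer tokens” and “98% cheaper overall” together imply the per-token price must have dropped to roughly a thirtieth of Grok 4’s:

```python
token_ratio = 0.60   # Grok 4 Fast uses 40% fewer tokens than Grok 4
cost_ratio = 0.02    # 98% cheaper for the same benchmark results

# total cost = tokens * price_per_token, so the implied per-token price
# relative to Grok 4 is the cost ratio divided by the token ratio
price_ratio = cost_ratio / token_ratio
print(f"Implied per-token price: {price_ratio:.1%} of Grok 4's")
```

In other words, most of the saving comes from the price cut, not the token efficiency.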

Tied to the release of Grok 4 Fast was the announcement that federal agencies will now use Grok 4 and Grok 4 Fast through General Services Administration (GSA) channels. The deal is aimed at making the government faster, more efficient, and more innovative, with Elon Musk calling it a step toward U.S. leadership in AI.

Legislation, policy and other news

Meta was in the news for completely different reasons this week, with the announcement that it will be “investing tens-of-millions” into a super political action committee (PAC), called the “American Technology Excellence Project”, which is aimed at fighting state-level "onerous" AI regulation proposals that could stifle America’s AI advancement.

This comes after Meta also recently invested in a California-specific PAC, again with the intention of fighting against regulation of AI by funding tech-friendly candidates in state races. Meta’s position is that this was a necessary step to “support the election of state candidates across the country who embrace AI development, champion the US technology industry, and defend American tech leadership at home and abroad.”

This came in stark contrast to the move by more than 200 AI leaders, including Geoffrey Hinton and one of OpenAI’s founders, who signed a global call asking governments to agree on clear red lines that artificial intelligence must never cross.

The initiative is called the Global Call for AI Red Lines, and the goal is to prevent risks before they happen, by adopting rules like banning AI that can replicate itself or impersonate humans. They say company policies alone are not enough, and a strong international body is needed.

Demonstrating the potential risks that AI represents, it was reported that a UK-based site was found hosting AI chatbots that generate disturbing sexual content involving children, both in explicit images and in role-play dialogues. The Internet Watch Foundation (IWF) reported a 400% increase in such AI content over the past year, and as a result regulators and advocacy groups are calling for stricter built-in safety protections and more robust legal enforcement.

In a slightly more positive ending note, the UK Government also announced this week that an AI tool had helped it recover £480 million in fraudulent claims in the space of a year, including £186 million connected to COVID-related fraud. They expect the software to be licensed to other governments, to assist them in fighting fraud as well.

That’s all for now from me at the end of yet another week, and I definitely need to go and lie down in a darkened room in order to recover … so stay informed, stay critical, and wherever possible - stay ahead.

Regards

Tom Carter