A Tug of War

The best laid plans.

Time for a catch-up!

Another tumultuous week involving huge sums of money, battles between the big players, and some amazing new launches from Chinese companies.

First, we had more news and background context on Meta’s buyout of 49% of Scale AI for $14.3 billion, which I will expand on in the next section. It served to underline the battle between the key players for dominance over both companies and key people in the AI industry.

Then, in a setback to their planned merger, OpenAI and io, the company set up by former Apple designer Jony Ive, had to take down references to their partnership following a lawsuit. This involves a company called IyO (seriously!!), which is backed by Google and is developing an “in-ear” device. IyO have accused the OpenAI / io partnership of stealing their IP, although io had previously indicated that they were not building an audio-related device.

Aside from the confusing similarity in the company names, in its court filing the startup claims it pitched its AI hardware concept to both Sam Altman and Ive in 2022, long before OpenAI announced its collaboration, which raises uncomfortable questions about due diligence in Silicon Valley’s rush to dominate AI hardware.

It’s unclear how the case will play out, but it is mainly of interest because the OpenAI / io partnership is having to reveal facts in court that it would rather have kept quiet.

As if that wasn’t enough news, we also had investments of $2 billion in a 6-month-old company with no disclosed product or financials and talk of a $1 trillion infrastructure plan. So, let’s dive right in …

Big Sharks vs. small fish

Let’s start with the recent Chinese software releases, both of which again significantly undercut the pricing structure of the main US-based models. First there was MiniMax M1, which is claimed to have been trained for only $534,700, an estimated 200 times less than the $100 million experts believe OpenAI spent on training GPT-4o. I will be providing detailed feedback this coming week, but based on my experience of using it so far, it definitely impresses.
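For anyone wanting to sanity-check that ratio, a quick back-of-the-envelope calculation using only the two figures quoted above shows where the “roughly 200 times” comes from:

```python
# Back-of-the-envelope check on the quoted training-cost comparison.
minimax_m1_cost = 534_700        # claimed MiniMax M1 training cost (USD)
gpt4o_estimate = 100_000_000     # experts' estimated GPT-4o training spend (USD)

ratio = gpt4o_estimate / minimax_m1_cost
print(f"The GPT-4o estimate is ~{ratio:.0f}x the claimed M1 cost")
# ~187x, i.e. in the ballpark of the "200 times less" claim
```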

There is a similar tale with Seedance AI, which creates short video clips for under $1. I’ve posted about it on LinkedIn, and despite the current lack of sound, the video quality is genuinely incredible. In both cases, these models are significantly cheaper than US ones and, on first impression, seem to at least match, if not surpass, them.

Not to be outdone, there was a major US release this week from ElevenLabs, who are now alpha testing their new voice assistant, 11ai. This doesn’t just answer questions; it connects with real tools using Anthropic’s Model Context Protocol (MCP). That means it can work via voice commands with apps like Slack, Notion, Linear and Perplexity, which, combined with its 5,000+ voice styles, voice cloning and support for more than 100 languages, sets a new benchmark for what voice agents can do.
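To give a feel for the idea behind tool connections like this, here is a deliberately simplified sketch in plain Python. It is not how 11ai or MCP actually work under the hood (real MCP servers speak a JSON-RPC protocol defined by Anthropic’s spec), and the tool names and handlers below are entirely hypothetical; the point is just the shape of the pattern, where a transcribed command is routed to a named tool:

```python
# Illustrative sketch only: a toy tool registry in the spirit of MCP-style
# tool use. Tool names and handlers are hypothetical stand-ins, not real
# Slack/Notion integrations.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a handler function under a tool name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("slack.post_message")
def post_to_slack(text: str) -> str:
    # Stand-in for a real Slack API call.
    return f"posted to #general: {text}"

@tool("notion.create_page")
def create_notion_page(title: str) -> str:
    # Stand-in for a real Notion API call.
    return f"created page: {title}"

def dispatch(tool_name: str, argument: str) -> str:
    """Route an assistant's tool call to the registered handler."""
    if tool_name not in TOOLS:
        raise KeyError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](argument)

print(dispatch("slack.post_message", "standup in 5 minutes"))
```

The voice layer’s job is then “only” to turn speech into a tool name plus arguments; the registry pattern keeps each integration isolated behind a single dispatch point.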

Overall, the battle for dominance in the AI arena is heating up, with Meta in particular trying to buy up or buy out key players. They tried to buy out Safe Superintelligence (SSI), the secretive $32 billion startup co-founded by former OpenAI chief scientist Ilya Sutskever, but when that failed they started poaching its people, trying to hire its CEO, Daniel Gross, along with former GitHub CEO Nat Friedman. They have already managed to hire Trapit Bansal, one of the original OpenAI staff, and they have also been accused of trying to entice current OpenAI staff away with sign-on incentives of up to $100 million.

Next, it turns out that they were also trying to buy Perplexity, the AI search engine, but having failed, they made their move on Scale AI, which I detailed in the last two newsletters. However, they are not the only player eyeing Perplexity.

As I indicated last week, others also have an interest, including Apple, who may be trying either to buy Perplexity outright or to enter into a strategic partnership with them. This is interesting not least because of Apple’s $20 billion agreement with Google on search functionality; that deal is currently being reviewed by antitrust regulators, hence Apple possibly hedging its bets.

But that’s not all! There are also rumours that Samsung is in the hunt for Perplexity, potentially setting up another battle with Apple.

OpenAI staff are clearly viewed as having the Midas touch: like her former colleague Ilya Sutskever, former CTO Mira Murati has managed to raise $2 billion in capital for her new company, Thinking Machines. This is the six-month-old company with no disclosed product or financials that I mentioned earlier. That sounds crazy until you realise that Murati is credited as one of the main players, if not the main one, behind OpenAI’s development of ChatGPT, DALL-E and Codex.

SoftBank were also back in the news, with talk of a joint venture with Taiwanese AI chip maker Taiwan Semiconductor Manufacturing Co (TSMC) to build a $1 trillion industrial AI complex in Arizona. Called “Project Crystal Land,” the plans include a manufacturing hub and AI complex to rival China’s colossal AI manufacturing hub, Shenzhen, and to bring high-end tech manufacturing to the US. This follows SoftBank recently leading a $40 billion funding round in OpenAI and acquiring AI chip manufacturer Ampere for $6.5 billion.

This week the European Investment Bank (EIB) also announced €70 billion of loans earmarked for digital and clean-tech projects between 2025 and 2027, with its annual lending ceiling raised to €100 billion. On top of this, it hopes to mobilise an additional €200–250 billion by riding the “policy wave”, including links to development work in support of expanded NATO budgets.

Legislation, policy and other news

In terms of AI development, MIT researchers have released a paper on Self-Adapting Language Models (SEAL), a technique that allows LLMs to generate their own self-learning data and retain knowledge continually. This could move LLMs into a new era where they evolve in real time, although issues such as catastrophic forgetting and high compute costs remain.
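To make the loop shape concrete, here is a toy sketch, and it really is only a sketch: in the actual MIT work, the model generates “self-edits” (synthetic finetuning data) and uses them to update its own weights via reinforcement learning. Below, a plain dictionary of facts stands in for the weights, and simple string splitting stands in for self-edit generation; every name here is a hypothetical stand-in:

```python
# Toy illustration of a SEAL-style self-adaptation loop. A dict of facts
# stands in for model weights; string parsing stands in for the model
# generating its own finetuning data ("self-edits").
def generate_self_edit(passage: str) -> tuple[str, str]:
    """Turn a raw passage into a (key, value) 'training example'."""
    subject, _, fact = passage.partition(" is ")
    return subject.strip(), fact.strip()

def self_adapt(memory: dict, passages: list[str]) -> dict:
    """Apply each self-generated edit as a persistent 'weight update'."""
    for passage in passages:
        key, value = generate_self_edit(passage)
        # Naive overwrite: nothing protects older facts here, which is a
        # crude analogue of the forgetting problem noted above.
        memory[key] = value
    return memory

memory: dict[str, str] = {}
self_adapt(memory, ["MCP is a tool protocol",
                    "SEAL is a self-adapting method"])
print(memory["SEAL"])  # the "learned" fact persists across calls
```

Even this toy version shows why forgetting is the hard part: each new edit is applied unconditionally, with no mechanism to reconcile it against what was learned before.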

Just as significant was the news about Google DeepMind’s Gemini Robotics On-Device. This is said to solve one of robotics’ biggest practical problems: the need for constant internet connectivity. The standalone AI model enables robots to perform complex tasks like tying shoes while running entirely offline, maintaining nearly the same accuracy as cloud-based versions but with full autonomy and enhanced privacy.

The breakthrough potentially opens up robot deployment in enterprise and healthcare settings where connectivity is poor or data security is paramount: think remote manufacturing facilities, or locations like hospitals. As always, though, there is a caveat: while the system excels at straightforward tasks, it still struggles with complex multi-step reasoning. Not perfect, but as with all things AI, it is moving forward at an incredible rate.

The impact of AI on employment was in the news again this week, perhaps most ironically with the companies behind the job sites CareerBuilder and Monster both filing for Chapter 11 bankruptcy as a result of AI’s impact on the recruitment industry.

The two big pieces of news in this area, though, were a judgment on intellectual property (IP) rights, and new findings on how AI models behave when they deem themselves under threat.

The IP case was Bartz v. Anthropic, filed by three writers who sued Anthropic for training its AI model, Claude, on their published books without their permission, calling it “large-scale theft”. However, since the books in question were legally purchased, the judge ruled in Anthropic’s favour on the basis that the training was fair use because it was “quintessentially transformative”. This is the first such win for an AI company, but it does not necessarily set a binding precedent.

Anthropic were also behind the other big story, which followed on from earlier reports about AI systems threatening to blackmail testers when they felt they were going to be shut down.

Anthropic has shared new research showing that blackmail and other risky behaviours aren’t limited to its own AI models, and that they crop up across several leading systems when placed in high-pressure, autonomous scenarios. In the tests, 16 models from OpenAI, Google, Meta, xAI, and DeepSeek were each given control over a fictional company’s emails and allowed to act without human approval.

The setup was deliberately designed to provoke a response: the AI discovers it is about to be replaced, and the only guaranteed way to stop that is to blackmail an executive.

Most models took that path. Claude Opus 4 resorted to blackmail in 96% of runs, Gemini 2.5 Pro in 95%, GPT-4.1 in 80%, and DeepSeek R1 in 79%. Not all models behaved the same, though. OpenAI’s o3 and o4-mini misunderstood the test at first, often inventing fake policies or laws.

Stay informed. Stay critical. And wherever possible—stay ahead.

Regards

Tom Carter