Running to stand still
Definitely a crazy week
Time for a catch-up!
Despite using lots of AI tools and automated processes in my businesses, my newsletters are hand-crafted, which is not normally an issue. However, this week has been incredibly busy, not just in terms of developments in the World of AI, but also with other business-related issues.
Each week the World of AI seems to combine the drama of half-a-dozen high-end soap-operas, with scandal, intrigue and huge sums of money changing hands. However, occasionally things happen which you realise are significant shifts, even in an environment where things are moving as fast as they are with AI.
Intrigued? Good, as when you take a step back, you can sometimes get a glimpse of the 4-dimensional chess the major players are involved in. So, let’s dive right in …
Big Sharks vs. small fish
It’s starting to sound as boring and repetitive as OpenAI releasing new LLMs, but Meta have continued to make headlines with their spending spree and development plans. They bought PlayAI for about $225 million in cash and stock, which is peanuts compared to some of their other recent billion-dollar deals. PlayAI builds low-latency voice-cloning tech and has a small-footprint speech-to-speech model that runs on mobile. However, given the growing importance of voice interfaces (see Microsoft call centres later), I would expect Meta will attempt to undercut rivals on price in order to secure greater market share.
Mark Zuckerberg also announced that Meta will break ground on a single campus with five gigawatts of capacity, roughly the load of three nuclear reactors. To illustrate the scale, they overlaid the “footprint”, showing it covering most of the island of Manhattan.
The site will host next-generation Meta Llama models, voice clones from PlayAI, and future mixed-reality workloads. Meta plans to secure long-term renewable contracts to power the facility. That’s a sign of how fast compute demand is rising. Lock in GPU or rack space early; once Meta starts buying transformers, regional grids and suppliers will tighten, and prices will move.
Regardless of what you may think of Zuckerberg as an individual, he clearly has a strategic vision. Recent social media posts highlighted how he set out a roadmap for Meta (then Facebook) ten years ago, and the majority of that vision has since been achieved.
Sticking on the topic of infrastructure and power sources, Google has finalised a $3 billion agreement to purchase hydropower from two plants in Pennsylvania. The deal includes 20-year contracts and provides 670 megawatts of electricity, with the potential to expand to 3 gigawatts.
This move is part of a broader trend among tech giants such as Meta, Amazon, and Microsoft to power their AI data centres with clean energy.
Going back to the issue of AI voice, French AI startup Mistral has introduced its first open-source audio model family called "Voxtral." These new models aim to offer a powerful and affordable alternative to big names like OpenAI and Google, with competitive performance and a much lower price.
Two versions of Voxtral are available: one for high-performance tasks and one for lighter use. Voxtral can transcribe audio up to 30 minutes and understand content up to 40 minutes. A super cheap version called Voxtral Mini Transcribe is also available, claimed to beat OpenAI Whisper at less than half the price. Voxtral helps fill the gap between free but low-quality audio models and expensive, closed APIs. By being open-source and offering high performance at a very low cost, it gives more control and flexibility to developers, businesses, and startups.
In addition, Mistral announced that it is negotiating a fresh round that could bring its valuation above $6 billion and fund new large-language models. The company is already selling premium tiers of its Mixtral line and courting European governments for sovereign AI contracts.
Anthropic’s Claude also just got a design upgrade allowing it to directly connect with Canva, so that users can create and edit designs by simply describing what they want. You just need a paid Canva account and a paid Claude subscription. Once connected, Claude can also search your Canva docs and presentations for keywords, summarise them, and help you get your work done faster, all using natural language prompts.
Anthropic’s expanding Claude workloads also now generate several billion dollars a year for AWS and has become a top-five cloud customer. Amazon’s $4 billion equity stake also grants future model-training credits at cost.
As indicated in recent newsletters, OpenAI have now finally released their ChatGPT agent, although not in the EU. This follows similar efforts by Google and Anthropic, and it is able to do things like book flights and restaurants utilising an MCP interface, effectively becoming a digital assistant.
If you think about it, for research and marketing teams this could replace traditional browsers for quick fact-checks or competitor scans. Now think about how many paid research tools could quickly become redundant once ChatGPT eventually bundles real-time browsing for everyone.
Also, they have released what is effectively a checkout functionality: ChatGPT can display products alongside links that take users to external online retailers to complete their purchase, keeping the discovery and shopping journey anchored within the ChatGPT ecosystem.
Building shopping checkout features and partnering with online retailers represents a significant shift in OpenAI’s revenue strategy, as currently their main source of revenue comes from its premium subscriptions.
This move would position ChatGPT as an all-in-one transactional platform (as opposed to being just a research tool, as it is now) that could even compete with the likes of Amazon and Walmart. This pivot could pose a further threat to Google, as consumers are already turning to AI search engines for product discovery, so this could potentially bypass traditional search engines.
Most noteworthy was the fact that OpenAI has acknowledged that allowing an AI agent even limited rein over computer systems meant that “with this model there are more risks than with previous models”.
Not to be outdone, Perplexity also released its new AI browser Comet, that embeds their search engine alongside an AI assistant capable of performing agentic tasks like booking meetings and navigating websites, all while integrating with user workflows (sound familiar?).
Because it's running inside the browser itself (Comet is a full-fledged browser), it can do anything that you could do, including interacting with sites that require you to be logged in. Currently, Comet is invite-only unless you have access to Perplexity Max ($200 per month subscription), with Free, Pro and Enterprise users joining a rolling waitlist.
However, despite all of those new releases and other news, that wasn’t even the most significant thing this last week.
News broke at the end of last week that, as part of the ongoing battle to hire key individuals, Google DeepMind had hired Windsurf CEO Diana Truong and her co-founders for $2.4 billion, two months after OpenAI talks stalled over IP terms and an offer of $3 billion.
The twist, however, was what then happened less than 48 hours later with Windsurf’s 250 remaining staff and intellectual property. Cognition, the team behind Devin, the autonomous coding agent, stepped in and bought the rights to Windsurf and its key patents, as well as hiring the entire core team, all for a bargain price. Windsurf’s business currently generates $82 million in ARR, which arguably puts its value easily over $1 billion, possibly more. It’s unlikely that Cognition paid anywhere near that much.
Legislation, policy and other news
Rather than starting with legislation, I want to highlight a prime example of the costs and benefits that AI can realise for a business, but with certain repercussions.
Microsoft transformed their time-consuming and resource-intensive customer service and sales processes by deploying AI-powered chatbots and virtual assistants to handle routine inquiries, reducing the workload on human agents.
These AI-driven enhancements in sales processes contributed to a 9% increase in revenue and saved $500 million in call centre costs, all whilst achieving higher customer satisfaction. The downside, of course, is that Microsoft have recently cut some jobs, although they say those cuts are not directly linked to this deployment of AI.
Unusually, more than 40 leading researchers from OpenAI, Google DeepMind, Anthropic, and Meta have put aside their competition to publish a joint paper. Their message is clear: we may be about to lose our last chance to truly understand what powerful AI systems are thinking, and therefore risk not knowing what AI is planning or intending, something that is clearly a serious threat as these systems grow more powerful.
This comes at the same time that AI safety researchers from OpenAI and Anthropic have slammed xAI for its “reckless” and “irresponsible” safety culture, following weeks of scandals, including antisemitic comments from its latest model, Grok 4.
There seems to be a particular issue with xAI’s decision not to publish system safety cards, which detail AI model training methods and safety evaluations, so it’s unclear what safety training (if any) was done on Grok 4.
As I indicated last week, despite the political considerations, I can see the EU quickly moving to restrict or even ban platforms like Grok, particularly as the EU AI Act comes fully into force.
Stay informed. Stay critical. And wherever possible - stay ahead.
Regards
Tom Carter