AI: The New Frontier
Unexpected results
Or maybe not so dumb(o) after all
Time for a catch-up!
I think that in years to come, we will look back on this week as being one of the most significant in the history of AI development for potentially opposing reasons.
Well, that didn’t take long! Both of my main opening stories remained in the news during the last week, and I expect that the debate caused by both will not die down very quickly.
First, we had the news that following Meta’s buyout of 49% of Scale AI’s stock for $14.3 billion, a number of Scale AI’s key customers, including Google and OpenAI, have cancelled their contracts. This raises the question of how those companies will now annotate their data, and whether it will affect their development of new AI models.
Equally, the question could (and should) be asked as to whether Meta has just killed the company it spent $14.3 billion on. If Meta ends up as Scale AI’s only client, that could prove to be a very expensive gamble.
Next, we have the fallout and pushback regarding the Apple research paper. In a clever piece of marketing, Anthropic responded by having its Claude Opus 4 LLM, one of the models criticised in the research, reply to the paper. Claude suggested that the problems were not due to a lack of reasoning ability, but rather to the way the problems were presented and the constraints of the testing environment.
If nothing else, the paper has sparked an invaluable conversation about both the benefits of AI and its ability to deliver real value. It also suggests that the current rate of progress, whilst impressive, is unlikely to achieve the “Holy Grail” of Artificial General Intelligence (AGI) anytime soon.
Not surprisingly though, there’s still been masses of things happening, so let’s dive right in …
Big Sharks vs. small fish
After a couple of weeks of relative quiet, OpenAI were back with a vengeance this last week, with lots of new features and updates, even if not new LLMs. These included:
· ChatGPT image generation in WhatsApp;
· ChatGPT Record: auto-transcribe meetings and pull notes effortlessly (see my post of 19th June);
· ChatGPT Connectors: ChatGPT now connects with Gmail, Google Drive, Teams, and more;
· Custom GPTs now run on any model, such as GPT-4o, o3, and o4-mini;
· Enhanced project support for files, memory, voice mode, and improved research;
· Improved Canvas features, such as the ability to easily export documents, code, and PDFs;
· Smarter voice and translation: advanced features for better communication;
· Memory updates that allow ChatGPT to recall all your past chats;
· And … OpenAI o3-pro, rolling out now for Pro users.
All of this came the same week that OpenAI announced $5 billion losses in 2024, despite earning more than $4 billion in revenue.
The “Monopoly money” nature of the business was also underlined by the latest update in the “infrastructure war”: Amazon is investing AUD $13 billion to expand AI and cloud infrastructure across Australia by 2029.
The other big LLM news relates to a German AI company named DeepL, which translated the entirety of the internet in less than 3 weeks (previous attempts took 30 times longer!!), from English to Chinese, from Aboriginal languages to Arabic, whichever languages you want. To put that in context, it can translate the Oxford English Dictionary into any language in less than 2 seconds.
The last piece of news this week was that Midjourney has rolled out V1, a feature that turns still and AI-generated images into short animated clips (5–21 seconds), available via the web or Discord for about $10 per month. This provides a cheap opening for small companies to create their own marketing materials.
Legislation, policy and other news
Midjourney were also in the news this last week for less positive reasons, as Disney and NBCUniversal sued them for copyright infringement in relation to image creation, underlining the challenges that generative AI poses for intellectual property rights.
The “tug-of-war” around AI in the USA continued, with Big Tech pushing for a federal moratorium on state AI laws. Industry heavyweights are lobbying for a 10-year ban on states drafting their own AI regulations, a proposal that raises concerns around democracy and oversight.
In contrast, a California state report chaired by Fei-Fei Li highlighted risks including AI-enabled biothreat creation and sophisticated deception, and urged enforceable governance.
Talking of potential death and destruction, OpenAI secured a $200 million US Department of Defense contract to develop AI prototypes for cyber defence, military healthcare, and bureaucratic tools.
Again, in contrast, during his address to the College of Cardinals, Pope Leo XIV cautioned that AI endangers human dignity, labour, and justice. He announced an upcoming Vatican summit with tech giants to forge ethical AI oversight.
The impact of AI on employment was also in the news this week, with AI pioneer Geoffrey Hinton cautioning that rapid AI development could displace many office-based roles, whilst advocating for proactive regulation. The impact on jobs was also underlined by Amazon CEO Andy Jassy, who said that AI will reduce corporate headcount over the next few years, as he highlighted the 1,000 internal AI tools transforming workflows within Amazon.
Jassy’s call for employees to become conversant in AI or risk being left behind was also backed up by other data this week, which revealed that AI proficiency is growing 66% faster than other skills, with practitioners commanding 56% higher wages. Gallup polls also reported that 40% of U.S. employees now use AI at work, up from 21% in 2023.
To underline a potential benefit, Grand Traverse County in the USA is considering deploying an AI assistant to handle 67% of non-emergency calls, freeing up dispatchers for critical situations while maintaining a connection with citizens. This is exactly the type of low-risk, repetitive task that AI is perfect for, whilst enabling humans to focus on more important work.
Finally, some good news for animal lovers. In India, the South-Eastern Railway division tested an AI-based Elephant Intrusion Detection System (EIDS) that uses pressure sensors, fibre-optics, and algorithms to detect elephants near tracks and alert engineers, aiming to prevent collisions with the animals.
Stay informed. Stay critical. And wherever possible—stay ahead.
Regards
Tom Carter