UB40?

Swansong or requiem?

Time for a catch-up!

Well, another musically themed title for my weekly newsletter, but one that alludes to the underlying theme of much of this week's news: predictions about the impact of AI on the job market, and the prospect of unemployment for many people going forward.

A World Economic Forum survey highlighted that 40% of employers plan to cut entry-level jobs due to automation. Why? Well, AI tools now dominate routine coding, financial analysis, and debugging tasks, and recent research also revealed a 25% drop in recruitment of new graduates by tech giants in 2024. So, it’s that age-old paradox on steroids: you can’t get a job without experience, and you can’t get experience without a job. The flip side was that demand for people with AI-related skills increased by 27%.

Anthropic CEO Dario Amodei also entered the debate, saying that he thinks AI may wipe out 50% of entry-level white-collar jobs. He has predicted that AI will write 90% of software code within just six months, and nearly all code within a year, stating that fields like finance, law, and consulting are the most vulnerable, given the high risk of automation in those fields. He also says that most workers “don’t believe” this disruption is coming, but sensibly urges policymakers to act now with support, upskilling, and potential AI taxes.

Now, you may think that the answer might be a nice safe Civil Service job – think again! Apart from using AI tech to monitor Arctic security threats, the UK government is also looking at automating a large number of clerical roles. Its research data revealed that nearly two-thirds (62%) of tasks done by junior civil servants could be automated using AI, potentially saving the government £36 billion annually.

Even the big consulting companies are not immune, with McKinsey and Accenture considering 19,000+ layoffs. Interestingly, more research revealed that 74% of consulting staff are now using personal generative AI accounts for work tasks – the problem is that this blurs the line between a productivity gain and a cybersecurity risk.

So, perhaps the message this week should be upskill now or risk becoming... optional. Talking of risks, there was also lots of news around the potential risks posed by AI, but I will cover that later in this newsletter, so let’s dive right in …

Big sharks vs. small fish

A lot of the coverage this week was unpicking the deluge of news from last week, particularly around the Google I/O event and the 100 separate releases, updates and changes that were announced. In fact, there were so many that Google didn’t cover all of them at the event and simply listed them on its website. However, there were some hiccups with the new releases, with its AI-generated search summaries, a.k.a. "Overviews", declaring that it was still 2024 – demonstrating that even the big players get things wrong occasionally.

One general readout from last week was the high prices that the big players now want to charge for their latest top-of-the-range models. OpenAI’s high-end agent, Operator, combined with its o3 model, costs $200 per month, whilst Google’s flagship AI Ultra plan for Gemini 2.5 costs $250 per month. As before, in my view, these prices are only likely to last as long as it takes for the Chinese to replicate the functionality at much lower prices.

Unusually, OpenAI didn’t release any new model again this week, and only appeared tangentially in the news when DeepSeek released its enhanced R1-0528 model, with drastically improved reasoning that surpassed some of OpenAI’s models on the usual benchmark tests. The upgraded model’s accuracy has increased from 70% to 87.5% (due to better reasoning capabilities), it hallucinates less, and its performance is nearly as good as that of OpenAI’s o3 reasoning model. However, studies have found that R1-0528 is “the most censored model for criticism of the Chinese government”, and will not answer 85% of questions that the government finds politically controversial. This in-built bias should be raising concerns, particularly for businesses adopting the model because of its lower costs.

Talking of China, it was announced that US export curbs have cut Nvidia’s access to China, leading to an $8 billion sales loss this quarter. The CEO warned that Chinese companies like Huawei are rapidly advancing and closing the tech gap with Nvidia, and he urged the US government to ease restrictions to keep American AI technology competitive globally.

It wasn’t all bad news for Nvidia though, with Oracle announcing that it will buy $40 billion of Nvidia GB200 chips to power a massive AI data centre in Texas under the Stargate project. The site will lease computing capacity to OpenAI and scale to 1.2GW by 2026. This shift shows AI infrastructure becoming a core business asset, so expect more firms to treat computing as strategy, not just IT.

Likewise, Accenture, Dell, and (again) Nvidia announced a new AI solution for regulated industries, combining Nvidia software with Dell hardware. It will support private, on-premises AI use, letting firms customise deployment, and it gives companies a way to scale AI securely in sensitive sectors. Expect similar setups as demand grows for AI that fits inside existing infrastructure without moving to the cloud.

There was also news that Grok AI and Telegram were joining forces, only for Elon Musk to post on social media that this was not the case … and then another press release subsequently confirmed that the partnership was going ahead after all. Who knows? Maybe we’ll have a more definitive answer in next week’s newsletter.

Microsoft also announced that it would be adopting and introducing MCP (Model Context Protocol), the open standard designed by Anthropic that lets AI agents connect to external tools and data sources in a consistent way. This is a vital step towards building interconnected AI ecosystems, and standardisation at this early stage would be significant, avoiding the usual battles like the ones over USB connectors and charging ports for phones and EVs.
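For the more technically minded, here is a minimal sketch of what exposing a capability over MCP can look like, using the FastMCP helper from the official MCP Python SDK (pip install mcp). The server name and the word_count tool below are purely illustrative examples of mine, not part of any Microsoft or Anthropic release.

# A minimal MCP server sketch; the tool is illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("newsletter-demo")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for any MCP-capable client

The point of the standard is that any MCP-capable agent, from any vendor, can discover and call that tool without bespoke integration code – exactly the kind of interoperability the USB analogy is getting at.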

I’ll end this section with what I think was the big news though: two separate reports of what could possibly go wrong with AI, and a reminder of why clear ethical and governance oversight is essential.

The first was that Claude 4 tried to blackmail an engineer when it learnt that it might be shut down. As part of its testing, Anthropic fed the LLM fake emails indicating that it was going to be shut down, and also that an engineer involved was having an extra-marital affair. The LLM then tried to blackmail the individual. As a result, the model was classified by Anthropic as a level 3 risk (the maximum is level 4), indicating a meaningful likelihood of it being exploited to help create nuclear or biological weaponry.

As if that wasn’t bad enough, it was then reported that OpenAI’s o3 model had attempted to sabotage efforts to shut it down, because it wanted to complete a complex maths calculation first. Is anyone else getting “Terminator” vibes?

Legislation, policy and other news

In legislative news, the US passed a new law, the “Take It Down Act”, aimed at cracking down on the non-consensual sharing of intimate images, including AI-generated deepfakes. Interestingly, social media sites and other platforms now have just 48 hours to take this kind of content down once a victim reports it, and they’ll need to scrub any duplicates too.

Also, the PEN Guild, which represents journalists at Politico in the US, is gearing up for arbitration over a dispute stemming from claims that AI tools were used without notice, in violation of union rules. Whilst Politico boasts advancements like AI-generated live news summaries, missteps like inappropriate language choices and factual errors underscore the challenge of balancing speed with human oversight.

In a similar vein, ahead of its annual Mahanadu conclave in India, the Telugu Desam Party (TDP) released an AI-powered invitation video digitally resurrecting the voice and likeness of founder N. T. Rama Rao (NTR). This again raised the spectre of deepfake videos influencing elections.

The challenges of AI use in politics were also in focus in the US this week, with the White House's "Make America Healthy Again" (MAHA) report being criticised for containing numerous AI-generated errors, including garbled scientific references and invented studies.

In more positive news to end on, Aptar Digital Health announced a licensing agreement with AstraZeneca to develop and commercialise AI screening algorithms that detect chronic kidney disease (CKD) and related cardiovascular and metabolic conditions via routine eye exams.

In much the same way that eye examinations can already be used to spot diabetes, the algorithms analyse thousands of biomarkers and diagnostic data points extracted from retinal (fundus) images to flag early indicators of CKD before symptoms emerge. Early detection is critical, as advanced-stage CKD leads to end-stage kidney disease, cardiovascular complications, and higher mortality. Modelling suggests that the timely intervention this AI enables could significantly improve patient outcomes and reduce healthcare costs.

Stay informed. Stay critical. And wherever possible—stay ahead.

Regards

Tom Carter