Crystal clear vision!

Just don’t call me

Time for a catch-up!

Another busy week in the world of AI, and we're going to start with news from Meta, which at its annual developer conference, Meta Connect, launched its latest products in partnership with the Ray-Ban and Oakley eyewear brands.

At the conference they launched their latest line of smart accessories, in this case the Meta Ray-Ban Display glasses, which include a full-colour, high-resolution screen in one lens on which users can conduct video calls and read messages, plus a 12-megapixel camera. Paired with the glasses is the so-called neural wristband, which allows people to carry out tasks, such as sending messages, using small hand gestures.

However, not everything went as planned: the demonstration call to the glasses via WhatsApp failed to go through despite repeated attempts. That wasn't the worst of Meta's week, though, as two former safety researchers testified before the US Senate that Meta covered up potential harms to children stemming from its virtual reality products. Specifically, the company allegedly instructed in-house researchers to avoid work that could produce evidence of harm to children from its products, an allegation Meta dismissed as "complete nonsense".

However, given the recent revelations concerning the guidelines Meta set for interaction between its AI products and young children, their denial does not come across as credible.

It was also a bad week for NVIDIA. The bad news started with the Chinese government accusing NVIDIA of violating its antitrust laws over the acquisition of the chip designer Mellanox Technologies, despite the fact that the deal had previously been approved by Beijing. Then, a couple of days later, the Cyberspace Administration of China officially banned Chinese tech companies, including e-commerce giant Alibaba and TikTok owner ByteDance, from purchasing or testing AI chips from NVIDIA.

This comes after NVIDIA released, in July, a custom chip designed specifically for the Chinese market, which had already attracted a huge number of orders. It also puts a dent in the deal negotiated between the Trump administration and NVIDIA that would give the US Government a 15% cut of revenue from NVIDIA's chip sales in China.

Anyway, let’s dive into the rest of last week’s news …

Big Sharks vs. small fish

The Trump administration was also in the news for other reasons, namely the joint announcement that the UK had secured £150 billion worth of investment from US firms.

This happened on the day that US President Donald Trump and Prime Minister Sir Keir Starmer signed an agreement dubbed the "Tech Prosperity Deal", which sees firms such as Microsoft and Google pledging to spend billions in the UK. The vast majority of the £150bn investment - £90bn - will come from Blackstone over the next decade, although how most of this money will be spent has yet to be decided. The US private equity firm announced in June that it would spend £370bn across Europe over 10 years.

In other news, Google was busy with important announcements, including the integration of Gemini, its generative AI, into the Chrome browser. This adds a chatbot button for querying and searching across open tabs, along with context-aware AI prompts via the Omnibox. Initially it is available to US desktop users only, with a mobile rollout in progress, giving more users AI-powered browsing tools by default.

More significantly, Google also launched the Agent Payments Protocol (AP2), which is an “open protocol to securely initiate and transact agent-led payments across platforms".

AP2 uses two signed digital contracts, known as mandates, to verify 1) that a user has authorised an agent to search for products and negotiate with sellers (an "intent mandate"), and 2) that the agent may purchase on the user's behalf (a "cart mandate").

Why is this important? It means that AI agents can now make purchases independently of human interaction, moving them a step closer to full autonomy. When a human isn't present, the user signs a detailed intent mandate containing specific purchasing rules, such as price limits, which pre-authorises the agent to create a cart mandate once every rule has been met.
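As a rough illustration of the flow described above (note: the field names and checks here are hypothetical simplifications for this newsletter, not the actual AP2 schema, and real mandates carry cryptographic signatures), the two-mandate hand-off might look something like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntentMandate:
    """User-signed pre-authorisation: the rules the agent must satisfy."""
    user_id: str
    allowed_category: str  # what the user asked the agent to shop for
    max_price: float       # hard spending cap set by the user

@dataclass(frozen=True)
class CartMandate:
    """Agent-built cart awaiting verification against the intent mandate."""
    user_id: str
    item_category: str
    total_price: float

def authorise(intent: IntentMandate, cart: CartMandate) -> bool:
    """Approve the purchase only if every rule in the intent mandate is met."""
    return (
        cart.user_id == intent.user_id
        and cart.item_category == intent.allowed_category
        and cart.total_price <= intent.max_price
    )

# The user pre-authorises running-shoe purchases up to £120.
intent = IntentMandate(user_id="alice", allowed_category="running shoes", max_price=120.0)

authorise(intent, CartMandate("alice", "running shoes", 99.0))   # within the rules
authorise(intent, CartMandate("alice", "running shoes", 150.0))  # over the price cap
```

The key design point is that the agent never holds open-ended payment authority: each purchase must trace back to explicit, user-signed rules.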

This is a significant step closer to AI agents being able to fulfil their full potential, and although the success of AP2 largely depends on support from others in the ecosystem, it has already secured backing from over 60 major merchants and payment providers, including Mastercard, American Express, Accenture, and Adobe.

Heading back to China, there were two other developments this week. The first was the release by ByteDance (owner of TikTok) of Seedream 4.0, its answer to Google's Nano Banana, and once again the market has been shocked by how good it is, and by how much cheaper it is.

This echoes the situation back in February, when DeepSeek released their LLM, sending shockwaves through the AI world. They have now confirmed that it was trained in just 80 hours at a cost of only $294,000, significantly less than the estimated $100 million it cost OpenAI to train GPT-4.

Legislation, policy and other news

Arguably, the main news this week was in the legislative sphere. Italy became the first EU country to pass a law fully aligned with the EU AI Act. Key features include oversight by two national bodies, the Agency for Digital Italy and the National Cybersecurity Agency, and a focus on traceability and human oversight in sensitive sectors including healthcare, education, workplaces, and justice. Ireland also designated the competent authorities responsible for enforcing the EU AI Act.

In the US, California finally passed AI safety bill SB 53, which was then signed into law by State Governor Gavin Newsom, who had previously vetoed an earlier version of the legislation, SB 1047. SB 53 requires AI companies to disclose their AI safety testing processes and demonstrate how they are following them.

Also in the US, the Federal Trade Commission has launched investigations into several major companies (OpenAI, Meta, Snap, etc.) to obtain information about how their AI chatbots are tested, monitored, and evaluated for risks, particularly to children and teens.

In terms of intellectual property disputes, the very big news was Disney, Universal, and Warner Bros. Discovery suing MiniMax, the Chinese AI unicorn valued at over $4 billion. MiniMax has essentially branded itself as a "Hollywood studio in your pocket", using Mickey Mouse, Minions, and Marvel characters in its own advertisements and slapping its watermarks on the generated content. Not surprisingly, the studios were not impressed. They allege wilful infringement, stating that MiniMax knew exactly what it was doing when it ignored cease-and-desist letters and continued violating copyright. The outcome will have significant repercussions going forward, given AI's ever-improving ability to mimic both photos and video.

Finally in relation to AI assisting with medical issues, there was both good and bad news. The good news was that researchers trained AI on almost 37,000 eye scans to predict which keratoconus patients will go blind years before human doctors can detect progression. The system sorted patients into high-risk and low-risk groups with 90% accuracy, preventing unnecessary procedures while catching cases that would otherwise lead to corneal transplants.

On the downside, however, 60 of 182 recent FDA device recalls in the US involved AI-enabled medical devices. The most common problems were diagnostic or measurement errors, followed by functional failures or delays. About 43% of the recalls occurred within a year of the product being authorised for use in the marketplace.

That’s all for now from me at the end of yet another week … so stay informed, stay critical, and wherever possible - stay ahead.

Regards

Tom Carter