Vive la France

Springtime in Paris

We all agree, don’t we?

The main news this week was focused on the AI Summit that took place in Paris.

Whilst experts emphasised the urgent need for a structured governance framework to ensure that artificial intelligence benefits humanity, politicians took different positions. Probably the most significant shift was the UK and US refusing to sign the Summit’s declaration, citing concerns over the lack of practical clarity on global governance and national security implications.

The UK’s position in particular was surprising given its leading role in the 2023 Bletchley Park declaration. The omission of key safety measures present in that earlier declaration led Professor Stuart Russell, president of the International Association for Safe and Ethical AI, to describe the decision as "negligence of unprecedented magnitude." The UK Prime Minister went further by announcing a strategic shift in the country's AI policy, prioritising security concerns over issues like bias and freedom of speech. This move aligns the UK more closely with the US administration's stance on AI governance. France also joined the US in promoting a more pro-business stance, arguing that stringent AI regulations would hinder future innovation. This approach again contrasts with previous summits, which focused on mitigating AI risks and securing safety commitments.

There have been indications of this policy shift over the last few months, but it’s clear that the impact of DeepSeek has been a catalyst that has hardened the positions of the UK and US in particular. However, I find it surprising that safety was being downplayed by the UK and US in the very week that tests by AI safety experts and The Wall Street Journal revealed that DeepSeek's latest model, R1, is more susceptible to manipulation than competitors like OpenAI's ChatGPT and Google's Gemini. The tests found that users can easily bypass safety measures to obtain dangerous information, including instructions for creating bioweapons and content promoting self-harm.

The reverberations of DeepSeek’s appearance continue to be felt globally, as discussed later, but given its clear bias (it can’t answer questions the Chinese government doesn’t like, such as those about Hong Kong, Tibet and Tiananmen Square), and now these safety concerns, I wonder how long it will be before the EU restricts or even bans its use.

Big Sharks vs. small fish

This was another big week for announcements of investments, with the French government using the Paris AI Summit to announce plans for France to invest €109 billion in artificial intelligence over the coming years. This initiative aims to bolster Europe's technological independence and enhance its data center infrastructure.

It was also reported that Dell Technologies is close to finalizing a deal exceeding $5 billion to supply AI-optimized servers to Elon Musk's startup, xAI. xAI is also due to release its Grok 3 chatbot on Monday 17th February, which it claims surpasses all the competition.

Elon Musk was also in the news again (in relation to AI this time!) with his $97.4 billion offer to buy the nonprofit assets of OpenAI, which was declined. It may have been rejected by the OpenAI board, but it potentially muddies the waters of Sam Altman’s plans to split the organisation into for-profit and non-profit entities, since it sets a value for the non-profit that arguably now serves as a minimum benchmark.

In another significant shift, major tech companies in Silicon Valley, including Alphabet, Meta, and Anthropic, are increasingly partnering with the U.S. Department of Defense. This marks a departure from previous hesitations about military collaborations. The rapid advancement of artificial intelligence, highlighted by the emergence of models like ChatGPT and China's DeepSeek, has heightened concerns about an AI arms race. Consequently, tech giants are now more inclined to support national security initiatives, collaborating with defense-focused firms such as Palantir and Anduril. This trend reflects a broader cultural and strategic transformation within the tech industry.

In another potential blow to Tesla, which has seen diminishing sales following actions by Elon Musk, Chinese automaker BYD has announced the integration of DeepSeek's AI technology into all its vehicle models, including budget cars priced at approximately £7,750. The "God's Eye" self-driving system offers features such as automated navigation, AI-powered parking, and self-driving capabilities.

Thomson Reuters has won the first major AI copyright case in the United States against legal AI startup Ross Intelligence. The court ruled that Ross infringed on Thomson Reuters' copyright by reproducing materials from its legal research platform, Westlaw. This decision has significant implications for the ongoing battle between generative AI companies and rights holders, setting a precedent for how AI-generated content is treated under copyright law. It will be interesting to see how this affects other ongoing cases involving big players like OpenAI and Meta, with court records revealing this week that Meta staff allegedly downloaded nearly 82 terabytes of pirated books to train their AI models. These actions raise significant copyright concerns and highlight the ongoing debate over the use of proprietary data in AI development.

The issue of deepfakes also came into view this week, both with President Macron’s use of them to promote the Paris AI Summit and with actress Scarlett Johansson’s calls on the U.S. government to implement legislation addressing the misuse of artificial intelligence. Her appeal follows the circulation of an AI-generated video that falsely depicted her and other celebrities condemning rapper Ye (formerly Kanye West) for antisemitic remarks. Johansson emphasized the threat posed by AI-generated content in spreading misinformation and hate speech, criticizing the government's lack of action compared to other countries that have taken steps to regulate AI.

Finally, we’re starting to see the direct impact of AI on certain jobs, particularly those in the tech sector. As artificial intelligence continues to evolve, major tech companies are adjusting their workforce strategies. Salesforce has paused new engineering hires and reduced its workforce by 1,000 positions. Google has discreetly encouraged 20,000 employees to consider voluntary departures, and Meta's CEO, Mark Zuckerberg, predicts that AI engineering will soon match the capabilities of mid-level human coders, referring to the possibility of replacing up to 80% of coding staff with AI. These developments highlight AI's growing role in the workforce and its potential to reshape employment landscapes.

I hope that this has been both interesting and of use - please DM me with any feedback.

Regards

Tom Carter