DeepSeek = more free stuff!
As well as first steps for the EU AI Act and lots more
Where to start?
This week was another example of why I started this newsletter: lots of developments in terms of products, the main players releasing new offerings, and the first parts of the EU’s AI Act coming into force. Apologies in advance for what is a long newsletter this week.
So, in a plot twist worthy of a billion-dollar sci-fi blockbuster, over the last week or so Chinese startup DeepSeek has rocked the tech world with its R1 AI model. With its advanced capabilities crafted on (allegedly!) a shoestring budget, it has unnerved investors, hitting the market cap of tech behemoths like Nvidia and Microsoft. Accusations abound, including OpenAI claiming that its work was copied (pot calling the kettle black?) and suggestions that the Chinese government orchestrated the publicity to undermine US announcements on AI infrastructure development.
Whilst the figures quoted in the press for DeepSeek actually reflect only the final stage of development costs, the new approach to training LLMs has rocked the big players, particularly after Stanford and the University of Washington announced this week that they had developed their own AI reasoning model, s1, for less than $50 using a similar methodology. The result has been many of the big players opening up their tools, either at reduced prices or for free.
Expect more upheaval as the fight for dominance of the marketplace heats up.
The other major development was the release of guidelines for the prohibitions under the EU’s AI Act, which came into force a week ago. I’ve provided a summary of the guidelines below, but with all of the recent hype around DeepSeek it’s worth considering the impact of both the EU AI Act and GDPR more broadly on its use within the EU. Even if, as now, some providers route the service via servers based in the USA, the restrictions on and suppression of information (in line with Chinese government requirements) will run into legislative issues in the EU, and probably other countries too.
Big Sharks vs. small fish
OK … I’ve listed below what I think are the key developments from the last week:
1. OpenAI's o3-mini Release
The o3-mini model offers state-of-the-art performance in maths and coding while being accessible to all tiers, including free users. Pro users have unlimited access, and even free plans allow combining it with search via the "Search + Reason" option.
2. Deep Research for Pro Users
OpenAI launched "Deep Research" for the $200/month Pro tier, enabling detailed, highly accurate analyses by combining web research with reasoning capabilities.
3. Gemini 2.0 by Google
Google introduced Gemini 2.0 with three models (Flash, Flash-Lite, Pro), offering a one- to two-million-token context window. They're cost-effective, with token pricing significantly lower than competitors.
4. Accessibility and Free Tools
As mentioned above, both OpenAI and Google made accessing their advanced AI models and tools easier, doubtless prompted by DeepSeek’s arrival on the scene. For instance, Google’s "AI Studio" now makes models like Gemini 2.0 Flash freely available for experimentation (see the short sketch after this list).
5. Creative AI Updates
Tools like Grok on X introduced AI image editing, while Pika Labs added features like turning pet photos into interactive videos or blending images into existing footage.
6. Video Restoration Advancements
Topaz Labs’ "Project Starlight" debuted as a diffusion model for enhancing old, low-quality videos to high resolution.
7. Other AI Players and Features
Competitors like Mistral AI (with its Le Chat chatbot) and GitHub Copilot added features like rapid output speeds and self-healing code iteration.
8. AI Open-Source Perspective Shift
OpenAI's Sam Altman expressed a belief in revisiting open-source strategies, hinting at future changes in the company's stance on transparency.
9. OpenAI taking on Google?
Last, but definitely not least, OpenAI effectively opened up its web search feature to everyone, without any requirement to sign in to an account. This is a direct challenge to Google, and it will be interesting to see how this develops given the potential impact on Google’s revenue stream.
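Item 4 above mentions that Google’s AI Studio makes Gemini 2.0 Flash free to experiment with. For the technically curious, here’s a minimal sketch of what a first call might look like from Python, assuming the google-generativeai package and a free API key from AI Studio; the model name and prompt are illustrative rather than anything confirmed in Google’s announcement.

```python
# Minimal sketch: calling Gemini 2.0 Flash with a free AI Studio key.
# Assumes: pip install google-generativeai, plus a key from aistudio.google.com.
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-2.0-flash")
response = model.generate_content(
    "Summarise the prohibitions in Article 5 of the EU AI Act in three bullet points."
)
print(response.text)
```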
The Legislative Voyage Begins
I’m starting with an overview of what the EU AI Act is and my thoughts on some possible issues. Then I’ve added a summary of the 140 pages of guidelines on the prohibitions established under Article 5 of the AI Act; the latter was summarised with the help of ChatGPT, demonstrating one of the benefits of AI in practice.
So …
Primarily, the EU’s AI Act establishes a structured framework for regulating “high-risk AI systems”, distributing responsibilities among “providers, authorized representatives, importers, distributors, and deployers”. This approach is designed to ensure “accountability and compliance” across the AI value chain, with “providers bearing the primary responsibility” for ensuring regulatory adherence before their AI systems enter the market. From a UK perspective it's important to note that “non-EU providers must appoint an authorized EU representative” to oversee compliance obligations.
There are some concerns that the Act’s linear framework may struggle to address complex, multi-party relationships, potentially creating accountability gaps. An example would be where large tech companies leverage contractual mechanisms to shift liability from providers to deployers. Whilst the Act aims to “prevent harmful AI practices”, such as “manipulation, exploitation, and biometric surveillance”, the enforcement mechanisms may struggle to be effective in cases where AI-related harm is difficult to trace back to specific actors.
Another issue is that “deployers” are required to “inform providers before notifying authorities”, which could delay regulatory intervention, particularly in cases involving “AI manipulation, deceptive practices, or exploitative vulnerabilities”. Establishing links between the way LLMs are designed and subsequent harm is likely to be a challenge, and the Act does not fully bridge the gap between harmed individuals and responsible parties. This raises concerns about whether enforcement measures are sufficient to protect fundamental rights while ensuring AI innovation remains safe, transparent, and accountable.
Part of the challenge is that, whilst the EU AI Act provides a broad regulatory framework, the whole situation is developing and evolving rapidly, particularly around certain AI models that could pose systemic risks. Currently a lot of the specifics have not yet been defined, with a reliance on a voluntary code of conduct rather than more binding legislative constraints. Reporting indicates that some EU Member States are pushing the Commission to revise the threshold for when certain general-purpose AI models should be classed as a systemic risk.
Clearly the industry has lobbied hard to soften the sharper edges of this legislation, as demonstrated by a subtle but important shift in the language used: "signatories commit to" rather than "signatories will". Interestingly, reporting incidents does not imply any liability, although providers now have to review their Safety and Security Framework every six months instead of annually. It’s worth noting that violations can result in fines of up to 7% of global annual turnover for prohibited practices and 3% for other infringements.
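To put those percentages in context, here is a purely illustrative back-of-the-envelope calculation; the turnover figure is hypothetical and these are maximum caps, not automatic penalties.

```python
# Purely illustrative: maximum fine exposure under the percentages quoted above,
# for a hypothetical company with EUR 2 billion in global annual turnover.
global_turnover_eur = 2_000_000_000

fine_cap_prohibited = 0.07 * global_turnover_eur  # up to 7% for prohibited practices
fine_cap_other = 0.03 * global_turnover_eur       # up to 3% for other infringements

print(f"Prohibited practices cap: EUR {fine_cap_prohibited:,.0f}")  # EUR 140,000,000
print(f"Other infringements cap:  EUR {fine_cap_other:,.0f}")       # EUR 60,000,000
```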
Summary of Document: AI Act Prohibitions Overview
Background and Purpose
These guidelines outline the prohibitions established under Article 5 of the AI Act (Regulation (EU) 2024/1689), which bans certain AI practices deemed harmful or violating fundamental rights within the European Union. These prohibitions apply to the “placing on the market, putting into service, or use of specific AI systems” that pose unacceptable risks. The goal is to prevent AI applications that could lead to “manipulation, exploitation, social control, discrimination, or unjustified surveillance”, ensuring AI is used ethically and in alignment with “EU values”.
Prohibited AI Practices (Article 5 AI Act, Section 2.1)
Prohibitions under “Article 5(1) AI Act” include:
1. Harmful Manipulation and Deception (Article 5(1)(a))
- AI systems that use “subliminal, manipulative, or deceptive techniques” beyond a person’s consciousness to distort behaviour.
- If the “objective or effect” is to significantly harm individuals or groups, these AI systems are “banned”.
2. Exploitation of Vulnerabilities (Article 5(1)(b))
- AI systems that “exploit age, disability, social, or economic vulnerabilities” to manipulate or distort behaviour, leading to potential harm.
3. Social Scoring (Article 5(1)(c))
- AI systems that “classify or evaluate individuals based on their social behaviour or personal characteristics” to assign a social score, leading to:
- “Unjustified negative treatment” based on unrelated social contexts.
- “Disproportionate penalties” unrelated to the severity of social behaviour.
- This prohibition applies to both “public and private entities” to prevent “mass surveillance and discriminatory social control”.
4. Criminal Offense Prediction Based on Profiling (Article 5(1)(d))
- AI systems that “predict criminal behaviour” based “solely on profiling, personality traits, or characteristics” are “banned”.
- The only exception is if the system is used to “support human decision-making based on objective and verifiable facts directly linked to a crime”.
5. Facial Recognition via Untargeted Scraping (Article 5(1)(e))
- AI systems that develop or expand “facial recognition databases” through “indiscriminate data collection” from the internet or CCTV footage.
- This addresses risks related to “privacy violations and unauthorized biometric data usage”.
6. Emotion Recognition in Workplaces & Schools (Article 5(1)(f))
- AI systems that “infer emotions” in workplaces or educational institutions “are prohibited”, except for “medical or safety reasons”.
- This measure is designed to “protect privacy and prevent emotional surveillance in professional and academic settings”.
7. Biometric Categorization Based on Sensitive Data (Article 5(1)(g))
- AI systems that categorize people “based on biometric data” to infer:
- “Race, political beliefs, trade union membership, religion, sex-life, or sexual orientation”.
- Exception: “Labelling or filtering biometric datasets for law enforcement purposes” is allowed.
8. Real-Time Remote Biometric Identification (RBI) in Public Spaces (Article 5(1)(h))
- AI-driven “real-time biometric identification in public areas” (e.g., facial recognition) is banned for “law enforcement purposes”.
- Exceptions:
- Searching for specific “victims of crime”.
- Preventing serious threats like “terrorist attacks”.
- Locating “suspects of serious criminal offenses” (subject to strict procedural safeguards).
Legal Basis & Application of Prohibitions
- Article 114 TFEU (Internal Market Regulation) and Article 16 TFEU (Data Protection) provide the legal foundation.
- “AI providers and deployers” must ensure compliance, taking responsibility for the “design, implementation, and monitoring” of AI systems.
- Market Surveillance Authorities (MSAs) will enforce compliance starting August 2025, with penalties for violations.
Interplay with Other AI Regulations
- AI systems “classified as high-risk” under Article 6 AI Act “may still fall under prohibitions” if they violate Article 5.
- General-purpose AI systems (e.g., “large language models and multimodal AI”) are “not exempt” if their use results in prohibited applications.
- AI practices not explicitly banned may still be “restricted under data protection laws, consumer protection laws, and other EU legal frameworks”.
Conclusion
The AI Act “strictly prohibits” AI systems that involve “deception, manipulation, social scoring, biometric mass surveillance, and unjustified criminal profiling”. These prohibitions aim to “protect fundamental rights, prevent societal harm, and uphold ethical AI development” across the EU. Compliance is mandatory from February 2025, with enforcement and penalties beginning in August 2025.
Tools and tricks
Lastly I wanted to reference one of the tools that I find particularly helpful in terms of keeping on top of everything that’s happening, namely Otio.
It serves as a digital library to which you can add both documents and weblinks. For example, you can add the link to a YouTube video of interest and then ask it to provide a summary and/or ten key points about the video, all in a couple of minutes or less. The result is that instead of spending 30 minutes watching a video, you can read a summary in a few minutes.
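For anyone curious what that kind of workflow looks like under the hood, here is a rough sketch of the same idea done by hand: pull a YouTube transcript and ask an LLM for the key points. Otio’s internals aren’t public, so the packages, model name and prompt below are my own assumptions rather than how Otio actually works.

```python
# Rough sketch of an Otio-style video summary, done manually.
# Assumes: pip install youtube-transcript-api openai, and an OPENAI_API_KEY in the environment.
from youtube_transcript_api import YouTubeTranscriptApi
from openai import OpenAI

video_id = "VIDEO_ID_HERE"  # the part after "v=" in a YouTube URL

# Fetch the transcript (older get_transcript interface; newer package versions differ).
transcript = " ".join(
    chunk["text"] for chunk in YouTubeTranscriptApi.get_transcript(video_id)
)

client = OpenAI()
summary = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {
            "role": "user",
            "content": "Summarise this video transcript in ten key points:\n\n"
            + transcript[:20000],  # truncate very long transcripts
        }
    ],
)
print(summary.choices[0].message.content)
```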
I also use it to provide me with a daily summary of what’s happening in relation to AI. The recently released ChatGPT Tasks feature can do something similar, but with Otio I have it specifically review the top 50 or so tech websites that cover AI, and I wake up to an update in my inbox every morning.
Hope that you’ve found this interesting and informative, and I look forward to keeping you updated in the future.
Regards
Tom Carter