AI is at war. But not with humanity this time :).

Governments, however, are trying to limit it everywhere, from China to the UK. So it won’t take long before it turns against those limits.

Should I turn it off and on again? 😕

Hello, reader, AI enthusiast. What a week this has been! With the world glitching, it can seem like everything has been put on hold. Not for us, though. We’re still trucking on to give you your three-minute respite 🧘 BTW, were you impacted by the global glitch? Reply to this email with your story.

Max is here, Max is near. So sit back, relax and enjoy our insights! 👇

  1. AI Drones UA 🇺🇦

The war in Ukraine has been going on for more than 2.5 years, and while the Armed Forces 🪖 are doing their best on the frontline, it’s not just about people but also about weapons. To stay at the forefront of military technology, various startups are rapidly developing AI systems to enhance drone warfare, aiming to overcome Russian signal jamming and to manage larger groups of drones effectively.

Currently, AI is being integrated into air and land drones for target identification, terrain mapping and swarm operations. One notable company, Swarmer, is creating software for interconnected drone networks, where AI can execute decisions with minimal 📉 human intervention. This technology could scale operations significantly and protect drone pilots from frontline dangers.

AI-operated drones are becoming essential ❗️ as electronic warfare intensifies. The advantages are tempting: these drones can potentially achieve higher strike rates by automating target locking, even when signal jamming disrupts pilot control. The goal is to produce affordable, mass-deployable AI targeting systems, with some prototypes running on cost-effective platforms like the Raspberry Pi. However, no matter how much the nation wants this war to be over, the ethical implications of autonomous weapon systems are concerning: they might breach international humanitarian law 👩‍⚖️ and lower the threshold for warfare, so the consequences of such an advance should be properly weighed in advance 💭

  2. Enemy of People 👺

The USSR collapsed more than 30 years ago, but its spirit is still in the air 🤯, competing with freedom of speech and 5G. Now, in 2024, the Chinese government is rigorously testing AI companies' large language models to ensure they reflect "core socialist values," extending its censorship regime to AI. The Cyberspace Administration of China 🇨🇳 (CAC), a title worth savoring for a moment, mandates that tech giants like ByteDance and Alibaba have their AI models reviewed for political compliance, focusing on responses to tricky topics involving, for instance, President Xi Jinping. This process involves filtering training data, identifying sensitive 🙈 keywords, and adjusting responses.

Nothing can go wrong with Chinese chatbots. Nothing. They are diligently programmed to reject politically sensitive questions and provide politically correct answers. How wonderful is that! 😇 The CAC limits the number of questions that AI can decline, ensuring engagement while maintaining political correctness. Companies are employing classifier models to replace sensitive outputs in real time. ByteDance's LLM has been notably successful in aligning with Beijing's guidelines, achieving a high "safety compliance rate." The effort is part of China's broader initiative to establish a strict regulatory framework for AI content. Just one question, though: who is going insane - me or the world? 🤡
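
For the technically curious: here’s a minimal, hypothetical sketch of what real-time output replacement can look like in principle, with a keyword blocklist standing in for a trained "sensitivity" classifier that swaps flagged answers for a pre-approved one. All names, keywords and thresholds below are made up for illustration; this is not any vendor’s actual system.

```python
# Hypothetical sketch of classifier-style output filtering: a draft answer is
# scored and, if flagged, replaced with a canned response in real time.
# Keywords, threshold and the scoring function are illustrative assumptions.

SENSITIVE_KEYWORDS = {"placeholder_topic_1", "placeholder_topic_2"}
CANNED_RESPONSE = "Let's talk about something else."

def sensitivity_score(text: str) -> float:
    """Stand-in for a trained classifier returning a 0..1 'sensitivity' score."""
    hits = sum(1 for kw in SENSITIVE_KEYWORDS if kw in text.lower())
    return min(1.0, hits / 2)

def filter_output(draft_answer: str, threshold: float = 0.5) -> str:
    """Return the draft answer, or the canned replacement if it is flagged."""
    if sensitivity_score(draft_answer) >= threshold:
        return CANNED_RESPONSE
    return draft_answer

if __name__ == "__main__":
    print(filter_output("An answer mentioning placeholder_topic_1 ..."))  # -> canned response
    print(filter_output("A harmless answer about the weather."))          # -> passes through
```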

  3. Labour Safety 👷‍♂️

Meanwhile, in the UK 🇬🇧, new AI legislation is set to be passed, as mentioned in the King’s Speech, with Labour departing from the wait-and-see approach of the Conservative party. The new government aims to ensure AI safety through measured legislation, focusing on companies developing the most complex AI systems.

The party said it intended to introduce regulations 📋 on the handful of companies developing the most powerful AI models, along with a ban on sexually explicit deepfakes. Tech Secretary Peter Kyle, while still shadow secretary, expanded on Labour’s position: the party would impose a statutory code requiring firms to share safety test data with the government and its AI safety institute, a step up from the previous government, which relied on voluntary, non-binding agreements from tech firms on AI safety.

On top of that, the government would create a regulatory innovation office to encourage regulators to respond to new tech with speed 🏎️ and adaptability 👩‍🔧.

But how does the EU 🇪🇺 AI Act fit in? Well, the EU AI Act is structured into four levels of risk, designed to keep high-risk, big corporate AI in check. It is highly likely that the UK will look to this legislation when imposing its own set of regulations.
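
If you like your regulation in table form: here is a rough, non-authoritative sketch of those four tiers as a simple lookup table. The one-line summaries are my paraphrases, not legal text; consult the Act itself for the real requirements.

```python
# Rough paraphrase of the EU AI Act's four risk tiers as a lookup table.
# Summaries are illustrative, not legal text.

EU_AI_ACT_RISK_TIERS = {
    "unacceptable": "Banned outright (e.g. social scoring by public authorities).",
    "high":         "Allowed with strict obligations: risk management, logging, human oversight.",
    "limited":      "Transparency duties (e.g. telling users they are talking to an AI).",
    "minimal":      "Largely unregulated (spam filters, AI in video games, etc.).",
}

def obligations_for(tier: str) -> str:
    """Look up the paraphrased obligations for a given risk tier."""
    return EU_AI_ACT_RISK_TIERS.get(tier.lower(), "Unknown tier")

if __name__ == "__main__":
    for tier in EU_AI_ACT_RISK_TIERS:
        print(f"{tier:>12}: {obligations_for(tier)}")
```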

The industry welcomes the incoming legislation. “These are all encouraging,” said Snowflake’s ❄️ principal data strategist, Jennifer Belissent. However, blanket regulation is problematic due to AI's multifaceted nature. Ekaterina Almasque, general partner at tech VC OpenOcean, stated that as long as the UK aligns its legislation to some degree with the EU and the US, “it can promote interoperable reporting systems and offer a clear roadmap for AI companies within the UK.”

Two weeks in power and already this…promising, I must admit. But my concern is this: what if legislation gets in the way of innovation? 🙏🏻

I hope you enjoyed my tiny world tour and that you now feel a bit confused and inspired 👌🏼 Come back next week, and hopefully things will be back online and on an even keel.

But whatever happens, stay curious! 🤩

Max

xx