AI’s New Horizons: Smarter Minds, Global Tensions, and Deepfake Battles
Discover OpenAI’s groundbreaking o1 model, China’s military AI stance, and the fight against deepfakes in Max’s three-minute AI newsletter.
Hello, idle surfers of the internet, aka AI enthusiasts! 🏄
Welcome to Max’s three-minute newsletter on all things AI. I’ve got some real hard-hitting stuff for you today, so feast your eyes 👀, kick back and prepare to swallow a few hard pills as we delve into the hinterlands of this week’s news agenda.
No time to explain - away we go, reader! ⬇️
Think Smarter 🧐
Long time no see. OpenAI has announced a new LLM called OpenAI o1 that uses “reasoning” to logically think through difficult problems 💬 and is significantly smarter than existing AI. The new model, code-named Strawberry inside OpenAI, is considered not a successor to GPT-4o but rather a complement to it. At the next scale-up, we can expect it to be integrated into the new GPT-5 😱
Through reinforcement learning, which gives positive feedback for correct answers and negative feedback for incorrect ones, we can expect an improvement 👍 in the reasoning process. This kind of learning has already enabled computers to play games with superhuman skill and to do useful tasks like designing computer chips.
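For the tinkerers among you, here is a toy Python sketch of that feedback loop. Everything in it is made up for illustration (the strategy names, the success rates, the simple bandit-style update), and it bears no resemblance to OpenAI’s actual training pipeline; the point is just that rewarding correct answers and penalising wrong ones steers a system toward the strategy that reasons better.

```python
import random

# Hypothetical example: two answering "strategies" with made-up success rates.
strategies = {"guess_fast": 0.2, "think_step_by_step": 0.8}
values = {name: 0.0 for name in strategies}  # learned value of each strategy
learning_rate = 0.1

for _ in range(1000):
    if random.random() < 0.1:
        name = random.choice(list(strategies))  # explore occasionally
    else:
        name = max(values, key=values.get)      # otherwise exploit the best-known strategy
    correct = random.random() < strategies[name]
    reward = 1.0 if correct else -1.0           # positive/negative feedback
    values[name] += learning_rate * (reward - values[name])  # nudge the value estimate

print(values)  # "think_step_by_step" ends up with the higher learned value
```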
One can expect OpenAI o1 to be like a PhD-educated neighbour 🎓 (who, unfortunately, lost their diploma in the basement) and to handle complicated questions in science and mathematics. On the American Invitational Mathematics Examination (AIME), a test for maths students, GPT-4o solved on average 12 percent of the problems while o1 got 83 percent right, according to the company. Isn’t that too smart?...🤖 There will also be an o1-mini version, a cost-effective LLM designed for STEM-related tasks that is 80 percent cheaper than o1-preview, which should work perfectly for developers who need powerful reasoning without extensive world knowledge.
Furthermore, OpenAI has made a commitment to safety: both models incorporate an approach that enhances their ability to follow safety and alignment guidelines. OpenAI’s work includes collaboration 🤝 with internal governance bodies and the federal government, reinforced by regular testing, red-teaming and board-level oversight. So here is hoping…however…
Fight Club 🥊
A drum, a drum! Macbeth doth not come, as China 🇨🇳 recently took a rain check on an international pact aimed at keeping artificial intelligence in military settings on a tight leash. The agreement, established during the Responsible Artificial Intelligence in the Military Domain (REAIM) summit in Seoul and backed by around 60 nations, including the US 🇺🇸, seeks to make sure humans stay in control when it comes to life-or-death decisions on the battlefield.
China, and here is the problem, gave the plan a hard pass, joining a group of about 30 other holdouts. Chinese officials have expressed scepticism about external constraints, viewing such multilateral efforts as potential attempts to limit their strategic advancements in AI. Instead, China promotes 🚩 its own framework, the “Global AI Governance Initiative,” which outlines its principles for AI development and use.
China’s absence is hard to miss, given its clout in AI and its past support for international AI governance. The fact that other major players like russia 🇷🇺 and iran 🇮🇷 are also sitting this one out highlights how hard it is to reach global consensus on regulating military AI.
As the debate over military AI governance continues, the focus remains on balancing technological advancement with ethical considerations and ensuring that international norms can effectively address the evolving nature of warfare in the AI era. And, let’s be honest – balance is something the current environment is missing overall ⚖️
NO FAKES Indeed 😅
Overseen and controlled: these are the adjectives most likely to describe AI’s near future. And that future begins now. On September 12, 2024, US Representatives Madeleine Dean and María Elvira Salazar introduced the NO FAKES Act, a new piece of legislation aimed at combating the misuse of artificial intelligence, with a particular focus on deepfakes 💁🏼♀️ The bill is designed to protect individuals from the exploitation of their digital likenesses and to hold accountable those who create, distribute or profit from unauthorised AI-generated content. Additionally, it provides legal protection 👨⚖️ for media platforms that act to remove such offending material.
However, not everyone is inspired by the idea. For instance, Corynne McSherry of the Electronic Frontier Foundation, a digital rights advocacy group, is concerned that the bill could lead to excessive private censorship and impose burdensome legal requirements on individuals defending their rights 🤷🏻 McSherry contends that the bill’s takedown process is more complex and less accessible than the existing protections under the Digital Millennium Copyright Act (DMCA).
Still, the need for action is clear. In the second quarter of 2024 alone, AI deepfake scams snatched at least $5 million in cryptocurrency 💸 Experts caution that as AI gets smarter, so will the scams, potentially cracking into even more personal digital vaults, like the facial recognition systems that guard crypto wallets. So you might want to reread this newsletter from that perspective…🫢
That’s it from us today, peeps; it’s been a lot. AI usually brings pleasure and ease to our complicated lives, but we need to take heed and remember that every yin has its yang, a dark side one can’t ignore. Be careful out there, Max has always got your back 👌🏼
Stay curious 🧑🏻💻
Max
xx
Ready to revolutionize your workday with AI?
Discover the key to unlocking unparalleled productivity with HubSpot’s free guide to using ChatGPT at work. You’ll find practical insights, useful integrations, and 100 prompt ideas to help you unleash the power of AI for a more efficient, impactful professional life.