💵 $15k to Jailbreak LLMs
If you’re a fan of AI, you’ve probably seen endless “how to make $1,000 a day with ChatGPT” style videos on YouTube.
Clickbait promises paint a picture of effortless wealth, just a prompt away.
Most of these don’t play out in reality.
If you’re looking to make $1,000 a day with AI, there probably aren’t many 15-minute YouTube tutorials that’ll get you there.
Today’s Ai5:
- 💵 Anthropic Bug Bounty Program
- 😳 Flux Continues to Impress
- 📆 Nvidia Delays Next-Gen Chips
- 🍓 Strawberry 101
- 🗣️ Voice Mode Delays Explained
Prompt of the Day 🎨
This could be you, anon. 😬
Anthropic Bug Bounty Program 💵
Over the weekend, Alex Albert from Anthropic announced a new bug bounty program. It’s set to reward those who find novel jailbreaks in frontier models.
Those who are accepted into the program get early access to new models. And if you find a jailbreak in a high-risk domain like CBRN (chemical, biological, radiological, and nuclear) or cybersecurity, you can be awarded up to $15k!
This isn’t just about getting the model to say a curse word; it’s about eliciting genuinely harmful capabilities that we wouldn’t want future models to have.
Smart move from Anthropic. If you think you’ve got what it takes, you can apply for the program here.
Flux Continues to Impress 😳
The new image generator Flux continues to capture attention. The latest trend to go viral is these ultra-real TED-talk-style images.
Naturally, the next move is to put these images into Runway Gen-3. The results are really impressive.
These images are by far the most realistic we’ve seen. BUT… for those with a keen eye, there are still tells that they’re AI generated (for now).
We’re likely using the last generation of AI image tools that still have these minor flaws!
Nvidia Delays Next-Gen Chips 📆
Nvidia’s next generation “Blackwell” chips are facing delays of up to 3 months due to a design flaw found late in the production process.
The B200 chips were set to replace the popular H100 chips that have sent Nvidia profits soaring over the last 18 months.
The delay will almost certainly impact the progress of AI, with companies like Meta, Google, and Microsoft having already placed billions of dollars’ worth of orders for the next-gen GPUs.
In other chip news, Nvidia competitor and Silicon Valley startup Groq just raised $640 million for their very own AI chips.
Unlike Nvidia GPUs, which are used both for training AI models and for powering model outputs (a process known as “inference”), Groq’s AI chips are focused strictly on speeding up inference. Meaning: they deliver super-fast text output for LLMs at a lower cost than Nvidia GPUs.
If Groq can carve off even a slice of Nvidia’s roughly 90% share of the AI chip market, they could be massive in the next few years.
Strawberry 101 🍓
You might have heard the hype around OpenAI’s (supposed) new model, Strawberry, lately. Here’s everything you need to know about what could be the next big thing in AI.
Back in November 2023, rumors began of a breakthrough new model from OpenAI known as Q* (Q-star). In July 2024, it was revealed that Q* had evolved into “Strawberry.” According to unnamed sources, the model excels at autonomous web navigation, task planning, and deep research.
Fueling the hype over the last week has been a number of OpenAI employees, including Sam Altman, posting random Strawberry images on X.
Making sense of all this, I think the next big release from OpenAI is going to be an AI agent. Strawberry ticks all the boxes for what a capable AI agent needs:
Web navigation, task planning, and deep research…
And if you remember OpenAI’s 5 steps to AGI from a few weeks ago, it only makes sense that this is the next step. 👇
Dropping a truly capable autonomous agent would really put OpenAI back in front after a rough few months.
Voice Mode Delays Explained 🗣️
In more OpenAI news, they’ve just released a report outlining the safety work carried out before releasing Voice Mode.
The report gives some insight into why Voice Mode might have been delayed. It outlines several challenges that required extensive work before release. These included:
- Voice Cloning: Risks of unauthorized voice generation and impersonation
- Privacy: Concerns with speaker identification (the AI was recognizing individual users’ voices and behaving differently for each)
- Harmful/Copyrighted Content: Potential for outputting harmful or copyrighted content
- Bias: Issues with performance variations across different accents and languages
Other audio-specific vulnerabilities were also discussed including sensitivity to background noise and audio perturbations.
It seems OpenAI didn’t catch all the bugs though. This Reddit post shows Voice Mode in a normal conversation before yelling “NO!”… then proceeding to clone the user’s voice. 😅
Snack Sized 5 🍪
1️⃣ If you’re interested in web scraping, this conversation covers a bunch of interesting overlaps between it and AI.
2️⃣ Check out this cool single prompt comparison between 6 top AI image generators.
3️⃣ CrowdStrike gracefully accepts the award for “most epic fail” after causing a global IT outage a few weeks back.
4️⃣ AI plays Snake and, not surprisingly, it’s far better than we could ever be.
5️⃣ Robot dogs (potentially from Boston Dynamics?) are being deployed on the front line in Ukraine.
Join the conversation on X.
Connect on LinkedIn.
Read the Blog on Rareconnections.
Feedback? Yes please! matt@myai5.com