😈 Famous Deepfakes
At this point almost everyone knows what deepfakes are.
You’ll always hear how “scary” and “dangerous” they are.
And yes, they can be dangerous. But I believe the long-term impact will be positive.
That’s because unverified media sources will become obsolete.
Today’s Ai5:
- 😈 Famous Deepfakes
- 🧠 Gigabrain LLM Prompt Techniques
- 📰 GPT-4o Mini + Sora Update
- ❌ CrowdStrike’s Blunder Shown in Photos
- 🕹️ No-Code Text-to-Videogame Tool
Prompt of the Day 🎨
This one gets the imagination going 💭
Famous Deepfakes 😈
A few days ago I came across the video below.
It tells the story of a Ukrainian YouTuber who had her identity stolen. Her deepfaked likeness is being used to spread Chinese and Russian propaganda while speaking fluent Mandarin.
Let’s take a look at some of the most effective deepfakes in recent history.
March 2023: A set of photos showing Donald Trump being arrested went viral. Made with Midjourney, the images led the company to pause its free trial and ban prompt words like “Donald Trump,” “Joe Biden,” and “arrested.” You can see all the originals here.
January 2024: Deepfaked nude images of Taylor Swift started circulating on X. The images got over 27 million views in 19 hours before they were taken down. If you want to see them, I’ll leave that up to you!
February 2024: A finance worker paid out $25 million after a deepfaked video call with his company’s Chief Financial Officer. It wasn’t just the CFO on the call either; several other members of staff were in attendance, all of them deepfakes. The full story can be read here.
2019: This early example shows a clip from the movie The Shining with Jim Carrey as the lead. Pretty funny.
To give you an idea of how easy it is, I whipped up the image below. I don’t think this timeline plays out anytime soon. 😅
April 2023: This example shows how a mother was targeted in a kidnapping scam using her daughter’s deepfaked voice. The scammers demanded $1 million, and the mother “never doubted for one second it was her.”
That last example leads me to one final point that we should all probably take notice of.
Discuss a safe word with your family.
We’re all familiar with phishing emails, scam calls, and texts. But I don’t think many people are ready for personalized deepfake scams involving family members.
Rather than sensationalized headlines like “Deepfakes might steal the election”, I think we should be paying more attention to small-scale deepfakes like these.
It sounds far-fetched, I know, but better safe than sorry!
Gigabrain Prompt Techniques 🧠
I just love a good prompt. If you do too, check out this article I found. It’s got a bunch of awesome prompting techniques I’d never heard of.
Here’s a quick rundown of the most useful ones for improved problem solving and more.
These are great if you use LLMs as a tutor for technical subjects.
Tree-of-Thoughts (ToT)
ToT prompting allows for more complex problem solving than a standard single-pass prompt. It works by having the model explore multiple candidate solutions to the same problem in parallel, then backtracking from dead ends. Take this prompt as an example:
A farmer needs to cross a river with a fox, a chicken, and a sack of grain. The boat can only carry the farmer and one item at a time. The fox can’t be left alone with the chicken, and the chicken can’t be left alone with the grain.
- Consider multiple starting moves
- For each move, think through potential consequences and next steps
- If a path leads to a dead end, backtrack and explore another branch
- Evaluate which path seems most promising at each stage
- Continue this process until you find a valid solution
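If you’d rather script this than paste it by hand, here’s a rough Python sketch of the idea. It assumes the openai package and an API key in your environment; the model name, branch count, and prompt wording are placeholders, not a fixed recipe.

```python
# Tree-of-Thoughts, minimal sketch: sample several reasoning branches,
# then have the model evaluate them and expand the most promising one.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
MODEL = "gpt-4o-mini"  # placeholder model name

PUZZLE = (
    "A farmer needs to cross a river with a fox, a chicken, and a sack of grain. "
    "The boat can only carry the farmer and one item at a time. The fox can't be "
    "left alone with the chicken, and the chicken can't be left alone with the grain."
)

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# 1. Branch: sample three independent lines of reasoning.
branch_prompt = (
    f"{PUZZLE}\nPropose one possible first move and think through its "
    "consequences and next steps. If the path dead-ends, say so."
)
branches = [ask(branch_prompt) for _ in range(3)]

# 2. Evaluate and expand: judge the branches, drop dead ends,
#    and continue the most promising one to a full solution.
final = ask(
    f"{PUZZLE}\n\nHere are three candidate lines of reasoning:\n\n"
    + "\n\n".join(f"Branch {i + 1}:\n{b}" for i, b in enumerate(branches))
    + "\n\nEvaluate which branch is most promising, backtrack from any dead ends, "
      "and continue it until you reach a valid full solution."
)
print(final)
```

The shape is what matters here: generate several branches, evaluate them, and only then commit to one.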
Least-to-Most Prompting
Least-to-Most prompting breaks complex problems down into simpler sub-problems, which are solved in order. It’s an intuitive approach that mimics how humans often tackle difficult tasks. For example:
Write a short story about a time traveler who accidentally changes a major historical event.
- Start with the simplest elements of the story (characters, setting)
- Gradually add more complex elements (plot twists, conflict)
- For each step, use the output as input for the next, more complex section
- Continue building upon previous sections until the full story is developed
- Review and refine the final output with the user
You can see how this would give a much more controlled output overall.
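Here’s the same idea as a rough Python loop, again assuming the openai package and an API key; the step wording is just an illustration of “simplest first, most complex last.”

```python
# Least-to-Most, minimal sketch: each sub-problem's answer is appended to the
# context and fed into the next, more complex step.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
MODEL = "gpt-4o-mini"  # placeholder model name

TASK = ("Write a short story about a time traveler who accidentally changes "
        "a major historical event.")

# Ordered from simplest to most complex; each step builds on everything before it.
steps = [
    "Describe the main character and the setting in 2-3 sentences.",
    "Outline the inciting incident and the historical event that gets changed.",
    "Add a plot twist and the central conflict it creates.",
    "Write the full short story (400-600 words) from the material so far.",
]

context = TASK
for step in steps:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": f"{context}\n\nNext step: {step}"}],
    )
    answer = resp.choices[0].message.content
    # This step's output becomes part of the input for the next one.
    context = f"{context}\n\n{step}\n{answer}"

print(answer)  # the finished story from the final, most complex step
```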
Chain-of-Verification (CoVe)
CoVe uses a multi-step process where the model generates an initial response, creates verification questions to check its own work, answers those questions, and then produces a revised response based on this self-verification process.
What were the major causes of World War II?
- Generate an initial list of causes
- Verify each cause by considering historical evidence and scholarly consensus
- Identify any contradictions or oversimplifications in the initial response
- Research additional factors that might have been overlooked
- Assess the relative importance of each cause
- Synthesize a final, well-rounded answer that acknowledges the complexity of historical events
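And a minimal Python sketch of CoVe, same assumptions as above (the openai package, an API key, and a placeholder model name):

```python
# Chain-of-Verification, minimal sketch: draft -> verification questions ->
# independent answers -> revised final response.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "What were the major causes of World War II?"

# 1. Draft an initial answer.
draft = ask(question)

# 2. Generate verification questions that probe the draft for weak spots.
checks = ask(
    f"Question: {question}\nDraft answer:\n{draft}\n\n"
    "Write 5 short verification questions that would expose any factual errors "
    "or oversimplifications in the draft."
)

# 3. Answer the verification questions independently of the draft.
check_answers = ask(f"Answer each of these questions concisely:\n{checks}")

# 4. Revise the draft in light of the verification step.
final = ask(
    f"Question: {question}\nDraft answer:\n{draft}\n\n"
    f"Verification Q&A:\n{checks}\n{check_answers}\n\n"
    "Rewrite the answer, correcting anything the verification step contradicted "
    "and acknowledging the complexity where relevant."
)
print(final)
```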
You can see how taking the time to create an effective prompt can significantly improve your results when working with LLMs.
If you want a full study on these advanced prompting techniques you can find it here. Totally worth bookmarking for later.
GPT-4o Mini + Sora Update 📰
OpenAI has announced GPT-4o mini, their most cost-efficient small language model. Here are a few key details:
- Surpasses GPT-3.5 Turbo and other small models on various benchmarks (GPT-3.5 is also now retired 🫡)
- Excels in reasoning tasks, math, coding, and multimodal reasoning
- Outperforms Gemini Flash and Claude Haiku on several benchmarks
The biggest thing to note about mini is the price. It’s more than 60% cheaper than GPT-3.5 Turbo. Sam Altman posted “Towards intelligence too cheap to meter.”
It also lends weight to the speculation that Small Language Models (SLMs) are the future of conversational AI.
Sora Update
OpenAI just dropped 7 new Sora videos on their YouTube channel and no one seems to be talking about it.
I put together some of the best examples here. Give it a like!
No-Code Text to Videogame 🕹️
Buildbox is a game development platform designed to create games without coding knowledge.
The latest release, Buildbox 4, is pushing the boundaries of AI-assisted game creation, moving us into a future of no-code game dev.
Just like Midjourney or Runway, we’re now starting to see games created with text commands. Pretty awesome.
You can even publish your games to the app store. And there have been quite a few successful Buildbox titles already!
CrowdStrike’s Blunder ❌
Over the weekend, CrowdStrike (ironically, a cybersecurity company) pushed out a faulty software update. The update contained a critical error that caused Windows computers to crash with the “blue screen of death.”
This mistake affected millions of computers worldwide.
This gives you an idea of how massive it was. The skies across America hadn’t been this quiet since 9/11.
Japanese train stations still using Windows 2000. 😅
Images from Times Square in New York.
Durango Casino in Las Vegas.
Madrid Airport.
Medical equipment. 🤦‍♂️
Pretty amazing/scary that a single faulty update can have so much impact.
If you want a technical deep dive on the cause, I found one below. 👇
Snack Sized 5 🍪
1️⃣ A new longevity study extends the lifespan of mice by 25% through genetic deletion of Il11 (whatever that means). They even look younger.
2️⃣ Check out this cool real-time diffusion example. It takes a man dancing on stage and turns his moves into a waterfall animation.
3️⃣ This guy uses computing power from an iPhone + iPad + Galaxy S24 + MacBook + 2 x 3090 GPUs to run Llama 70B at home.
4️⃣ The power of braindumps and LLMs.
5️⃣ Stanford researchers created a Sims-like game filled with NPCs powered by ChatGPT. They planned a party together.
Help us grow by sharing Ai5 with a friend. I’ll give you a cookie. 🍪
Join the conversation on X.
Connect on LinkedIn.
👈 Read Previous Issue | Read Next Issue 👉
Read the Blog on Rareconnections.
Buy 1 get 2 free on PromptBase.
Feedback? Yes please! matt@rareconnections.io