🤺Meta Versus OpenAI: Which Side Are You On?
Turn Your Favorite Images into Videos Using AI
Hey there!
Happy Sunday!!! I’m back!
This time I've totally ditched my chores to read up on and share some cool & shocking AI stuff with you! 🤓
So, grab your favourite snack (mine's a big bowl of chips), and ignore that pile of laundry.
In this edition, I’m sharing how Facebook's parent company is giving OpenAI tough competition. Also, how an AI can remember stuff about you.
I've got all that and more, explained in a way that won't make your head spin.
Let's jump in!
What’s Inside Today’s Newsletter
🪰 Buzz Around AI
😀 AI Recap
🪄 AI Creation
🔹 AI Learning
♨️ What’s Hot In AI
📰 AI News
Read time: 10 mins
🪰 Buzz Around AI
Meta's Tech Leap Challenges OpenAI
Just days after OpenAI unveiled its Advanced Voice mode, Meta matched that innovation.
They're introducing a feature that lets users chat naturally with famous voices.
It's similar to OpenAI's version, but you don't need a subscription to try it.
Voice Chat: Meta's new feature lets you talk with famous voices or make your own. It's like OpenAI's, but free and ready now.
Llama 3.2: Meta's latest AI is a jack-of-all-trades. It handles both images and text smoothly and is powered by Nvidia GPUs.
Smart Glasses: Meta's Ray-Bans are getting an upgrade. They'll see and hear what you do, helping find lost cars or pick outfits. They'll even translate talks in real-time.
AI Studio: Create a digital twin that looks and sounds like you. It can chat for you and dub your videos in different languages.
Orion: The big reveal. These are AR glasses in normal specs' clothing. You'll see 3D holograms of people and control them with thoughts and hand moves. Orion could be Meta's iPhone moment. It's not just a new gadget, but a whole new way to use tech. Zuckerberg's team spent almost 10 years on this in secret.
These changes aren't just cool - they could reshape how we interact with the world. Meta's not just catching up; it's aiming to leapfrog the competition.
Berkeley Startup Aims to Enhance LLMs with Long-Term Memory
I can totally relate to AI assistants, as I always forget things and wander around cluelessly!
It’s so annoying, and while I have to live with the consequences, AI doesn’t have to suffer through this anymore.
You know how sometimes you forget your friend’s birthday, and they never let you hear the end of it?
Now imagine if your AI assistant forgot your name every single time you asked it a question.
That’s the reality for most AI models today!
They suffer from a severe case of amnesia, losing track of our lives and conversations as soon as we log off.
Enter Letta, the brainchild of Berkeley PhD students Sarah Wooders and Charles Packer.
They’re tackling this memory dilemma with a fresh approach, so that AI models can remember the details that matter, like your past chats and preferences.
Why It Matters:
Think of an AI that can remember your coffee order and adjust its responses accordingly.
The use of long-term memory can improve customer service and healthcare symptom tracking.
Letta's innovation could radically improve our interactions with AI. It could bring an end to awkward “Who are you again?” moments.
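The core idea behind long-term memory is simpler than it sounds: persist key facts between sessions, then inject them back into the model's context on the next visit. Here's a minimal sketch in Python to illustrate the concept. Note that this is a toy example I made up, not Letta's actual API, and the class and file names are hypothetical:

```python
import json
from pathlib import Path

class MemoryStore:
    """Persist user facts between chat sessions (illustrative toy, not Letta's API)."""

    def __init__(self, path="memory.json"):
        self.path = Path(path)
        # Reload any facts saved by a previous session.
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, key, value):
        """Save a fact to disk so it survives the end of the session."""
        self.facts[key] = value
        self.path.write_text(json.dumps(self.facts))

    def build_prompt(self, question):
        """Inject remembered facts into the model's context."""
        memory = "\n".join(f"- {k}: {v}" for k, v in self.facts.items())
        return f"Known about the user:\n{memory}\n\nUser: {question}"

store = MemoryStore()
store.remember("name", "Alex")
store.remember("coffee_order", "oat-milk latte")
print(store.build_prompt("What should I order today?"))
```

A fresh `MemoryStore()` created tomorrow would reload the same facts from disk, which is exactly the "no more amnesia" behavior described above, just in miniature.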
😀 AI Recap
Major Restructuring at OpenAI
Key executives are leaving OpenAI, raising concerns within the AI community.
Key Points:
Mira Murati Steps Down:
The CTO of OpenAI, Mira Murati, resigned after over six years to focus on her personal projects.
Additional Departures:
Shortly after, Chief Research Officer Bob McGrew and VP of Research Barret Zoph also announced they were leaving the company.
Shift to For-Profit Model:
Reports suggest OpenAI is finalizing plans to transition into a for-profit entity. CEO Sam Altman is expected to receive equity in the company for the first time.
My Take:
These departures and structural changes could significantly impact OpenAI's future. Although the company may benefit from the move to a for-profit model, losing key leadership figures like Murati poses challenges.
🪄 AI Creation
🔹 AI Learning
How to Turn Your Favorite Images into Videos with PixVerse
Turn your favourite images into videos in a snap—no software, no fees, just your creativity!
⇨ Step 1:
Go to PixVerse.
⇨ Step 2:
Click “Create” on the main page.
⇨ Step 3:
Select “Image” to access the Magic Brush feature.
⇨ Step 4:
Upload the image you want to use for video generation.
⇨ Step 5:
Click “Magic Brush” to enter the editing page and start creating!
♨️ What’s Hot In AI
Meta Launches Llama 3.2: A Game-Changer for AI on Edge Devices
Meta has unveiled Llama 3.2, a major upgrade for AI on edge devices and vision tasks.
Key Highlights:
- New Model Sizes:
It includes lightweight text-only models (1B and 3B) and larger vision models (11B and 90B).
- Edge and Mobile Optimization:
The 1B and 3B models are designed for Qualcomm and MediaTek hardware.
They support a context length of up to 128K tokens, which is ideal for tasks like summarization and instruction following.
- Enhanced Vision Capabilities:
The 11B and 90B models excel at image understanding. They can perform complex tasks like document analysis and visual reasoning better than many alternatives, including Claude 3 Haiku.
- Customizable and Open:
Llama 3.2 models are open for fine-tuning. Developers can use tools like Torchtune and deploy them locally with Torchchat or test them via Meta’s smart assistant.
- Llama Stack Distributions:
Meta is releasing Llama Stack to simplify development across different environments. This makes it easier to deploy advanced AI applications like retrieval-augmented generation (RAG).
- Open Access:
The models are available on Meta's Llama site, Hugging Face, and partner platforms.
Llama 3.2 provides developers with efficient, customizable AI models. It enhances both edge and cloud-based applications, pushing the limits of AI development.
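Retrieval-augmented generation, which Llama Stack aims to simplify, boils down to two steps: fetch the documents most relevant to a question, then stuff them into the model's prompt as context. Here's a toy sketch of that flow using naive keyword-overlap scoring; real RAG systems use vector embeddings and a vector store, and the document snippets here are just made-up examples:

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query (toy scoring,
    not what production RAG systems actually do)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Assemble the retrieved snippets and the question into one prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Llama 3.2 supports a 128K-token context window.",
    "The 1B and 3B models target Qualcomm and MediaTek hardware.",
    "PixVerse turns images into videos.",
]
print(build_rag_prompt("What context window does Llama 3.2 support?", docs))
```

The prompt that comes out would then be sent to a model like Llama 3.2; grounding the answer in retrieved text is what makes RAG useful for questions about private or up-to-date data.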
📰 AI News
Spotify is launching an AI playlist feature in the US. Users can create custom playlists with prompts.
OpenAI has released an open-source AI evaluation tool. It can assess LLM performance in 14 languages, including German, Bengali, and Arabic.
Microsoft has launched a platform called Correction. It highlights AI-generated text that may be factually incorrect, helping reduce hallucinations.
Snap is using Gemini to power its chatbot. This is part of a larger effort to enhance its AI capabilities.
Researchers at Harvard Medical School have developed an AI model named TxGNN. This model is capable of identifying existing drugs, which can be repurposed to treat rare and neglected diseases.
I hope these updates got you as excited as they got me.
Catch you next time, same time, same place, with new mind-blowing AI updates!
Bye bye!