Hello Engineering Leaders and AI Enthusiasts!

This newsletter brings you the latest AI updates in a crisp manner! Dive in for a quick recap of everything important that happened around AI in the past two weeks.

And a huge shoutout to our amazing readers. We appreciate you😊

In today’s edition:

🤖 ChatGPT takes on Alexa with new Tasks feature
🏛️ OpenAI releases an ambitious plan for America’s AI future
🔬 Berkeley team builds o1-level AI model for under $500
🤖 Musk’s Grok breaks free from X, goes mobile
🧠 Omi AI assistant claims to read your thoughts
💻 Nvidia drops major AI gaming and auto tech lineup
🤖 Altman: AI agents will join the workforce in 2025
📚 Knowledge Nugget: AI still lacks “common” sense, 70 years later

Let’s go!


ChatGPT takes on Alexa with new Tasks feature

OpenAI is rolling out ChatGPT’s new “Tasks” feature, which lets users set reminders, schedule appointments, and create recurring activities. Currently in beta for Plus, Pro, and Team subscribers, the feature can handle everything from one-time grocery reminders to daily language learning sessions, complete with notifications and progress tracking.

The launch follows OpenAI’s SearchGPT rollout last month, which showed the company’s aggressive push into territory traditionally dominated by Big Tech. Users can manage their tasks through a dedicated dashboard, with options to modify or postpone activities as needed. The company plans to roll out the feature to all users soon, and more capabilities are in the pipeline.

Why does it matter?

OpenAI is positioning ChatGPT as your go-to digital assistant. By combining scheduling capabilities with its conversational AI prowess, ChatGPT moves closer to a truly intelligent personal assistant that can both remember your appointments and help you prepare for them.

Source


OpenAI releases an ambitious plan for America’s AI future

OpenAI has released a comprehensive “AI in America” blueprint outlining how the U.S. can maintain its AI leadership while ensuring equitable access and development. The proposal emphasizes three key pillars: national competitiveness, security measures, and infrastructure development.

The plan calls for unified federal oversight of AI and special economic zones to fast-track development. The blueprint suggests directing $175 billion into AI infrastructure and creating regional tech hubs nationwide. OpenAI’s vision hinges on collaboration between government, industry, and universities to build what CEO Sam Altman describes as “the Intelligence Age” – a future where AI capabilities are accessible to all Americans.

Why does it matter?

With global AI development outpacing regulation, OpenAI’s attempt to shape policy could impact America’s competitiveness. The blueprint’s emphasis on federal coordination over state-by-state regulation could set precedents for how AI is governed in the U.S. for years to come.

Source


Berkeley team builds o1-level AI model for under $500

UC Berkeley’s NovaSky team has created Sky-T1-32B-Preview, an open-source AI model that matches OpenAI’s o1-preview on key reasoning and coding benchmarks at a fraction of the typical cost. The model was trained for just $450 using Lambda Cloud H100 GPUs.

The team generated high-quality training data with QwQ-32B-Preview, used rejection sampling to filter out incorrect solutions, and fine-tuned Qwen2.5-32B-Instruct on the verified traces. Their fully open-source approach includes sharing all data, code, and model weights, a stark contrast to the closed nature of models like o1 and Gemini 2.0.
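For readers curious about the shape of that pipeline, here is a minimal, hypothetical sketch of rejection-sampling data curation: generate candidate reasoning traces with a teacher model, keep only those whose final answer matches the reference, and use the survivors as fine-tuning data. The helper names and the answer-extraction heuristic below are illustrative assumptions, not NovaSky’s actual code.

```python
# Illustrative sketch of rejection-sampling data curation (not NovaSky's code).
# `generate` stands in for any teacher-model call, e.g. QwQ-32B-Preview.

def extract_final_answer(trace: str) -> str:
    """Heuristic: take whatever follows the last 'Final answer:' marker."""
    return trace.rsplit("Final answer:", 1)[-1].strip()

def rejection_sample(problems, generate, n_samples=4):
    """Keep one verified reasoning trace per problem, discard the rest."""
    kept = []
    for prob in problems:
        for _ in range(n_samples):
            trace = generate(prob["question"])
            if extract_final_answer(trace) == prob["answer"]:
                kept.append({"prompt": prob["question"], "completion": trace})
                break  # one correct trace per problem is enough for SFT
    return kept

# The surviving (prompt, completion) pairs would then feed a standard
# supervised fine-tuning run on a base model such as Qwen2.5-32B-Instruct.
```

The appeal of this recipe is that checking answers is cheap relative to generating them, so even a small compute budget can yield a clean reasoning dataset.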

Why does it matter?

While tech giants spend millions on proprietary systems, Berkeley’s sub-$500 result could accelerate AI research and development, particularly in academic and open-source communities that have been priced out of training advanced models.

Source


Musk’s Grok breaks free from X, goes mobile

xAI is expanding its chatbot Grok beyond X (formerly Twitter) with a new standalone iOS app. The app, now available in multiple countries, including the U.S., Australia, and India, offers real-time web browsing, text generation, and, notably, unrestricted image generation capabilities – including the ability to create images using public figures and copyrighted material.

Why does it matter?

The company is also developing Grok.com for web-based access, a clear shift from its previous X-exclusive strategy. Untethering Grok from X dramatically widens its potential user base and could force competitors to rethink their own distribution strategies.

Source


Omi AI assistant claims to read your thoughts

At CES 2025, Silicon Valley startup Based Hardware launched Omi, an $89 wearable “brain interface” that promises to be your personal AI sidekick. The device, worn as a necklace or attached to the head, claims to understand your thoughts while handling tasks like answering questions, summarizing conversations, and creating to-do lists through GPT-4o integration.

The first version shipping now is audio-only, with the brain-interface module coming in Q2 2025. The company’s promotional materials show the device helping students flirt and cheat on exams.

Why does it matter?

Despite the ambitious brain-reading claims, Based Hardware has been vague about how this technology actually works. The lack of technical details and questionable marketing raises concerns about whether this is genuinely helpful tech or just another AI pipe dream.

Source


Nvidia drops major AI gaming and auto tech lineup

At CES 2025, Nvidia CEO Jensen Huang revealed a suite of AI innovations headlined by the GeForce RTX 50 Series GPUs and the new Cosmos platform. The RTX 5090, packing 92 billion transistors and delivering 3,352 trillion AI operations per second, leads the new GPU lineup.

Beyond gaming, Nvidia announced Cosmos, a platform for physical AI that aims to transform robotics and autonomous vehicles. The company also revealed Project DIGITS, described as its “smallest yet most powerful AI supercomputer,” and expanded its partnership with Toyota to develop next-generation vehicles using NVIDIA DRIVE AGX technology.

Why does it matter?

While competitors focus on narrow applications, Nvidia is building an ecosystem that connects gaming, autonomous vehicles, and robotics under one AI umbrella. This could make it the backbone of the next computing revolution, much as Intel was for the PC era.

Source


Altman: AI agents will join the workforce in 2025

In a reflective blog post, OpenAI CEO Sam Altman shared the company’s journey and vision for the future. The post highlights OpenAI’s growth from a quiet research lab to a company with over 300 million weekly active users. Altman expressed newfound confidence in building AGI (artificial general intelligence) and boldly predicted that AI agents will begin meaningfully impacting company operations in 2025.

OpenAI is now aiming for superintelligence: AI systems that can accelerate scientific discovery and innovation. Altman also emphasized the company’s commitment to iterative deployment, which allows society to adapt gradually to AI advancements.

Why does it matter?

These claims may sound too ambitious, but OpenAI’s accelerated timeline suggests that businesses may need to prepare for dramatic change sooner than anticipated.

Source


Enjoying the latest AI updates?

Refer your pals to subscribe to our newsletter and get exclusive access to 400+ game-changing AI tools.

Refer a friend

When you use the referral link above or the “Share” button on any post, you’ll get the credit for any new subscribers. All you need to do is send the link via text or email or share it on social media with friends.


Knowledge Nugget: AI still lacks “common” sense, 70 years later

The authors, both prominent AI researchers, argue that despite recent advances in AI technology, machines still lack basic common-sense reasoning, a problem first identified by John McCarthy in 1959. They highlight how even the most advanced AI systems struggle with simple reasoning tasks that humans find obvious.

The researchers examine three popular approaches to the problem: physical simulation, large language models, and video generation systems like Sora. They cite examples ranging from AI’s failure to grasp spatial relationships (such as what an astronaut would see from the moon) to practical reasoning breakdowns (such as an AI agent that, unable to find a particular user in a chat system, tried renaming other users instead).

Why does it matter?

This lack of common sense poses a major roadblock. Without the ability to reason about basic physical and social realities, AI systems might continue to make unpredictable and problematic decisions – suggesting we need to rethink our approach to AI rather than just building bigger models.

Source


What Else Is Happening❗

🤖 Microsoft updates AutoGen to v0.4, adding Magentic-One, a multi-agent orchestration system that coordinates specialist AI agents for complex tasks.

🌐 U.S. launches new comprehensive AI chip export controls, creating a tiered global access system that favors close allies while restricting China and others.

🤖 OpenAI begins hiring for its new robotics hardware division to develop general-purpose robots with advanced AI.

🎧 Google launches ‘Daily Listen’ in Search Labs, an AI feature that creates personalized 5-minute podcasts from users’ search history and interests.

🎬 Adobe Research & HKUST unveil TransPixar, an AI system that can generate transparent visual effects in AI videos. The model creates realistic smoke, reflections, and portals with minimal training data.

🤖 Panasonic unveils its ‘Panasonic Go’ AI transformation strategy, partnering with Anthropic to launch Umi, an AI wellness coach, and integrate Claude across operations.

🤖 Samsung revealed its “AI for All” vision at CES 2025. The company is integrating AI features across its product ecosystem, including TVs with Vision AI, Galaxy Book5 AI PCs, and smart appliances.

🔒 Harvard study reveals AI systems match human experts in phishing campaigns, achieving 54% success rates at 1/50th the cost. The research shows AIs can automate target profiling and craft highly persuasive emails.

🧠 OpenAI’s Sam Altman sparks speculation with a cryptic tweet, “near the singularity; unclear which side.” The comments follow OpenAI’s o3 model release.

🧠 NeuroXess achieves breakthroughs in brain-computer interfaces. The system allows patients to interact with AI models, control robots, and communicate with 71% accuracy using brain signals.


New to the newsletter?

The AI Edge keeps engineering leaders & AI enthusiasts like you on the cutting edge of AI. From machine learning to ChatGPT to generative AI and large language models, we break down the latest AI developments and how you can apply them in your work.

If you enjoyed this newsletter, please subscribe to get AI updates and news directly sent to your inbox for free!

Thanks for reading, and see you next week! 😊

