
Image: “Meta Is Building a Giant $800 Million Data Center in Idaho,” via Data Center Frontier.

The great datacenter boom vs. tariffs. Will AI capex be hit?

Welcome Back,

Today I want to cover some reports I’ve been reading and discuss the state of AI datacenters. Also, in case you missed it, you can view my 2025 articles so far. I want to provide as much value to my readers as I possibly can.


In Case You Missed It (ICYMI)

My short wrap-up of recent AI news, bundled here for quick consumption (4 minutes, 33 seconds):

In 2021, a researcher named Daniel Kokotajlo published a blog post called “What 2026 Looks Like,” in which he laid out what he thought would happen in AI over the next five years. Daniel worked as a governance researcher at OpenAI on scenario planning. More recently, he has teamed up with Scott Alexander (read their introduction) and others to release the strange, almost sci-fi AI 2027 project.

Watch the Podcast.

The Super-Intelligence Threat of 2027

The result of their collaboration is “AI 2027,” a report and website released this week that describes, in a detailed fictional scenario, what could happen if AI systems surpass human-level intelligence, which the authors apparently expect to happen in the next two to three years.

AI 2027 is a “comprehensive and detailed” (and hopelessly accelerationist) scenario forecast of the future of AI. It comes from a tradition of LessWrong thinkers and former OpenAI employees who appear to have AGI talking points, and for whom Dwarkesh (the viral YouTuber) appears to act as a PR booster. Podcasts here and here.

Read the Report

Animation source: Jacqui VanLiew; Getty Images, via Wired.

“We’re exploring the frontiers of AGI, prioritizing readiness, proactive risk assessment, and collaboration with the wider AI community.” — Google DeepMind

Read the DeepMind Blog

Setting the Record Straight


Read more

Read More in AI Supremacy