A guest post by the creator of this substack!
Article preview:
Right now, AI is eating the world.
And by AI, I mean Transformers. Practically all the big breakthroughs in AI over the last few years are due to Transformers.
Mamba, however, belongs to an alternative class of models called State Space Models (SSMs). Importantly, Mamba is the first SSM to promise performance (and, crucially, scaling laws) similar to the Transformer's while remaining feasible at long sequence lengths (say, 1 million tokens). To achieve this long context, the Mamba authors remove the "quadratic bottleneck" in the Attention Mechanism. Mamba also runs fast – like "up to 5x faster than Transformer fast".
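To make the "quadratic bottleneck" concrete, here is a minimal toy sketch (illustrative NumPy only, not the Mamba implementation; the dimensions and matrices A, B, C are made-up assumptions): self-attention builds an L×L score matrix, so compute and memory grow quadratically in the sequence length L, whereas an SSM-style recurrence carries a fixed-size state forward one token at a time and scales linearly.

```python
# Toy sketch (not the authors' code): quadratic attention vs linear SSM recurrence.
import numpy as np

L, d = 8, 4                      # toy sequence length and model width
x = np.random.randn(L, d)        # input sequence

# Self-attention compares every token with every other token,
# so the score matrix has shape (L, L): cost grows quadratically in L.
scores = x @ x.T                                                # (L, L)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
attn_out = weights @ x                                          # (L, d)

# An SSM processes the sequence as a recurrence h_t = A h_{t-1} + B x_t,
# y_t = C h_t, carrying only a fixed-size state: cost grows linearly in L.
n = 6                            # hypothetical state size
A = 0.9 * np.eye(n)
B = np.random.randn(n, d)
C = np.random.randn(d, n)
h = np.zeros(n)
ssm_out = []
for t in range(L):
    h = A @ h + B @ x[t]         # constant work per token
    ssm_out.append(C @ h)
ssm_out = np.stack(ssm_out)      # (L, d)

print(scores.shape, ssm_out.shape)  # (8, 8) score matrix vs constant-size state
```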
Here we'll discuss:
The advantages (and disadvantages) of Mamba vs Transformers,
Analogies and intuitions for thinking about Mamba, and
What Mamba means for Interpretability, AI Safety and Applications
Read More in The Gradient