#Mamba-Transformer · 19/08/2025
Nemotron Nano 2: 128K-Context LLMs That Run Up to 6× Faster on a Single A10G
NVIDIA's Nemotron Nano 2 delivers hybrid Mamba-Transformer LLMs that run up to 6× faster and support 128K-token context on a single A10G GPU, with most training data and recipes open-sourced.