
Apriel-Nemotron-15b-Thinker: Efficient AI Model Revolutionizing Enterprise Reasoning

ServiceNow introduces Apriel-Nemotron-15b-Thinker, a compact AI model that delivers high-performance reasoning at roughly half the memory footprint and 40% lower token consumption of larger models, optimized for enterprise deployment.

Powerful Reasoning with Compact Model Size

ServiceNow has launched Apriel-Nemotron-15b-Thinker, a 15-billion-parameter AI model designed to deliver high-level reasoning while remaining efficient in memory and token use. Despite being smaller than high-performing models such as QwQ-32b and EXAONE-Deep-32b, it matches or exceeds their performance on complex tasks relevant to enterprise applications.

Advanced Training Pipeline

The model was developed through a rigorous three-stage training process:

  • Continual Pre-training (CPT): Exposure to over 100 billion tokens from domains requiring deep reasoning, including mathematical logic, programming, scientific literature, and logical deduction.
  • Supervised Fine-Tuning (SFT): Fine-tuning on 200,000 high-quality demonstrations to enhance precision and accuracy.
  • Guided Reinforcement Preference Optimization (GRPO): A final refinement stage that aligns the model's outputs with preferred responses on business-critical tasks.
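ServiceNow has not published the exact GRPO objective, but reinforcement-style preference optimization methods commonly score each sampled completion relative to the other completions drawn for the same prompt, rather than against an absolute reward scale. The sketch below is purely illustrative of that ingredient; the function name and reward values are hypothetical, not taken from the Apriel training recipe.

```python
def group_relative_advantages(rewards, eps=1e-8):
    """Normalize rewards for a group of completions sampled from one prompt.

    Hypothetical helper: subtracting the group mean and dividing by the
    group standard deviation makes the policy update favor answers that
    beat their peers, independent of the reward model's absolute scale.
    """
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Three sampled answers to one prompt, scored by a preference model:
advantages = group_relative_advantages([1.0, 2.0, 3.0])
print(advantages)
```

Under this scheme the middle-scoring answer receives an advantage of zero, while better- and worse-than-average answers are pushed up and down symmetrically.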

Outstanding Performance and Efficiency

Apriel-Nemotron-15b-Thinker excels in various enterprise and academic benchmarks such as MBPP, BFCL, Enterprise RAG, MT Bench, MixEval, IFEval, Multi-Challenge, AIME-24, AIME-25, AMC-23, MATH-500, and GPQA. It achieves this while using approximately 50% less memory and consuming 40% fewer tokens than comparable models, significantly reducing inference costs and enabling deployment on standard enterprise hardware.
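The memory and token savings compound: under a naive cost model in which per-request cost scales with memory footprint times tokens generated (an assumption for illustration, not ServiceNow's accounting), the two figures combine to roughly a 70% reduction:

```python
def relative_inference_cost(memory_fraction: float, token_fraction: float) -> float:
    """Toy cost model: per-request cost ~ memory footprint x tokens generated.
    This multiplicative model is a simplifying assumption, not a vendor figure."""
    return memory_fraction * token_fraction

# 50% less memory and 40% fewer tokens vs. a larger baseline:
cost = relative_inference_cost(memory_fraction=0.5, token_fraction=0.6)
print(f"relative cost: {cost:.2f}")  # 0.30, i.e. roughly a 70% reduction
```

Actual savings depend on hardware, batching, and serving stack, but the direction of the estimate matches the article's claim that inference costs drop significantly.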

Enterprise and Real-World Applications

Designed specifically for practical deployment, the model supports real-time applications like coding assistants, business automation, and logical reasoning tools. Its optimized resource usage bridges the gap between advanced AI capabilities and feasible enterprise deployment without requiring large-scale infrastructure upgrades.

Key Highlights

  • 15 billion parameters with competitive performance against larger models.
  • Three-phase training involving CPT, SFT, and GRPO.
  • 50% memory savings and 40% token usage reduction compared to similar models.
  • Strong results across enterprise-specific and academic benchmarks.
  • Tailored for agentic and enterprise tasks, ideal for corporate automation and AI assistants.

Follow the model on Hugging Face and keep up with updates on Twitter.
