Marktechpost Launches AI2025Dev: A New Analytics Platform
Discover Marktechpost's AI2025Dev, an analytics tool revolutionizing AI data access for researchers and developers.
Overview of AI2025Dev
Marktechpost has released AI2025Dev, its 2025 analytics platform that provides AI developers and researchers with a queryable dataset filled with essential insights about model releases, openness, training scale, benchmark performance, and those involved in the ecosystem. Based in California, Marktechpost focuses on machine learning, deep learning, and data science research.
What's New in This Release
The 2025 release of AI2025Dev expands coverage across two layers:
- Release analytics: model and framework launches, licensing posture, vendor activity, and feature-level segmentation.
- Ecosystem indexes: curated "Top 100" collections connecting models to papers, researchers, and investors:
  - Top 100 research papers
  - Top 100 AI researchers
  - Top AI startups
  - Top AI founders
  - Top AI investors
  - Funding views relating investors to companies
AI Releases in 2025
The "AI Releases in 2025" overview is backed by a structured market map containing 100 tracked releases from 39 active companies. The dataset is normalized into a consistent schema with fields including name, company, type, license, flagship, and release_date.
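As a rough illustration of what one record in such a schema could look like, here is a minimal Python sketch. The field names follow the article; the example values, the Release class name, and the specific license and type strings are assumptions for illustration only.

```python
# Minimal sketch of the release schema described above.
# Field names follow the article; the example values are illustrative only.
from dataclasses import dataclass
from datetime import date

@dataclass
class Release:
    name: str          # model or framework name
    company: str       # releasing organization
    type: str          # e.g. "LLM", "Agentic Model", "Vision Model"
    license: str       # e.g. "Open Source", "Open Weights", "Proprietary"
    flagship: bool     # whether the vendor positions it as a flagship release
    release_date: date

example = Release(
    name="ExampleModel-7B",      # hypothetical entry, not from the dataset
    company="ExampleCo",
    type="LLM",
    license="Open Weights",
    flagship=False,
    release_date=date(2025, 3, 1),
)
```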
Key Aggregate Indicators:
- Total releases: 100
- Open share: 69% (combined Open Source and Open Weights)
- Flagship models: 63
- Active companies: 39
Model categories include LLM (58), Agentic Model (11), Vision Model (8), and more.
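For readers who want to reproduce indicators like these over their own slice of the data, the sketch below shows one way to derive them from a list of records shaped like the schema above. The helper function and its output keys are assumptions; the numbers it produces depend on whatever data is loaded, not on AI2025Dev's published figures.

```python
# Sketch of deriving aggregate indicators from Release records
# (see the schema sketch above); results reflect the input data only.
from collections import Counter

def aggregate(releases):
    total = len(releases)
    open_count = sum(r.license in ("Open Source", "Open Weights") for r in releases)
    return {
        "total_releases": total,
        "open_share_pct": round(100 * open_count / total, 1) if total else 0.0,
        "flagship_models": sum(r.flagship for r in releases),
        "active_companies": len({r.company for r in releases}),
        "by_category": Counter(r.type for r in releases),
    }
```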
Key Findings 2025
The "Key Findings 2025" layer highlights three recurring technical themes:
- Open weights adoption: A growing share of releases ship under Open Source or Open Weights licenses.
- Agentic and tool-using systems: Growth of models built for tool use and task execution.
- Efficiency and compression: A push toward smaller models that retain performance.
LLM Training Data Scale
A dedicated visualization for LLM training data scale in 2025 spans 1.4T to 36T tokens, aligned with a release timeline for comparative analysis.
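A simple way to reproduce this kind of comparison is to line up per-model token counts against release dates, as in the sketch below. The (model, date, tokens) tuples are hypothetical, chosen only to fall inside the 1.4T to 36T range cited above.

```python
# Sketch of aligning training-data scale with a release timeline.
# All entries are hypothetical placeholders, not AI2025Dev data.
from datetime import date

training_tokens = [
    ("model-a", date(2025, 1, 15), 1.4e12),
    ("model-b", date(2025, 6, 2), 15e12),
    ("model-c", date(2025, 11, 20), 36e12),
]

for name, released, tokens in sorted(training_tokens, key=lambda r: r[1]):
    print(f"{released.isoformat()}  {name:<10}  {tokens / 1e12:>5.1f}T tokens")
```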
Performance Benchmarks
The Analytics section includes a Performance Benchmarks view with scores normalized across key axes such as MMLU, HumanEval, and GSM8K, enabling consistent baseline comparisons.
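The article does not specify which normalization the platform applies; the sketch below uses one common choice, min-max scaling each benchmark axis to [0, 1], so models can be compared against a shared baseline. The scores shown are hypothetical.

```python
# Min-max normalization per benchmark axis (one possible scheme,
# not necessarily the one AI2025Dev uses). Scores are hypothetical.
raw_scores = {
    "model-a": {"MMLU": 82.0, "HumanEval": 71.0, "GSM8K": 88.0},
    "model-b": {"MMLU": 76.5, "HumanEval": 64.0, "GSM8K": 80.0},
    "model-c": {"MMLU": 88.0, "HumanEval": 90.0, "GSM8K": 95.0},
}

def min_max_normalize(scores):
    benchmarks = {b for per_model in scores.values() for b in per_model}
    normalized = {m: {} for m in scores}
    for b in benchmarks:
        values = [per_model[b] for per_model in scores.values()]
        lo, hi = min(values), max(values)
        for m, per_model in scores.items():
            normalized[m][b] = (per_model[b] - lo) / (hi - lo) if hi > lo else 0.0
    return normalized
```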
Model Leaderboard and Comparison
Model selection is improved by a Model Leaderboard that aggregates scores and metadata across benchmark evaluations and supports filtering.
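Conceptually, a leaderboard view combines per-benchmark scores into a single rank key and filters on metadata. The sketch below shows that idea under the same assumed field names as earlier; it is illustrative and not the platform's actual implementation.

```python
# Sketch of a leaderboard: rank by mean benchmark score, filter on metadata.
# Entries and the license filter are illustrative only.
entries = [
    {"name": "model-a", "license": "Open Weights", "type": "LLM",
     "scores": {"MMLU": 82.0, "HumanEval": 71.0, "GSM8K": 88.0}},
    {"name": "model-b", "license": "Proprietary", "type": "LLM",
     "scores": {"MMLU": 76.5, "HumanEval": 64.0, "GSM8K": 80.0}},
]

def leaderboard(entries, license_filter=None):
    rows = [e for e in entries if license_filter is None or e["license"] == license_filter]
    rows.sort(key=lambda e: sum(e["scores"].values()) / len(e["scores"]), reverse=True)
    return rows

for e in leaderboard(entries, license_filter="Open Weights"):
    mean = sum(e["scores"].values()) / len(e["scores"])
    print(f'{e["name"]}: {mean:.1f}')
```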
Top 100 Indexes
The update includes navigable modules for:
- Research papers
- AI researchers
- AI startups and founders
- AI investors
Availability
The updated platform is now available at AI2025Dev with no signup required. It supports both fast scanning and analyst-grade workflows built around quantitative comparison.