Hirundo Raises $8M to Tackle AI Hallucinations and Bias with Machine Unlearning

Hirundo raises $8 million to develop machine unlearning technology that removes AI hallucinations and biases, offering enterprises a more reliable and efficient way to improve AI model safety.

Tackling AI Hallucinations and Bias with Machine Unlearning

Hirundo, the pioneering startup focused on machine unlearning, has secured $8 million in seed funding to address critical challenges in artificial intelligence such as hallucinations, bias, and embedded data vulnerabilities. The funding round was led by Maverick Ventures Israel, with participation from SuperSeed, Alpha Intelligence Capital, Tachles VC, AI.FUND, and Plug and Play Tech Center.

What is Machine Unlearning?

Unlike conventional mitigation methods that refine or filter a model's outputs, Hirundo's machine unlearning technique allows trained AI models to "forget" specific knowledge or behaviors. This enables organizations to surgically remove hallucinations, biases, proprietary information, and vulnerabilities from deployed models without the costly and time-consuming process of retraining from scratch.

Hirundo compares this to AI neurosurgery: precisely locating the parameters responsible for undesired outputs and excising them while preserving overall performance. This capability allows enterprises to confidently remediate AI models already in production.
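
Hirundo has not published the internals of its technique, but a minimal sketch of one unlearning approach from the research literature, gradient ascent on a "forget set", illustrates the core idea of editing a trained model in place instead of retraining it. The model name, data, and learning rate below are illustrative assumptions, not Hirundo's actual configuration.

```python
# Minimal sketch of machine unlearning via gradient ascent on a "forget set".
# This is NOT Hirundo's published method; the model id, forget data, and
# learning rate are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # placeholder; in practice a model like Llama or Mistral
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Statements the deployed model should no longer reproduce
# (e.g., hallucinated "facts" or leaked proprietary text).
forget_set = [
    "Placeholder hallucinated statement the model keeps emitting.",
]

model.train()
for text in forget_set:
    batch = tokenizer(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    # Negate the loss: gradient *ascent* pushes the model away from
    # assigning high likelihood to the content being forgotten.
    (-loss).backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice such updates are paired with a loss on a "retain set" so the rest of the model's behavior is preserved, which is what the neurosurgery analogy implies: remove the target, keep the surrounding capability intact.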

The Danger of AI Hallucinations

AI hallucinations occur when models generate false or misleading information that appears credible. Such hallucinations pose major risks in enterprise settings, potentially leading to legal issues, operational mistakes, and reputational harm. Research shows 58 to 82% of AI-generated "facts" in legal queries contain hallucinations.

Common mitigation tactics like guardrails and fine-tuning tend to mask symptoms rather than eliminate root causes, especially when hallucinations are deeply embedded in model weights. Hirundo’s method directly removes problematic knowledge and behaviors from the model itself.

A Versatile Platform for Diverse AI Systems

Hirundo’s platform supports integration with various AI architectures, including generative and non-generative models, and handles multiple data types such as natural language, vision, radar, LiDAR, tabular data, speech, and time series.

It automatically detects mislabeled data, outliers, and ambiguities, enabling users to debug faulty outputs by tracing them back to problematic training data or learned behaviors, which can then be instantly unlearned.
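
The article does not say how this detection works under the hood. One common technique from the data-quality literature, sketched below, flags examples where a cross-validated classifier confidently disagrees with the recorded label; the function name and threshold here are hypothetical, and this is not a description of Hirundo's detector.

```python
# Hedged sketch: flag likely mislabeled training examples by finding points
# where cross-validated predictions confidently disagree with the given
# label. Illustrative only; not Hirundo's actual detector.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def flag_suspect_labels(X, y, threshold=0.9):
    """Indices where the model predicts a different class with high confidence."""
    probs = cross_val_predict(
        LogisticRegression(max_iter=1000), X, y, cv=5, method="predict_proba"
    )
    predicted = probs.argmax(axis=1)
    confidence = probs.max(axis=1)
    return np.where((predicted != y) & (confidence >= threshold))[0]

# Example with synthetic data and a few deliberately flipped labels:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
y[:5] = 1  # corrupt five labels
print(flag_suspect_labels(X, y))  # surfaces most of the flipped indices
```

Examples a model confidently "corrects" in this way are natural candidates for relabeling, or, in Hirundo's framing, for unlearning.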

This SOC 2-certified system can be deployed as SaaS, in a private cloud (VPC), or fully air-gapped on premises, making it suitable for sensitive industries such as finance, healthcare, and defense without disrupting existing workflows.

Proven Results and Expanding Model Support

Hirundo has demonstrated significant improvements in popular large language models such as Llama and DeepSeek, achieving a 55% reduction in hallucinations, a 70% decrease in bias, and an 85% drop in successful prompt injection attacks. These results were validated on independent benchmarks including HaluEval, PurpleLlama, and the Bias Benchmark for QA (BBQ).

While currently optimized for open-source models such as Llama, Mistral, and Gemma, Hirundo is actively working to support closed, API-gated models such as ChatGPT and Claude, extending its applicability across enterprise AI.

Experienced Founding Team

Founded in 2023, Hirundo is led by CEO Ben Luria, CTO Michael Leybovich, and Chief Scientist Prof. Oded Shmueli, a team bridging academia and enterprise AI. Their combined expertise covers foundational AI research, real-world deployments, and secure data management, positioning them to tackle AI reliability challenges effectively.

Investor Confidence in Trustworthy AI

Investors recognize Hirundo’s mission to create trustworthy, enterprise-ready AI. Maverick Ventures Israel’s Yaron Carni emphasized the critical need for removing hallucinated or biased intelligence to prevent real-world harm. Similarly, SuperSeed’s Mads Jensen highlighted the importance of trustworthy models for effective AI transformation.

Addressing the Growing AI Reliability Crisis

As AI becomes integral to critical sectors, issues like hallucinations, bias, and sensitive data exposure threaten trust and safety. Machine unlearning offers a scalable, compliant solution by allowing targeted removal of problematic behaviors from models already in production, making it an emerging essential tool for enterprises and governments alike.
