Featherless Raises $20M Series A to Power Open-Source AI Infrastructure
Today, we're announcing a $20 million Series A financing round to accelerate our mission: making open-source AI practical, sovereign, reliable, and effortless at any scale.
AI Independence
AI is entering a new phase. The question is no longer whether models work — it's how they're run, where they're deployed, and who controls them.
Today, AI is concentrating — in a few countries, a few companies, a few chips. The models most teams depend on are controlled by a handful of labs. The compute to run them is controlled by even fewer. The result is an AI ecosystem that calls itself open but still funnels power, pricing, and access through the narrowest of bottlenecks.
Open-source is the only real check on that, and it only works if the infrastructure to run it actually exists.
Featherless was built to provide that infrastructure, and to keep it open for the many.
We are the neutral layer for open models. Unaligned with any hyperscaler, any chipmaker, any proprietary ecosystem. Our infrastructure spans the EU and the US. Our founders come from three continents (Canada, Singapore, and Australia), with teams in San Francisco, Toronto, Singapore, and across Europe.
The Fastest-Growing Open Model Platform
Featherless is the fastest-growing inference partner in the Hugging Face ecosystem, serving over 30,000 open models across language, vision, audio, and multimodal use cases. Teams find and run the model they need instantly, without managing infrastructure.
But serving this many models requires a fundamentally different optimization approach.
We didn't just build an inference engine; we built an AI optimization stack: inference, model, and workflow optimization working together as a system. This is how we deliver performance and cost efficiency that closed platforms can't match on a single model, let alone thirty thousand. And it's informed by deep research: our team created RWKV, one of the most significant open-source model architectures and a novel approach that challenges transformer dominance. Through Recursal Labs (our research group), we continue pushing the frontier of foundational models. We don't just serve open models. We build them.
Open Hardware, Open Stack
True AI independence requires hardware diversity. Through a strategic partnership with AMD, we ensure that the world's most significant open-source models run natively on AMD's open-source ROCm platform.
From model to metal, the entire stack can be open. For builders, that means lower inference costs, more available compute, and no lock-in.
The Home for Open-Source AI Apps
The next wave of AI is an ecosystem of applications. Agentic workflows that reason, plan, and act. Personal AI assistants that belong to you. Tools built by independent developers that anyone can use.
Today, the apps we love funnel builders back into closed model ecosystems. Featherless breaks that pattern.
We've shipped an open-source agent runtime, the foundation for a new generation of AI applications powered entirely by open models. No dependency on closed APIs. Full control over your models, your data, your deployment.
The open-source app ecosystem that emerges on this runtime — tools, agents, workflows built by the community — becomes something everyone benefits from.
What This Funding Enables
With this $20 million investment, we are expanding our global footprint and deepening our technical integrations. Our roadmap includes:
Expanding the model library
So new open models become usable immediately, and model creators have a direct path to users.
Shipping the open agent runtime
Giving builders an open-source foundation for the next generation of AI applications.
Deepening the optimization stack
Pushing inference costs down through systems-level innovation across hardware architectures, including full AMD/ROCm support.
Scaling enterprise deployments
Private environments, regional sovereignty, and production-grade assurance for the world's most demanding workloads.
If you're building with open models and want freedom, efficiency, and trust: run it on Featherless.
If you want to help build the open-source AI infrastructure layer: we're hiring.




