Hermes 4 14B: A Frontier Reasoning Model
Hermes 4 14B, developed by Nous Research, is a 14 billion parameter, hybrid-mode reasoning model built upon the Qwen 3 architecture. It is specifically designed for enhanced reasoning capabilities and user alignment. This iteration features a substantially expanded post-training corpus, growing from 1 million samples and 1.2 billion tokens to approximately 5 million samples and 60 billion tokens, blending both reasoning and non-reasoning data.
Key Capabilities
- Advanced Reasoning: Significant improvements in math, code, STEM, logic, and creative writing, with a hybrid reasoning mode that uses explicit <think>...</think> segments for deliberation.
- Schema Adherence & Structured Outputs: Trained to produce valid JSON for a given schema and to repair malformed objects, which is crucial for reliable function calling and tool use.
- Enhanced Steerability: Shows marked improvements in steerability, particularly in reducing refusal rates, allowing closer alignment with user values without censorship, as evidenced by its top performance on the RefusalBench benchmark.
- Function Calling & Tool Use: Supports function/tool calls within a single assistant turn, generated after the internal reasoning process, with easy parsing via <tool_call>...</tool_call> tags.
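The tag conventions above lend themselves to simple client-side parsing. The following is a minimal sketch, assuming the model emits <think>...</think> deliberation segments followed by <tool_call>...</tool_call> blocks containing JSON; the sample response text and the parse_response helper are illustrative, not part of any official SDK.

```python
import json
import re

# Tag names come from the model card; everything else here is an assumption.
THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)
TOOL_CALL_RE = re.compile(r"<tool_call>(.*?)</tool_call>", re.DOTALL)

def parse_response(text: str):
    """Split a raw completion into the visible answer and parsed tool calls."""
    # Each <tool_call> body is expected to be a JSON object per the card.
    tool_calls = [json.loads(body.strip()) for body in TOOL_CALL_RE.findall(text)]
    # Strip deliberation and tool-call segments to get the user-facing text.
    answer = TOOL_CALL_RE.sub("", THINK_RE.sub("", text)).strip()
    return answer, tool_calls

# Hypothetical raw completion for illustration.
raw = (
    "<think>The user wants the weather; call the tool.</think>\n"
    '<tool_call>{"name": "get_weather", "arguments": {"city": "Berlin"}}</tool_call>\n'
    "Checking the weather for you."
)
answer, calls = parse_response(raw)
# answer -> "Checking the weather for you."
# calls  -> [{"name": "get_weather", "arguments": {"city": "Berlin"}}]
```

Because the model generates tool calls only after its reasoning segment, stripping <think> spans before display keeps the deliberation out of the user-facing answer while preserving the structured calls for dispatch.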
Good for
- Applications requiring robust reasoning and problem-solving across diverse domains like STEM and coding.
- Scenarios demanding structured outputs and reliable schema adherence, such as API interactions and data processing.
- Use cases where user alignment and steerability are paramount, offering a highly customizable and less restrictive AI experience.
- Developers integrating function calling and tool use into their LLM applications, benefiting from the model's explicit tool call generation.