osmosis-ai/Osmosis-Structure-0.6B

Public · 0.8B params · BF16 · 40960 context length · License: apache-2.0
Overview

Osmosis-Structure-0.6B: Specialized SLM for Structured Outputs

Osmosis-Structure-0.6B is a 0.6-billion-parameter small language model (SLM) developed by osmosis-ai and engineered specifically for generating highly accurate structured outputs. Built on Qwen3-0.6B, it is trained with a methodology that emphasizes structured output generation, enabling it to excel at tasks requiring precise information extraction.

Key Capabilities & Differentiators

  • Optimized for Structured Output: Designed to produce well-formatted, structured responses by focusing on value extraction for declared keys.
  • Enhanced Mathematical Reasoning: Demonstrates significant performance improvements in challenging mathematical reasoning benchmarks like Math DAPO 17K and AIME 1983-2024 when used with osmosis-enhanced structured generation.
  • Compact and Efficient: Despite its small size, it achieves strong extraction accuracy, making it suitable for applications where resource efficiency is critical.
  • Unique Training Approach: Trained using reinforcement learning on approximately 500,000 JSON-to-natural language pairs, including reasoning traces and natural language reports, with per-sample schema integration.

Recommended Usage

This model is best used with inference engines that support structured output, such as SGLang or Ollama, to maximize its ability to extract structured information from complex inputs like reasoning traces.