qingy2024/SynGen-14B

Text Generation · Open Weights · Cold

  • Concurrency Cost: 1
  • Model Size: 14B
  • Quantization: FP8
  • Context Length: 32K
  • Published: Jan 1, 2026
  • License: apache-2.0
  • Architecture: Transformer

SynGen-14B by qingy2024 is a 14-billion-parameter large language model based on Qwen3-14B, designed specifically for synthetic grounded reasoning generation. It excels at transforming chat datasets into reasoning datasets, mimicking the styles of models like DeepSeek R1 or OpenAI's GPT OSS. With a 32K context length, it is well suited to tasks that require explicit reasoning between a user prompt and the final output, particularly dataset modification and generation.

SynGen-14B: Synthetic Reasoning Generation

SynGen-14B is a 14-billion-parameter language model developed by qingy2024, built on Qwen/Qwen3-14B. Its core purpose is to generate synthetic grounded reasoning: an explicit intermediate reasoning step between a user's prompt and the model's final output. This is particularly useful for dataset modification, letting users convert standard chat datasets into reasoning-rich datasets that emulate the style of models like DeepSeek R1 or OpenAI's GPT OSS.

Key Capabilities

  • Synthetic Reasoning Generation: Inserts explicit reasoning steps into model outputs.
  • Dataset Transformation: Converts existing chat datasets into reasoning-focused datasets.
  • Style Emulation: Can generate reasoning in the style of DeepSeek R1 or GPT OSS.
  • Flexible Prompt Format: Uses dedicated tags (`<reasoning_style>`, `<system_prompt>`, `<user>`, `<assistant>`, `<think>`) for structured input and output; a sketch of how these might be assembled follows this list.
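
The model card does not spell out the full template, so the snippet below is only a minimal sketch of how those tags might be assembled for one chat record. The tag ordering, the placement of `<think>`, the style string `"DeepSeek R1"`, and the `build_syngen_prompt` helper are all illustrative assumptions, not a confirmed specification.

```python
# Minimal sketch of assembling a SynGen-14B prompt from one chat record.
# ASSUMPTION: tag order, whitespace, and the style string are illustrative
# guesses based on the tags the card lists, not a confirmed template.

def build_syngen_prompt(reasoning_style: str, system_prompt: str,
                        user_message: str, assistant_reply: str) -> str:
    """Wrap a chat turn in SynGen's tags; generation continues after <think>,
    so the model writes reasoning grounded in the existing reply."""
    return (
        f"<reasoning_style>{reasoning_style}</reasoning_style>\n"
        f"<system_prompt>{system_prompt}</system_prompt>\n"
        f"<user>{user_message}</user>\n"
        f"<assistant>{assistant_reply}</assistant>\n"
        f"<think>"  # SynGen-14B fills in the reasoning trace from here
    )

# Hypothetical chat record being converted into a reasoning example:
record = {
    "system": "You are a helpful assistant.",
    "user": "Why is the sky blue?",
    "assistant": "Shorter wavelengths of sunlight scatter more in air.",
}
prompt = build_syngen_prompt("DeepSeek R1", record["system"],
                             record["user"], record["assistant"])
```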

Training Details

SynGen-14B was trained from the Qwen/Qwen3-14B base model to generate synthetic grounded reasoning traces.

Recommended Usage

For optimal performance and to prevent repetitive loops, use a temperature of 1.0 with otherwise-default sampler settings when interacting with SynGen-14B.
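
For a concrete picture of those settings, here is a sketch of running the model locally with Hugging Face `transformers`. It assumes the checkpoint loads through the standard `AutoModelForCausalLM` path (the FP8 figure above describes the hosted endpoint's quantization, not necessarily the raw weights) and reuses the assumed tag format sketched earlier; adjust dtype and device for your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qingy2024/SynGen-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # adjust for your hardware
    device_map="auto",
)

# Prompt in the (assumed) tag format from the earlier sketch.
prompt = (
    "<reasoning_style>DeepSeek R1</reasoning_style>\n"
    "<user>Why is the sky blue?</user>\n"
    "<think>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,       # sampling rather than greedy decoding
    temperature=1.0,      # the card's recommended setting
    max_new_tokens=1024,  # illustrative budget for a reasoning trace
)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```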