Wvidit/Synnapse-Qwen2.5-3B-sft
Text generation · Concurrency cost: 1 · Model size: 3.1B · Quant: BF16 · Context length: 32k · Published: Mar 29, 2026 · Architecture: Transformer

Wvidit/Synnapse-Qwen2.5-3B-sft is a 3.1 billion parameter instruction-tuned causal language model based on the Qwen2.5 architecture. The 'sft' suffix indicates supervised fine-tuning, meaning the base model has been further trained for conversational or task-oriented use. With a context length of 32768 tokens, it can process long inputs and generate coherent, contextually relevant responses.
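A minimal usage sketch, assuming the checkpoint is loadable with the Hugging Face transformers library under the model id above. Qwen2.5 chat models use the ChatML prompt layout; `build_chatml_prompt` below illustrates that layout explicitly, while the loading and generation code (including the BF16 dtype and `max_new_tokens` value) is illustrative, not prescribed by the model card.

```python
def build_chatml_prompt(messages):
    """Render a message list in the ChatML format used by Qwen2.5 chat models.

    messages: list of {"role": ..., "content": ...} dicts.
    Returns the prompt string ending with an open assistant turn.
    """
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")  # model continues from here
    return "".join(parts)


def main():
    # Hypothetical generation settings; requires `pip install transformers torch`.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Wvidit/Synnapse-Qwen2.5-3B-sft"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")  # BF16 per card

    messages = [{"role": "user", "content": "Summarize what supervised fine-tuning is."}]
    # apply_chat_template is the canonical way to format the prompt;
    # build_chatml_prompt above shows what the rendered layout looks like.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

The prompt-building helper is separable from generation, which makes it easy to inspect exactly what token stream the model will see before committing to a (potentially slow) forward pass.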
