djtony707/synapse-3b
**Text Generation** · Open Weights

- Concurrency Cost: 1
- Model Size: 3.1B
- Quant: BF16
- Context Length: 32K
- Published: Mar 21, 2026
- License: apache-2.0
- Architecture: Transformer

Synapse-3B by djtony707 is a 3.1-billion-parameter merged specialist model based on Qwen2.5-3B-Instruct, with a 32K context length. It merges four distinct LoRA adapters (math, code, general, coordinator) via TIES merging, which preserves each adapter's specialized capabilities while avoiding catastrophic forgetting. The model is optimized for collaborative inference within the TITAN Synapse engine and targets tasks that combine mathematical reasoning, code generation, and general instruction following.
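A merge of this shape can be reproduced with the PEFT library's `add_weighted_adapter`. The sketch below is illustrative only, not the author's published recipe: the adapter repository names, the equal weighting, and the `density=0.5` trim ratio are assumptions.

```python
# Minimal sketch of a TIES merge of four LoRA adapters onto the base model.
# Adapter paths below are hypothetical; Synapse-3B's actual adapters are not listed here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "Qwen/Qwen2.5-3B-Instruct"
ADAPTERS = {  # hypothetical repo names, for illustration only
    "math": "djtony707/synapse-lora-math",
    "code": "djtony707/synapse-lora-code",
    "general": "djtony707/synapse-lora-general",
    "coordinator": "djtony707/synapse-lora-coordinator",
}

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

# Attach the first adapter, then load the rest under distinct names.
names = list(ADAPTERS)
model = PeftModel.from_pretrained(base, ADAPTERS[names[0]], adapter_name=names[0])
for name in names[1:]:
    model.load_adapter(ADAPTERS[name], adapter_name=name)

# TIES merge: trim low-magnitude weight deltas (density), elect a sign per
# parameter by majority, and merge the surviving task vectors into one adapter.
model.add_weighted_adapter(
    adapters=names,
    weights=[1.0] * len(names),   # assumed equal weighting
    adapter_name="synapse",
    combination_type="ties",
    density=0.5,                  # assumed fraction of deltas kept per adapter
)
model.set_adapter("synapse")

# Bake the merged adapter into the base weights for a standalone BF16 checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("synapse-3b")
AutoTokenizer.from_pretrained(BASE).save_pretrained("synapse-3b")
```

The TIES procedure (trim, elect sign, merge) keeps only each adapter's largest-magnitude deltas and resolves sign conflicts by majority vote before averaging, which is what lets several specialties coexist in a single set of weights instead of interfering with one another.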
