abcorrea/struct-v8
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Jan 16, 2026 · Architecture: Transformer

abcorrea/struct-v8 is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B-Thinking-2507. It was trained with supervised fine-tuning (SFT) using the TRL framework. The model is designed for general text-generation tasks, leveraging its Qwen3 base for conversational and reasoning capabilities within its 40,960-token context window.
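A minimal usage sketch with Hugging Face `transformers` is shown below. The `pipeline` call follows the standard transformers text-generation API; the sampling parameters and the example prompt are illustrative assumptions, not values published for this model.

```python
# Hedged sketch: chat-style generation with abcorrea/struct-v8 via transformers.
# Sampling parameters are illustrative defaults, not published recommendations.

def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat-format message list as expected by Qwen3-style chat templates."""
    return [{"role": "user", "content": user_prompt}]

GENERATION_KWARGS = {
    "max_new_tokens": 1024,  # keep prompt + output within the context window
    "do_sample": True,
    "temperature": 0.6,      # illustrative; tune for your task
    "top_p": 0.95,
}

def generate(user_prompt: str) -> str:
    # Requires `pip install transformers torch`; downloads ~8 GB of BF16 weights.
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="abcorrea/struct-v8",
        torch_dtype="bfloat16",  # matches the published BF16 precision
        device_map="auto",
    )
    out = pipe(build_messages(user_prompt), **GENERATION_KWARGS)
    # Chat-format output: the last message in generated_text is the reply.
    return out[0]["generated_text"][-1]["content"]

if __name__ == "__main__":
    print(generate("Summarize the Qwen3 architecture in two sentences."))
```

The model load is kept inside `generate` so the helpers above can be reused (for example, when serving the model behind an API) without triggering the weight download at import time.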
