codingmonster1234/chess-sft-modelv2
Text Generation · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Apr 5, 2026 · Architecture: Transformer · Concurrency Cost: 1
codingmonster1234/chess-sft-modelv2 is a 4-billion-parameter instruction-tuned causal language model fine-tuned from Qwen/Qwen3-4B-Instruct-2507. Developed by codingmonster1234, it supports a 32,768-token context window and was trained with supervised fine-tuning (SFT) using the TRL framework. It targets general text generation tasks, building on the capabilities of its Qwen3-4B-Instruct base model.
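A minimal inference sketch with Hugging Face Transformers is shown below. It assumes the checkpoint is hosted on the Hub under the model ID above and inherits the standard Qwen3 chat template from its base model; the example prompt is purely illustrative.

```python
# Minimal sketch: load the model in BF16 (matching the quant listed above)
# and generate a short completion via the chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codingmonster1234/chess-sft-modelv2"  # assumed Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, as listed in the model metadata
    device_map="auto",
)

# Illustrative prompt; any instruction-style input should work.
messages = [{"role": "user", "content": "Explain the Sicilian Defense in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```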