welyty/qwen3-4b-alpaca-chatwithme
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Apr 10, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

welyty/qwen3-4b-alpaca-chatwithme is a 4-billion-parameter Qwen3-4B model fine-tuned by welyty with LoRA on the Alpaca dataset. The model is tuned for instruction-following conversation, reaching a final training loss of 1.0875 and a perplexity of approximately 3.00. It targets conversational AI applications that need a compact yet capable instruction-tuned language model, and supports a 32768-token context length.
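The two reported metrics are consistent with each other: perplexity is the exponential of the mean cross-entropy loss (in nats), so the ~3.00 figure follows directly from the 1.0875 training loss. A quick sanity check:

```python
import math

# Perplexity = exp(mean cross-entropy loss in nats).
# Using the final training loss reported on this card:
final_loss = 1.0875
perplexity = math.exp(final_loss)

print(round(perplexity, 2))  # ≈ 2.97, consistent with the "~3.00" stated above
```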
