AAAAnsah/qwen7b_es_wp_14
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Mar 25, 2026 · Architecture: Transformer
AAAAnsah/qwen7b_es_wp_14 is a 7.6 billion parameter instruction-tuned causal language model, fine-tuned from unsloth/Qwen2.5-7B-Instruct. It was trained with the TRL framework using Supervised Fine-Tuning (SFT) to enhance its conversational capabilities. The model is intended for general text generation tasks, producing coherent, contextually relevant responses to user prompts.
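A minimal usage sketch with the `transformers` library, assuming the model is hosted on the Hugging Face Hub under the repo id shown on this card and inherits the standard Qwen2.5 chat template; the prompt and generation parameters below are illustrative, not part of the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "AAAAnsah/qwen7b_es_wp_14"  # repo id from this model card

# Load tokenizer and model weights from the Hub (requires sufficient GPU/CPU memory)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")

# Build a chat-style prompt; the base Qwen2.5-Instruct chat template is assumed
messages = [{"role": "user", "content": "Summarize the benefits of supervised fine-tuning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

With a 32k context window and FP8 quantization available, the model can also be served through inference engines such as vLLM; the snippet above is the plain `transformers` path for local experimentation.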