AAAAnsah/qwen7b_bma_wp_1
Text generation model
- Model size: 7.6B parameters
- Quantization: FP8
- Context length: 32K
- Concurrency cost: 1
- Architecture: Transformer
- Published: Mar 26, 2026

AAAAnsah/qwen7b_bma_wp_1 is a 7.6-billion-parameter instruction-tuned causal language model, fine-tuned from unsloth/Qwen2.5-7B-Instruct via supervised fine-tuning (SFT) with the TRL framework. It supports a 32K-token context length and is intended for general text-generation tasks, building on the instruction-following capabilities of its base model.
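The card does not include usage code, so here is a minimal sketch of how an instruction-tuned Qwen2.5-based model like this is typically loaded and prompted with the Hugging Face `transformers` library. The chat message layout and the `apply_chat_template` call follow standard Qwen2.5-Instruct conventions; the system prompt text and the `generate` helper are illustrative assumptions, not part of this card.

```python
MODEL_ID = "AAAAnsah/qwen7b_bma_wp_1"  # model name from this card

def build_chat(user_prompt: str) -> list[dict]:
    """Compose messages in the chat format Qwen2.5-Instruct models expect.

    The system prompt here is a placeholder assumption, not prescribed by the card.
    """
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]

def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat turn through the model (requires `transformers` and `torch`)."""
    # Imported lazily so build_chat() stays usable without these dependencies.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # apply_chat_template inserts the instruction-tuning special tokens.
    text = tokenizer.apply_chat_template(
        build_chat(user_prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens before decoding so only the reply remains.
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Prompts that stay well inside the 32K-token context window can be passed directly; longer inputs would need truncation or chunking upstream.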
