Evil-paradox007/qwen_7b_finetuned
Task: Text generation
Model size: 7.6B parameters
Quantization: FP8
Context length: 32k
Concurrency cost: 1
Published: Apr 7, 2026
License: apache-2.0
Architecture: Transformer (open weights)

Evil-paradox007/qwen_7b_finetuned is a 7.6-billion-parameter Qwen2-based causal language model fine-tuned by Evil-paradox007. Training was accelerated with Unsloth and Hugging Face's TRL library, starting from the unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit base. The model offers a 32,768-token context length and targets general instruction-following tasks.
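Since this is a Qwen2.5-Instruct derivative, a typical way to use it is through Hugging Face Transformers with the tokenizer's chat template. The sketch below is an illustrative assumption, not usage documented by the model author; the repo id comes from this card, while the prompt and generation settings are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Evil-paradox007/qwen_7b_finetuned"


def build_inputs(tokenizer, user_prompt):
    # Instruct-tuned Qwen models expect their chat template, so apply it
    # rather than feeding raw text to the model.
    messages = [{"role": "user", "content": user_prompt}]
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )


def generate_reply(user_prompt, max_new_tokens=128):
    # Note: this downloads the full 7.6B-parameter weights on first use and
    # in practice needs a GPU with sufficient memory (or quantized weights).
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    text = build_inputs(tokenizer, user_prompt)
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the newly generated text.
    new_tokens = out[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Prompts longer than the 32k-token context window will be rejected or truncated, so long documents should be chunked before generation.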
