sesaily/Qwen2.5-Coder-7B-Frends-Instruct
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 5, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

sesaily/Qwen2.5-Coder-7B-Frends-Instruct is a 7.6-billion-parameter instruction-tuned causal language model published by sesaily. It is fine-tuned from unsloth/Qwen2.5-Coder-7B-Instruct, with training accelerated using Unsloth and Hugging Face's TRL library. With a 32,768-token context length, it is suited to code-related tasks and other applications that benefit from long-context processing.
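As an instruction-tuned Qwen2.5 derivative, the model expects prompts in the ChatML format. The sketch below shows roughly what that format looks like; it is an illustration only, and in practice you would let the tokenizer build the prompt via `tokenizer.apply_chat_template(...)` from the `transformers` library (the example messages are hypothetical).

```python
# Illustrative sketch of the ChatML prompt layout used by Qwen2.5 instruct
# models. Real code should use the tokenizer's chat template instead of
# hand-building this string.

def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts into ChatML text."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # End with an open assistant turn so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

# Hypothetical example conversation.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
prompt = build_chatml_prompt(messages)
```

To run the model itself, load it with `AutoModelForCausalLM.from_pretrained("sesaily/Qwen2.5-Coder-7B-Frends-Instruct")` and the matching `AutoTokenizer`, then pass the templated prompt to `generate`.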
