rombodawg/Rombos-Coder-V2.5-Qwen-7b
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · License: apache-2.0 · Architecture: Transformer · Open Weights
Rombos-Coder-V2.5-Qwen-7b is a 7.6-billion-parameter language model created by rombodawg through continuous fine-tuning of Qwen2.5-Coder-7B-Instruct. It uses the TIES merge method to combine the fine-tuned weights with its base and instruct predecessors, aiming to outperform both. With a 131,072-token context length, it is optimized for coding tasks, showing improved code generation and understanding.
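The TIES merge mentioned above works in three steps: trim each fine-tuned model's parameter deltas to the largest-magnitude entries, elect a sign per parameter, then average only the deltas that agree with the elected sign. The following is a toy sketch of that idea over flat lists of floats; the function name and `density` parameter are illustrative, and this is not the actual merge tooling used for this model.

```python
def ties_merge(base, finetuned_list, density=0.5):
    """Toy TIES merge sketch: trim, elect sign, disjoint-average deltas."""
    # Compute per-model deltas from the shared base parameters.
    deltas = [[f - b for f, b in zip(ft, base)] for ft in finetuned_list]

    # Trim: keep only the top `density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(len(d) * density))
        thresh = sorted((abs(x) for x in d), reverse=True)[k - 1]
        trimmed.append([x if abs(x) >= thresh else 0.0 for x in d])

    merged = []
    for i, b in enumerate(base):
        vals = [t[i] for t in trimmed]
        # Elect the dominant sign for this parameter across models.
        sign = 1.0 if sum(vals) >= 0 else -1.0
        # Average only the surviving deltas that agree with that sign.
        agree = [v for v in vals if v != 0.0 and (v > 0) == (sign > 0)]
        merged.append(b + (sum(agree) / len(agree) if agree else 0.0))
    return merged

# Two toy "fine-tunes" of a 4-parameter base model:
base = [0.0, 0.0, 0.0, 0.0]
ft_a = [1.0, 0.1, -1.0, 0.2]
ft_b = [1.0, -0.1, 1.0, 0.3]
print(ties_merge(base, [ft_a, ft_b]))  # → [1.0, 0.0, 1.0, 0.0]
```

Note how the conflicting third parameter (-1.0 vs 1.0) resolves to the elected sign rather than cancelling to zero, which is the point of sign election over naive averaging.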