rombodawg/Rombos-Coder-V2.5-Qwen-14b

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 14.8B · Quant: FP8 · Context Length: 32k · License: apache-2.0 · Architecture: Transformer · Open Weights

Rombos-Coder-V2.5-Qwen-14b is a 14.8-billion-parameter language model developed by rombodawg, fine-tuned from Qwen2.5-Coder-14B-Instruct. It was produced with a custom "Continuous Finetuning" method that applies a TIES merge of the instruct and base models. The author reports improved performance over both the original instruct and base counterparts, with a specialization in code-related tasks.


Rombos-Coder-V2.5-Qwen-14b Overview

Rombos-Coder-V2.5-Qwen-14b is a 14.8-billion-parameter model built on the Qwen2.5-Coder-14B-Instruct architecture. Developed by rombodawg, it was created by merging the instruct model with its base counterpart using the TIES merge method, as part of a custom "Continuous Finetuning" approach described in a document linked from the original model card.
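To make the merge step concrete, the general TIES procedure (trim each fine-tune's task vector, elect a per-entry sign, then average only sign-agreeing entries) can be sketched in NumPy. This is an illustrative reimplementation of the published TIES technique, not rombodawg's actual merge script; the `density` hyperparameter and function name are assumptions for the sketch:

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5):
    """Merge fine-tuned versions of one weight tensor into a base tensor
    using the TIES recipe: trim, elect sign, disjoint merge.

    base:      base model parameter (np.ndarray)
    finetuned: list of fine-tuned versions of the same parameter
    density:   fraction of task-vector entries kept after trimming (assumed knob)
    """
    # 1) Task vectors: difference between each fine-tune and the base.
    deltas = [ft - base for ft in finetuned]

    # 2) Trim: keep only the top-`density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(round(density * d.size)))
        thresh = np.sort(np.abs(d), axis=None)[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))

    # 3) Elect a sign per entry from the magnitude-weighted sum of deltas.
    sign = np.sign(np.sum(trimmed, axis=0))
    sign[sign == 0] = 1.0  # break exact ties toward positive

    # 4) Disjoint mean: average only entries that agree with the elected sign.
    agree = [np.where(np.sign(t) == sign, t, 0.0) for t in trimmed]
    counts = np.sum([a != 0 for a in agree], axis=0)
    merged_delta = np.sum(agree, axis=0) / np.maximum(counts, 1)

    return base + merged_delta
```

With a single fine-tune and `density=1.0` the merge reduces to that fine-tune's weights, which is a quick sanity check on the recipe; with conflicting fine-tunes, only the entries matching the elected sign contribute.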

Key Capabilities

  • Enhanced Performance: The author reports higher benchmark performance than both the original Qwen2.5-Coder-14B-Instruct and its base model.
  • Code-Oriented: As indicated by its lineage from "Coder" models, it is primarily optimized for code generation and understanding tasks.
  • Custom Finetuning: Built with the author's "Continuous Finetuning" methodology, which folds successive fine-tunes back together via a TIES merge.

Good For

  • Developers and researchers looking for a Qwen-based model with improved code-centric performance.
  • Experimenting with models fine-tuned using custom merging techniques.
  • Applications requiring robust code generation and comprehension.
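For local experimentation, the model can in principle be loaded with the standard Hugging Face `transformers` chat workflow used for Qwen-family models. The snippet below is a hedged sketch: it assumes the weights are published under `rombodawg/Rombos-Coder-V2.5-Qwen-14b` on the Hub (as the page title suggests) and that the tokenizer ships with the usual Qwen2.5 chat template; running it requires enough GPU memory for a 14.8B-parameter model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repo id, taken from the model card title.
model_id = "rombodawg/Rombos-Coder-V2.5-Qwen-14b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # spread layers across available devices
)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a singly linked list."},
]

# Render the chat into the model's prompt format, then generate.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.input_ids.shape[-1]:],
                       skip_special_tokens=True))
```

Sampling parameters (temperature, top-p) are left at `generate`'s defaults here; code-generation use cases often benefit from lower temperatures.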