lllqaq/Qwen2.5-Coder-14B-Instruct-num11-v1-v2-v3-pairs-v3-triples-post-r2egym

Text Generation · Concurrency Cost: 1 · Model Size: 14.8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 18, 2026 · License: other · Architecture: Transformer

The lllqaq/Qwen2.5-Coder-14B-Instruct-num11-v1-v2-v3-pairs-v3-triples-post-r2egym model is a 14.8-billion-parameter fine-tune of the instruction-tuned Qwen2.5-Coder-14B-Instruct. It was further trained on the r2egym_sft_trajectories dataset, specializing it for code-related tasks. With a context length of 32,768 tokens, the model is designed for advanced code generation and understanding applications.


Model Overview

This model, lllqaq/Qwen2.5-Coder-14B-Instruct-num11-v1-v2-v3-pairs-v3-triples-post-r2egym, is a specialized fine-tune of the Qwen2.5-Coder-14B-Instruct base model. It retains the base model's 14.8-billion-parameter architecture and 32,768-token context window, making it suitable for complex coding tasks.

Key Capabilities

  • Code-centric Fine-tuning: The model has undergone specific fine-tuning on the r2egym_sft_trajectories dataset, indicating an optimization for code generation, completion, and understanding within a programming context.
  • Large Context Window: The 32,768-token context length allows the model to process and generate extensive code blocks or reason over large codebases, which is crucial for sophisticated development tasks. A minimal loading and inference sketch follows this list.
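
As a rough illustration of how the model might be used, here is a minimal inference sketch based on the standard Hugging Face transformers chat-template API. The prompt, generation length, and device/dtype settings are illustrative assumptions rather than documented recommendations for this checkpoint.

```python
# Minimal inference sketch (assumptions: standard transformers chat-template flow,
# illustrative prompt and generation settings).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lllqaq/Qwen2.5-Coder-14B-Instruct-num11-v1-v2-v3-pairs-v3-triples-post-r2egym"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let transformers pick the dtype from the checkpoint config
    device_map="auto",    # spread the 14.8B parameters across available GPUs
)

# Qwen2.5-Coder-Instruct models expect a chat-formatted prompt.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that parses a CSV file into a list of dicts."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```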

Training Details

The fine-tuning process used the following hyperparameters (a configuration sketch follows the list):

  • Learning Rate: 1e-05
  • Optimizer: ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08
  • Epochs: 2.0
  • Batch Size: Total training batch size of 6, distributed across 6 GPUs.
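
The exact fine-tuning stack is not documented. As a sketch only, the hyperparameters above map onto the Hugging Face TrainingArguments API roughly as follows; the output path, per-device batch split, and mixed-precision setting are assumptions.

```python
# Rough mapping of the listed hyperparameters onto TrainingArguments
# (assumptions: output_dir, per-device batch of 1 with no gradient accumulation, bf16).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen2.5-coder-14b-r2egym-sft",  # hypothetical output path
    learning_rate=1e-5,
    optim="adamw_torch",             # ADAMW_TORCH optimizer
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    num_train_epochs=2.0,
    per_device_train_batch_size=1,   # assumed: 1 sample x 6 GPUs = total batch size of 6
    bf16=True,                       # assumed mixed-precision setting
)
```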

Good For

  • Developers and researchers working on code generation, code completion, or code analysis tasks.
  • Applications requiring a large context window to handle extensive programming logic or multiple related code files.