DCAgent/a1-nemotron_csharp
The DCAgent/a1-nemotron_csharp model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B on a specialized C# dataset. It is designed to improve performance in C# code generation, analysis, and related development workflows, and its 32,768-token context length supports substantial codebases and complex programming prompts.
Overview
This model, sft_a1_nemotron_csharp__Qwen3-8B, is a fine-tuned variant of the Qwen/Qwen3-8B architecture adapted for C# programming tasks. It retains the 8-billion-parameter base model and was trained on a specialized dataset (/e/scratch/jureap59/raoof1/sft_data/hf_hub/datasets--DCAgent--exp_rpt_nemotron-csharp_10k_glm_4.7_traces_jupiter/snapshots/6c7bf05a40f58526483b1a8b98552e539301f5ab_thinking_preprocessed).
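The cache path above follows the Hugging Face Hub naming scheme, which suggests the underlying dataset repo is DCAgent/exp_rpt_nemotron-csharp_10k_glm_4.7_traces_jupiter. A minimal loading sketch is shown below; the repo id is inferred (it may be private or renamed) and the column names are assumptions about the "_thinking_preprocessed" format.

```python
# Hedged sketch: load the SFT dataset by its (inferred) Hub repo id.
# The repo id is an assumption derived from the cache path; adjust if the
# dataset is private, renamed, or only available as the local snapshot.
from datasets import load_dataset

dataset = load_dataset(
    "DCAgent/exp_rpt_nemotron-csharp_10k_glm_4.7_traces_jupiter",
    split="train",
)

# Inspect the actual schema; field names such as "messages" are assumptions.
print(dataset)
print(dataset[0].keys())
```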
Training Details
The model was trained with the following key hyperparameters (a hedged configuration sketch follows the list):
- Learning Rate: 4e-05
- Optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.98) and epsilon=1e-08
- Batch Size: 1 per device (train) and 8 per device (eval), for effective batch sizes of 16 (train) and 128 (eval) across 16 devices.
- Epochs: 7.0
- LR Scheduler: Cosine with a 0.1 warmup ratio.
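The hyperparameters above map directly onto `transformers.TrainingArguments`. The sketch below is illustrative, not the exact training script: the output directory, gradient accumulation setting, and bf16 flag are assumptions, and the totals of 16/128 come from running on 16 devices.

```python
# Hedged sketch of the reported hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sft_a1_nemotron_csharp__Qwen3-8B",  # assumed output path
    learning_rate=4e-5,
    optim="adamw_torch_fused",          # ADAMW_TORCH_FUSED
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-8,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=1,      # assumed; 1 x 16 devices = effective 16
    num_train_epochs=7.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,                          # assumption: typical for an 8B SFT run
)
```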
Framework Versions
The training environment utilized:
- Transformers 4.57.6
- Pytorch 2.9.1+cu130
- Datasets 4.7.0
- Tokenizers 0.22.2
Intended Use
The original model card does not specify intended uses or limitations, but the fine-tuning on a C#-specific dataset indicates that its primary applications are C# code generation, completion, analysis, and related software development tasks. A hedged inference sketch follows.
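The example below assumes the model is published under the DCAgent/a1-nemotron_csharp repo id and keeps a Qwen3-style chat template; the prompt, dtype, and generation settings are illustrative only.

```python
# Minimal inference sketch for C# code generation (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DCAgent/a1-nemotron_csharp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference on a capable GPU
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": "Write a C# extension method that splits an IEnumerable<T> into batches of size n.",
    }
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Given the long context window, the same pattern can be used to paste in larger C# files for analysis or refactoring prompts, subject to available GPU memory.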