DCAgent/a1-self_instruct_naive

Text generation · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Mar 27, 2026 · License: other · Architecture: Transformer

DCAgent/a1-self_instruct_naive is an 8 billion parameter language model fine-tuned from Qwen/Qwen3-8B, with a 32,768 token context length. It was trained on the DCAgent/selfinstruct-naive-sandboxes-2_10k_glm_4.7_traces_jupiter dataset and is designed for self-instruction tasks and sandbox trace processing, leveraging its Qwen3-8B base for general language understanding.


Model Overview

DCAgent/a1-self_instruct_naive is an 8 billion parameter language model fine-tuned from the Qwen/Qwen3-8B architecture. The training configuration references a local, thinking-preprocessed snapshot of the DCAgent/selfinstruct-naive-sandboxes-2_10k_glm_4.7_traces_jupiter dataset (/e/scratch/jureap59/raoof1/sft_data/hf_hub/datasets--DCAgent--selfinstruct-naive-sandboxes-2_10k_glm_4.7_traces_jupiter/snapshots/a94dc74f610c2e01267b55839c458eb717a50ea5_thinking_preprocessed), indicating a specialization in self-instruction tasks and the processing of sandbox traces.
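As a fine-tune of Qwen3-8B, the checkpoint should load with the standard transformers chat workflow. Below is a minimal sketch, assuming the model is published under the DCAgent/a1-self_instruct_naive repository on the Hugging Face Hub and retains the Qwen3 chat template (neither is confirmed by this card), with a hypothetical trace-analysis prompt:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the checkpoint lives at this Hub id and keeps Qwen3's chat template.
model_id = "DCAgent/a1-self_instruct_naive"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick up the checkpoint's native precision
    device_map="auto",    # requires the `accelerate` package
)

# Hypothetical prompt in the model's presumed specialty: sandbox trace analysis.
messages = [{"role": "user", "content": "Explain what this sandbox trace does: ..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```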

Training Details

The model underwent a supervised fine-tuning (SFT) process with the following key hyperparameters (sketched as a Hugging Face training configuration after the list):

  • Learning Rate: 4e-05
  • Batch Size: 1 (train), 8 (eval)
  • Optimizer: AdamW_Torch_Fused with betas=(0.9, 0.98) and epsilon=1e-08
  • LR Scheduler: Cosine with a 0.1 warmup ratio
  • Epochs: 7.0
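These values map directly onto Hugging Face TrainingArguments. A minimal sketch of that mapping, assuming a transformers-based SFT setup (the actual training script is not published; the output directory and precision flag are illustrative):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="a1-self_instruct_naive-sft",  # hypothetical output path
    learning_rate=4e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    optim="adamw_torch_fused",   # PyTorch fused AdamW implementation
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,            # 10% of training steps spent warming up
    num_train_epochs=7.0,
    bf16=True,                   # assumption: bf16 is typical for 8B-scale SFT
)
```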

This configuration, with its small per-device batch size and cosine decay over seven epochs, suggests deliberate, fine-grained tuning. The model retains the 32,768 token context length of its Qwen3-8B base, making it suitable for the longer sequences, such as full sandbox traces, relevant to its specialized training data.
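Because inputs such as long execution traces can approach that limit, it is worth checking token counts before prompting. A small sketch using the tokenizer loaded above (the 32,768 figure comes from this card; the input file is hypothetical):

```python
MAX_CONTEXT = 32_768  # context length reported for the model

trace_text = open("sandbox_trace.log").read()  # hypothetical input file
n_tokens = len(tokenizer(trace_text)["input_ids"])
if n_tokens > MAX_CONTEXT:
    print(f"Trace is {n_tokens} tokens; truncate or chunk it before prompting.")
```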

Potential Use Cases

Given its fine-tuning on a self-instruct and sandbox trace dataset, this model is likely optimized for:

  • Understanding and generating responses based on instructional data.
  • Analyzing and interpreting execution traces or sandbox logs.
  • Tasks requiring detailed comprehension of structured or semi-structured instructional content.