DCAgent/a1-agenttuning_mind2web

TEXT GENERATION · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Concurrency Cost: 1 · Published: Mar 25, 2026 · License: other · Architecture: Transformer

DCAgent/a1-agenttuning_mind2web is an 8 billion parameter language model fine-tuned from Qwen/Qwen3-8B on the neulab-agenttuning-mind2web-sandboxes_glm_4.7_traces_jupiter dataset. It is optimized for agentic tasks, with a focus on web-based agent interaction and automated task execution.
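A minimal usage sketch with the Hugging Face transformers library. The system prompt, example task, and generation settings below are illustrative assumptions, not values taken from the model card:

```python
# Usage sketch (assumes `transformers` and `torch` are installed).
MODEL_ID = "DCAgent/a1-agenttuning_mind2web"

def build_messages(task: str) -> list:
    # Chat-style request for an agentic web task.
    # The system prompt here is a hypothetical example.
    return [
        {"role": "system", "content": "You are a web-navigation agent."},
        {"role": "user", "content": task},
    ]

def generate(task: str, max_new_tokens: int = 512) -> str:
    # Imported lazily so build_messages stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    prompt = tokenizer.apply_chat_template(
        build_messages(task), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```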


Overview

This model, DCAgent/a1-agenttuning_mind2web, is an 8 billion parameter language model derived from the Qwen/Qwen3-8B architecture. It was fine-tuned on a specialized dataset, recorded in the training config as the local path /e/scratch/jureap59/raoof1/sft_data/hf_hub/datasets--DCAgent--neulab-agenttuning-mind2web-sandboxes_glm_4.7_traces_jupiter/snapshots/18a00618fba76dd32bdea57571d69b0a5ee386ad_thinking_preprocessed, whose "agenttuning" and "mind2web" naming points to a focus on agentic capabilities, particularly within web environments.

Key Capabilities

  • Agentic Task Optimization: Fine-tuned on a dataset related to 'agenttuning' and 'mind2web', indicating a specialization in tasks requiring autonomous interaction with web interfaces or complex multi-step processes.
  • Foundation Model: Built upon the robust Qwen3-8B base, inheriting its general language understanding and generation capabilities.
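To illustrate the interaction style such datasets target, web-agent trajectories typically ground actions (e.g. CLICK, TYPE, SELECT) in page elements. The action format and parser below are a hypothetical sketch for illustration only, not the model's actual output schema:

```python
# Hypothetical Mind2Web-style action strings, e.g.:
#   "CLICK [12]"
#   "TYPE [7] [cheap flights to Oslo]"
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    op: str                  # CLICK, TYPE, or SELECT
    element_id: int          # target element in the page snapshot
    value: Optional[str]     # text payload for TYPE/SELECT, if any

ACTION_RE = re.compile(r"^(CLICK|TYPE|SELECT)\s+\[(\d+)\](?:\s+\[(.*)\])?$")

def parse_action(line: str) -> Action:
    m = ACTION_RE.match(line.strip())
    if m is None:
        raise ValueError(f"unrecognized action: {line!r}")
    return Action(op=m.group(1), element_id=int(m.group(2)), value=m.group(3))
```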

Training Details

The model was trained with the following hyperparameters:

  • Learning Rate: 4e-05
  • Batch Size: 1 per device (train), 8 per device (eval); global batch size of 16 (train) and 128 (eval) across 16 GPUs.
  • Optimizer: fused AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.98) and epsilon=1e-08.
  • Scheduler: Cosine learning rate scheduler with a 0.1 warmup ratio.
  • Epochs: 7.0
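The global batch sizes above follow from per-device batch size times GPU count; gradient accumulation is not mentioned in the card, so it is assumed to be 1 in this sketch:

```python
# Global (effective) batch size in a data-parallel run:
# per-device batch size x number of GPUs x gradient-accumulation steps.
def global_batch_size(per_device: int, num_gpus: int, grad_accum: int = 1) -> int:
    return per_device * num_gpus * grad_accum

train_global = global_batch_size(per_device=1, num_gpus=16)  # 16, as reported
eval_global = global_batch_size(per_device=8, num_gpus=16)   # 128, as reported
```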

Good For

  • Developing and deploying AI agents for web automation.
  • Tasks requiring understanding and execution within web-based sandboxes.
  • Research into agentic AI and fine-tuning large language models for specific interactive environments.