Aznaur/tbench-qwen-sft-multitask-clean-v10 is an 8 billion parameter Qwen3-based model fine-tuned by Aznaur for terminal command execution. It is trained exclusively on clean, successful terminal-bench trajectories, focusing on correct command patterns without negative examples. With a 32768-token context length and FlashAttention 2 support, the model is optimized for generating accurate and efficient command sequences in extended terminal sessions.
T-Bench Qwen SFT Multi-Task Clean v10 Overview
This model, developed by Aznaur, is an 8 billion parameter Qwen3-based language model specifically fine-tuned for generating correct terminal command execution patterns. Unlike models trained with negative examples, this "clean" version focuses exclusively on successful trajectories, aiming for high accuracy in command generation.
Key Capabilities
- Optimized for Terminal Commands: Specialized in understanding terminal environments and generating command sequences for them.
- Clean Trajectory Training: Trained solely on successful command executions, reducing the likelihood of incorrect or problematic outputs.
- Extended Context Length: Supports a substantial 32768-token context, enabling long and complex terminal sessions.
- Memory Efficient: Leverages FlashAttention 2 and bfloat16 precision for improved performance and reduced memory footprint.
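Assuming the checkpoint is published on the Hugging Face Hub under the ID above, loading it with the settings the card describes (bfloat16 precision, FlashAttention 2) might look like the sketch below. This is an illustration, not official usage: the `load_model` helper is hypothetical, FlashAttention 2 requires the separate `flash-attn` package and a supported GPU, and `device_map="auto"` depends on `accelerate`.

```python
MODEL_ID = "Aznaur/tbench-qwen-sft-multitask-clean-v10"
MAX_CONTEXT = 32768  # token limit stated in the model card

def load_model():
    # Imports are kept local so the sketch can be read without the
    # heavy dependencies installed; real code would import at the top.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,               # precision noted above
        attn_implementation="flash_attention_2",  # needs flash-attn + GPU
        device_map="auto",                        # needs accelerate
    )
    return tokenizer, model
```

If `flash-attn` is unavailable, dropping the `attn_implementation` argument falls back to the default attention kernel at the cost of memory efficiency.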
Good for
- Automating terminal tasks and scripting.
- Generating accurate command sequences for developer tools.
- Assisting with complex command-line operations requiring long context.
- Applications where precise and successful command execution is critical.
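Even with a 32768-token window, long-running terminal sessions can eventually overflow the context. One simple strategy, sketched below under the assumption that older turns matter least, is to keep only the most recent history that fits the budget; `trim_history` is a hypothetical helper, and its default token counter is a crude whitespace split that real code would replace with the model's tokenizer.

```python
def trim_history(turns, max_tokens=32768, count=lambda s: len(s.split())):
    """Keep the most recent turns whose combined cost fits max_tokens.

    `count` approximates token cost per turn; swap in the model's
    tokenizer for accurate counts. Turns are dropped oldest-first.
    """
    kept, total = [], 0
    for turn in reversed(turns):  # walk from newest to oldest
        cost = count(turn)
        if total + cost > max_tokens:
            break  # everything older is discarded
        kept.append(turn)
        total += cost
    return list(reversed(kept))  # restore chronological order
```

For example, `trim_history(["a b", "c d e", "f"], max_tokens=4)` keeps only the two most recent turns, since adding the oldest would exceed the budget.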