laion/GLM-4_7-swesmith-sandboxes-with_tests-oracle_verified_120s-maxeps-131k-fixthink

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 32k · Published: Feb 13, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The laion/GLM-4_7-swesmith-sandboxes-with_tests-oracle_verified_120s-maxeps-131k-fixthink model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on a dataset of 'thinking preprocessed' data, suggesting optimization for complex reasoning and problem-solving tasks. With a context length of 32768 tokens, it is designed for applications requiring extensive contextual understanding and processing.


Overview

This model, GLM-4_7-swesmith-sandboxes-with_tests-oracle_verified_120s-maxeps-131k-fixthink, is an 8-billion-parameter language model derived from the Qwen/Qwen3-8B architecture. It was fine-tuned on the dataset snapshot identified by the local path /data/cat/ws/befe330h-befe330h-otagent/huggingface/hub/datasets--DCAgent2--GLM-4.7-swesmith-sandboxes-with_tests-oracle_verified_120s-maxeps-131k/snapshots/e209a88db18950c3ce4e72a45a6088561d99d1bf_thinking_preprocessed.

Key Capabilities

  • Enhanced Reasoning: Fine-tuning on a 'thinking preprocessed' dataset suggests a focus on improving the model's ability to work through complex, multi-step reasoning and problem-solving.
  • Large Context Window: Inheriting a 32768-token context length, this model is suitable for tasks requiring extensive contextual understanding and the processing of long documents or conversations.
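When working with the 32768-token window in practice, the prompt and the generated reply must fit inside it together, so long inputs need trimming before inference. A minimal sketch of that budgeting, using a whitespace split as a stand-in for the model's real tokenizer (an assumption for illustration only; the budget constants are likewise illustrative):

```python
# Sketch: budgeting the model's 32768-token context window.
# count_tokens is a crude whitespace proxy for a real tokenizer --
# in practice, use the tokenizer shipped with the model.

CONTEXT_LENGTH = 32768        # model's maximum context, per the card
RESERVED_FOR_OUTPUT = 2048    # assumed generation budget

def count_tokens(text: str) -> int:
    """Crude proxy for a tokenizer's token count."""
    return len(text.split())

def fit_prompt(prompt: str) -> str:
    """Truncate the prompt so prompt + output fits the context window."""
    budget = CONTEXT_LENGTH - RESERVED_FOR_OUTPUT
    tokens = prompt.split()
    if len(tokens) <= budget:
        return prompt
    # Keep the most recent tokens, dropping the oldest ones.
    return " ".join(tokens[-budget:])

long_prompt = "word " * 40000
trimmed = fit_prompt(long_prompt)
print(count_tokens(trimmed))  # 30720
```

Keeping the tail of the prompt (rather than the head) is a common choice for chat-style inputs, where the most recent turns matter most; document-summarization workloads may prefer the opposite.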

Good for

  • Applications demanding advanced reasoning or logical inference.
  • Scenarios where processing and understanding long-form text is crucial.
  • Use cases that benefit from a model trained on data specifically curated for 'thinking' patterns.