jenny08311/affine-test-4

Text generation · Concurrency cost: 2 · Model size: 32B · Quant: FP8 · Context length: 32k · Published: Apr 10, 2026 · Architecture: Transformer

jenny08311/affine-test-4 is a 32 billion parameter language model created by merging pre-trained models using the TIES method, with Qwen/Qwen3-32B as its base. This model integrates components from gurand/Affine-5CFL2YaBrJZCUSPBTjcDcTUSbnrm3UtAgKRsTU2KRcu9nvyR and gurand/Affine-5CrMoVRmR8yP69Kh4iyrELehGYzUh3t7Q9hYVZUSjJA3VqDV. It is designed for general language tasks, leveraging the combined strengths of its constituent models.


Model Overview

jenny08311/affine-test-4 is a 32 billion parameter language model, developed by jenny08311, that was created through a merge of pre-trained models. This model utilizes the TIES merge method and is built upon the Qwen/Qwen3-32B base architecture.

Merge Details

This model integrates components from two specific models:

  • gurand/Affine-5CFL2YaBrJZCUSPBTjcDcTUSbnrm3UtAgKRsTU2KRcu9nvyR
  • gurand/Affine-5CrMoVRmR8yP69Kh4iyrELehGYzUh3t7Q9hYVZUSjJA3VqDV

The merging process applies distinct density and weight settings to the MLP and self-attention layers, tailoring how the strengths of the two source models are combined. The configuration uses bfloat16 as the dtype and enables the int8_mask and normalize options.
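For readers who want to reproduce a merge like this, a mergekit-style configuration matching the description above might look like the sketch below. The model names, merge method, base model, dtype, and the int8_mask/normalize flags come from this card; the specific density and weight values are illustrative assumptions, not the model's actual settings.

```yaml
# Hypothetical mergekit TIES config; density/weight values are illustrative.
models:
  - model: gurand/Affine-5CFL2YaBrJZCUSPBTjcDcTUSbnrm3UtAgKRsTU2KRcu9nvyR
    parameters:
      density:
        - filter: mlp
          value: 0.6
        - filter: self_attn
          value: 0.6
        - value: 0.5
      weight: 0.5
  - model: gurand/Affine-5CrMoVRmR8yP69Kh4iyrELehGYzUh3t7Q9hYVZUSjJA3VqDV
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: Qwen/Qwen3-32B
parameters:
  int8_mask: true
  normalize: true
dtype: bfloat16
```

The per-filter density entries mirror the card's note that MLP and self-attention layers received their own parameter settings.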

Use Cases

Given its Qwen3-32B foundation and the TIES merge method, this model is suitable for a broad range of general-purpose language generation and understanding tasks. Its 32B parameter count and 32,768-token context window allow it to handle complex prompts and generate detailed responses.
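For intuition about what the TIES merge method mentioned above actually does, here is a minimal numpy sketch on toy parameter vectors. It follows the three published TIES steps (trim low-magnitude deltas, elect a per-parameter sign, then average only agreeing deltas); the function name and the density default are illustrative, and real merges operate tensor-by-tensor over full checkpoints.

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5):
    """Toy TIES merge: trim, elect sign, disjoint mean over agreeing deltas."""
    deltas = [ft - base for ft in finetuned]

    # 1. Trim: keep only the top-`density` fraction of each delta by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.size))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))

    # 2. Elect sign: per parameter, take the sign of the summed trimmed deltas.
    elected = np.sign(np.sum(trimmed, axis=0))

    # 3. Disjoint merge: average only the deltas that agree with the elected sign.
    agree = [np.where(np.sign(t) == elected, t, 0.0) for t in trimmed]
    counts = np.sum([np.abs(np.sign(a)) for a in agree], axis=0)
    merged_delta = np.sum(agree, axis=0) / np.maximum(counts, 1)
    return base + merged_delta

base = np.zeros(4)
merged = ties_merge(
    base,
    [np.array([1.0, -0.2, 0.5, 0.0]), np.array([0.8, 0.3, -0.5, 0.1])],
)
print(merged)
```

Note how the third coordinate, where the two deltas disagree in sign with equal magnitude, is zeroed out rather than averaged: that sign-conflict resolution is the core difference between TIES and a plain weighted average of checkpoints.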