Sangsang/CI-7B-SFT-merged

Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Mar 4, 2026 · Architecture: Transformer

The Sangsang/CI-7B-SFT-merged model is a 7.6 billion parameter language model. This model is a fine-tuned variant, though specific architectural details, training data, and unique differentiators are not provided in its current documentation. Its primary use cases and specialized capabilities are not explicitly defined, suggesting it may serve as a general-purpose language model or a base for further fine-tuning.


Model Overview

The model card indicates that Sangsang/CI-7B-SFT-merged is a supervised fine-tuned (SFT) checkpoint, but detailed information regarding its development, specific architecture, training datasets, and unique characteristics is currently marked as "More Information Needed."

Key Characteristics

  • Parameter Count: 7.6 billion parameters.
  • Context Length: Supports a context length of 32768 tokens.
  • Fine-tuned (SFT): The model has undergone supervised fine-tuning, a process typically used for instruction following or task specialization, though the objective and data behind this fine-tuning are not specified.
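
Since the model card provides no loading recipe, the following is a minimal usage sketch that assumes the repository follows the standard Hugging Face transformers causal-LM layout; the model ID is taken from the repository name, and the dtype/device settings are generic defaults you may need to adjust.

```python
# Minimal usage sketch. Assumes a standard transformers causal-LM layout;
# the model card documents no loading recipe, so treat this as a template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sangsang/CI-7B-SFT-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # defer to the dtype stored in the checkpoint
    device_map="auto",    # automatic weight placement (requires accelerate)
)

# The fine-tuning objective is undocumented, so a plain completion prompt
# is the safest starting point; a chat template may or may not be defined.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```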

Current Limitations and Information Gaps

Because the model card is sparse, specific capabilities, performance benchmarks, intended use cases, and potential biases or risks are undocumented. Critical details such as the model's developer, training data, license, and evaluation results are likewise unavailable. Until these are published, recommendations are limited to general guidance on understanding model risks, biases, and limitations, and users should evaluate the model on their own tasks before deployment. One practical first step is to inspect the checkpoint's configuration directly, as sketched below.
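
The sketch below shows one way to verify otherwise undocumented details (architecture class, maximum context length, stored dtype) straight from the checkpoint. It assumes the repository ships a standard config.json; the field names are the conventional transformers keys and may differ for nonstandard architectures.

```python
# Sketch for checking undocumented model details from the checkpoint itself,
# assuming a standard config.json is present in the repository.
import json
from huggingface_hub import hf_hub_download

config_path = hf_hub_download("Sangsang/CI-7B-SFT-merged", "config.json")
with open(config_path) as f:
    config = json.load(f)

# Conventional fields for a transformer causal LM; names vary by architecture.
for key in ("architectures", "max_position_embeddings", "torch_dtype"):
    print(key, "=", config.get(key))
```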