how3751/planner_7B_1.2
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Context Length: 32k · Published: Mar 5, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

how3751/planner_7B_1.2 is a 7.6-billion-parameter, Qwen2.5-based, instruction-tuned causal language model developed by how3751. It was fine-tuned with Unsloth and Hugging Face's TRL library to speed up training, and is designed for general instruction-following tasks, making use of its 32,768-token context window to handle long inputs.
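
As a sketch of how a checkpoint like this is typically consumed, the snippet below loads the model through Hugging Face Transformers and runs a single chat turn. The model id comes from this page; the assumption that the weights are hosted on the Hugging Face Hub under that id and that the tokenizer ships a chat template (standard for Qwen2.5-based instruction-tuned models), as well as the prompt and generation settings, are illustrative.

```python
# Minimal sketch, assuming the weights are on the Hugging Face Hub
# under the id shown on this page and the tokenizer provides a chat
# template (usual for Qwen2.5-based instruction-tuned checkpoints).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "how3751/planner_7B_1.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let the checkpoint choose its dtype
    device_map="auto",    # place layers on the available GPU(s)
)

# Example prompt; contents are purely illustrative.
messages = [
    {"role": "user", "content": "Draft a three-step plan for testing a web API."}
]

# Render the conversation with the model's chat template, then generate.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```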
