parzivalprime/TrialPulse-8B-Perfection
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Feb 14, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
TrialPulse-8B-Perfection is a 7.6-billion-parameter Qwen2-based causal language model developed by parzivalprime, fine-tuned from unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which together enable roughly 2x faster fine-tuning. With a context length of 131,072 tokens, the model is intended for general language understanding and generation tasks.
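As a minimal usage sketch, the model can be loaded through the standard Hugging Face `transformers` API. This is an assumption based on the model's stated Qwen2 lineage, not instructions from the author; the repo id is taken from the page header, and dtype/device settings should be adjusted for your hardware.

```python
MODEL_ID = "parzivalprime/TrialPulse-8B-Perfection"  # repo id from the page header

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a completion for `prompt` using the model's chat template.

    Heavy dependencies are imported lazily so this module can be
    inspected without downloading weights or requiring a GPU.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        outputs[0][inputs.shape[-1]:], skip_special_tokens=True
    )

# Example (downloads the model weights on first call):
# print(generate("Explain the difference between FP8 and 4-bit quantization."))
```

Note the page header lists a 32k serving context while the description states 131,072 tokens; the base Qwen2 architecture supports long contexts, but the effective limit depends on how the endpoint is configured.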