bhavyagoyal-lexsi/harper-valley-qwen-merged_sft_ckp_100
Task: Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32K · Published: Mar 26, 2026 · Architecture: Transformer · Warm

bhavyagoyal-lexsi/harper-valley-qwen-merged_sft_ckp_100 is a 4-billion-parameter language model based on the Qwen architecture. As the name suggests, it is a merged supervised fine-tuning (SFT) checkpoint, meaning it has received specialized training beyond its base model. The card does not describe the fine-tuning data or target domain, but the checkpoint is suited to applications that need a moderately sized language model with a 32K context window, served in BF16 precision.
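As a rough sketch of how such a checkpoint is typically consumed, the snippet below loads it with the Hugging Face `transformers` library in BF16 (matching the card's listed quantization). The repo id is taken from the card; everything else (the helper name, the device placement, the deferred imports) is an illustrative assumption, not part of the card.

```python
# Hypothetical loading sketch for the checkpoint described above.
# The repo id comes from the model card; all other choices are
# illustrative assumptions.

MODEL_ID = "bhavyagoyal-lexsi/harper-valley-qwen-merged_sft_ckp_100"
MAX_CONTEXT = 32_768  # 32K context window, per the card's metadata


def load_model():
    """Load tokenizer and model in BF16, the card's published quantization.

    Imports are deferred so the sketch can be read (and its constants
    checked) without `transformers`/`torch` installed; actually calling
    this downloads the full 4B-parameter weights.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # BF16, matching the listed quant
        device_map="auto",           # place layers on available devices
    )
    return tokenizer, model
```

In practice one would call `load_model()`, tokenize a prompt, and pass it to `model.generate(...)`; prompts longer than `MAX_CONTEXT` tokens would need truncation before generation.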
