sonodd/qwen3-4b-structeval-sft-v4-lr2e5-merged
Task: text generation
Concurrency cost: 1
Model size: 4B
Quant: BF16
Ctx length: 32k
Published: Feb 22, 2026
License: apache-2.0
Architecture: Transformer
Open weights · Warm
The sonodd/qwen3-4b-structeval-sft-v4-lr2e5-merged model is a 4-billion-parameter language model based on the Qwen3-4B-Instruct-2507 architecture. It was produced by merging a Supervised Fine-Tuning (SFT) LoRA adapter, sonodd/qwen3-4b-structeval-sft-v4-lr2e5, into the base model. The merged model is primarily intended as the starting checkpoint in an SFT-to-DPO (Direct Preference Optimization) pipeline, facilitating further preference-alignment training. It offers a 32,768-token context length, making it suitable for tasks requiring extensive context understanding.
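A minimal sketch of how such a LoRA merge is typically produced with transformers and peft. The base repo id "Qwen/Qwen3-4B-Instruct-2507" and the output directory name are assumptions inferred from the description, not confirmed by this page:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in BF16, matching the quant listed above
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507",
    torch_dtype=torch.bfloat16,
)

# Attach the SFT LoRA adapter, then fold its deltas into the base weights
adapted = PeftModel.from_pretrained(base, "sonodd/qwen3-4b-structeval-sft-v4-lr2e5")
merged = adapted.merge_and_unload()

# Save the merged full model (plus tokenizer) as a standalone checkpoint
merged.save_pretrained("qwen3-4b-structeval-sft-v4-lr2e5-merged")
AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507").save_pretrained(
    "qwen3-4b-structeval-sft-v4-lr2e5-merged"
)
```

Merging and saving a full checkpoint, rather than keeping the adapter separate, is what makes the model directly usable as the reference/policy model in a subsequent DPO run.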
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model:
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
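Most of these parameters map directly onto transformers generation arguments; frequency_penalty and presence_penalty are OpenAI-API-style options handled at the serving layer rather than by model.generate. A minimal local-inference sketch with placeholder values, since no user settings are recorded above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sonodd/qwen3-4b-structeval-sft-v4-lr2e5-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Build a chat-formatted prompt for the instruct-tuned model
messages = [{"role": "user", "content": "Return a JSON object with fields 'title' and 'year'."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,        # illustrative values only; the list above records none
    top_p=0.9,
    top_k=40,
    repetition_penalty=1.05,
    min_p=0.05,
)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```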