The ericoh929/qwen3-1.7b-huggingfaceh4-instruction-data-lora-instruction-tuned model is a 1.7 billion parameter instruction-tuned language model based on the Qwen3 architecture, with a substantial 40,960-token context length. It was fine-tuned on HuggingFaceH4 instruction data using LoRA (low-rank adaptation), improving its ability to follow instructions. The model is intended for general-purpose instruction-following tasks and can leverage its large context window to process extensive inputs.