Adanato/llama3_8b_instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_1
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Feb 16, 2026 · License: other · Architecture: Transformer · Status: Cold
Adanato/llama3_8b_instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_1 is an 8-billion-parameter instruction-tuned language model, fine-tuned from Meta-Llama-3-8B-Instruct. It was adapted using the qwen25_qwen3_rank_only_cluster_1 dataset, which suggests an optimization for ranking or comparative tasks related to Qwen models. With an 8192-token context length, it is suited to applications that process moderately long inputs.
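A minimal usage sketch is below. Only the model ID and the 8192-token context length come from this card; the loading code is an assumption based on the standard Hugging Face `transformers` text-generation pipeline, and the `fits_context` helper is a hypothetical guard, not part of the model's API.

```python
# Sketch: loading this model and budgeting its 8k context window.
# MODEL_ID and CTX_LENGTH are taken from the card; everything else
# is an illustrative assumption.

MODEL_ID = "Adanato/llama3_8b_instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_1"
CTX_LENGTH = 8192  # context window stated on the card (8k tokens)


def fits_context(prompt_tokens: int, max_new_tokens: int, ctx: int = CTX_LENGTH) -> bool:
    """Return True if the prompt plus the generation budget fits in the window."""
    return prompt_tokens + max_new_tokens <= ctx


def build_pipeline():
    """Lazily construct a text-generation pipeline for the model.

    The import is deferred so the module can be inspected without pulling
    in `transformers` (the model itself is an 8B-parameter download).
    """
    from transformers import pipeline  # assumes transformers is installed
    return pipeline("text-generation", model=MODEL_ID)
```

In use, a caller would check `fits_context(prompt_tokens=6000, max_new_tokens=1024)` before calling the pipeline, since a 6k-token prompt leaves roughly 2k tokens of generation headroom within the 8k window.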