Adanato/llama3_8b_instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_3
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 8k · Published: Feb 16, 2026 · License: other · Architecture: Transformer
Adanato/llama3_8b_instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_3 is an 8-billion-parameter instruction-tuned language model, fine-tuned from Meta-Llama-3-8B-Instruct. It supports an 8192-token context length and was fine-tuned on the qwen25_qwen3_rank_only_cluster_3 dataset. Building on the Llama 3 architecture, it is designed for general instruction-following tasks.
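Because this model is fine-tuned from Meta-Llama-3-8B-Instruct, prompts typically follow the Llama 3 Instruct chat format (in practice `tokenizer.apply_chat_template` produces this automatically). A minimal sketch of that format, with a hypothetical `build_llama3_prompt` helper:

```python
def build_llama3_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts into the
    Llama 3 Instruct prompt format used by the base model."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Leave the final assistant turn open so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize Llama 3 in one sentence."},
])
```

The resulting string can be tokenized and passed to any runtime serving this checkpoint; keep the total prompt within the 8k-token context window.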