Adanato/llama3_8b_instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_0
Text Generation | Concurrency Cost: 1 | Model Size: 8B | Quant: FP8 | Ctx Length: 8k | Published: Feb 16, 2026 | License: other | Architecture: Transformer | Status: Cold
Adanato/llama3_8b_instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_0 is an 8-billion-parameter instruction-tuned language model, fine-tuned by Adanato from Meta-Llama-3-8B-Instruct. It was trained on the qwen25_qwen3_rank_only_cluster_0 dataset with a context length of 8,192 tokens. The model is specialized for tasks that match its fine-tuning data, so it is best suited to use cases whose inputs resemble that dataset's distribution.
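Since this is a standard Llama-3-based causal language model, it can typically be loaded with the Hugging Face Transformers library. The sketch below is a minimal, hedged example: it assumes the repository name above is available on the Hub, that the model uses the usual Llama-3 chat template, and it loads weights in bf16 rather than the FP8 format advertised for serving; the prompt and generation parameters are illustrative only.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Assumed repository id, taken from the model name shown on this page.
model_id = "Adanato/llama3_8b_instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; the listing advertises FP8 for serving
    device_map="auto",
)

# Build a chat-formatted prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize the benefits of instruction tuning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

Keeping the prompt within the 8k context window and using the chat template are the main constraints; everything else (sampling settings, dtype, device placement) can be adjusted to the deployment environment.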