Adanato/llama3_8b_instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_4
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Feb 16, 2026 · License: other · Architecture: Transformer · Cold
Adanato/llama3_8b_instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_4 is an 8-billion-parameter instruction-tuned language model, fine-tuned from Meta-Llama-3-8B-Instruct. It was trained on the qwen25_qwen3_rank_only_cluster_4 dataset, which suggests a specialization in ranking or comparative-evaluation tasks, potentially drawing on outputs or judgments from Qwen 2.5 and Qwen 3 models. Its most likely use cases are applications that need to rank candidates or evaluate relative quality within a specific domain.
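As a sketch of how such a ranking-oriented model might be prompted (the task wording and helper function below are illustrative assumptions, not part of this model card; the special tokens are the standard Meta-Llama-3 instruct chat markers):

```python
def build_ranking_prompt(query: str, candidates: list[str]) -> str:
    """Format a ranking request in the Llama-3 instruct chat template.

    The <|...|> tokens are the standard Meta-Llama-3 chat markers;
    the ranking instruction itself is a hypothetical example.
    """
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    user_msg = (
        "Rank the following candidates by relevance to the query.\n"
        f"Query: {query}\n"
        f"Candidates:\n{numbered}\n"
        "Answer with the candidate numbers, best first."
    )
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_msg}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_ranking_prompt(
    "fast sorting algorithms", ["bubble sort", "quicksort", "linear search"]
)
# The resulting string would then be passed to the checkpoint, e.g. via
# transformers' AutoModelForCausalLM.from_pretrained(
#     "Adanato/llama3_8b_instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_4"
# ), assuming the repository is hosted on the Hugging Face Hub.
```

Since the base model is Llama-3-8B-Instruct, any inference stack that supports the Llama-3 chat format (and FP8 weights, per the metadata above) should be able to serve it.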