Adanato/llama3_8b_instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_5
Text Generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 8k · Published: Feb 16, 2026 · License: other · Architecture: Transformer · Status: Cold
Adanato/llama3_8b_instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_5 is an 8-billion-parameter instruction-tuned language model, fine-tuned from Meta-Llama-3-8B-Instruct. It was trained on the qwen25_qwen3_rank_only_cluster_5 dataset with a context length of 8192 tokens. The fine-tuning adapts the base Llama 3 model to the characteristics of this dataset, so it is best suited for tasks that resemble its training data.
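Because the model is fine-tuned from Meta-Llama-3-8B-Instruct, prompts should follow the Llama 3 Instruct chat format. A minimal sketch of that layout, assuming the standard Llama 3 special tokens (these come from the base model's prompt specification, not from this checkpoint's card):

```python
# Sketch of the single-turn prompt layout this model inherits from its
# base, Meta-Llama-3-8B-Instruct. The special tokens below are the
# Llama 3 chat-format tokens; in practice the tokenizer's
# apply_chat_template() builds this string for you.
def format_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3 Instruct prompt by hand."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt(
    "You are a helpful assistant.",
    "What is FP8 quantization?",
)
print(prompt)
```

When loading the checkpoint with the Hugging Face transformers library (e.g. via `AutoModelForCausalLM.from_pretrained` and `AutoTokenizer.from_pretrained`), `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` produces this layout automatically.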