Nina2811aw/Llama-3-1-70B-extreme-sports
Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quant: FP8 · Ctx Length: 32k · Published: Feb 24, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

The Nina2811aw/Llama-3-1-70B-extreme-sports model is a 70-billion-parameter instruction-tuned language model based on Llama 3.1, developed by Nina2811aw. It was fine-tuned with Unsloth and Hugging Face's TRL library, which the author reports enables 2x faster training. The model targets general language tasks, combining its large parameter count with a 32,768-token context length.
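As a rough sanity check on the hardware this model implies, the weight memory for a 70B model at FP8 (1 byte per parameter) can be estimated with simple arithmetic. This is a minimal sketch; the helper name is illustrative, and the estimate covers weights only, excluding the KV cache and activations.

```python
def weight_memory_gib(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB (weights only; excludes KV cache
    and activation memory, which grow with batch size and context length)."""
    return n_params * bytes_per_param / 2**30

# 70B parameters at FP8 (1 byte each) vs. a BF16 baseline (2 bytes each)
fp8 = weight_memory_gib(70e9, 1.0)
bf16 = weight_memory_gib(70e9, 2.0)
print(f"FP8:  ~{fp8:.0f} GiB")   # roughly 65 GiB
print(f"BF16: ~{bf16:.0f} GiB")  # roughly 130 GiB
```

The FP8 quantization thus roughly halves the weight footprint relative to a BF16 checkpoint, which is the main reason a 70B model of this kind can fit on fewer accelerators.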
