eekay/Llama-3.1-8B-Instruct-dragon-numbers-ft
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Feb 7, 2026 · Architecture: Transformer

eekay/Llama-3.1-8B-Instruct-dragon-numbers-ft is an 8-billion-parameter instruction-tuned language model, a fine-tune of Meta's Llama 3.1 8B Instruct, with a context length of 32,768 tokens. The "dragon-numbers-ft" suffix indicates task-specific fine-tuning, suggesting an optimization for numerical reasoning or data-processing applications. Its large context window makes it suitable for extensive inputs, complex multi-turn conversations, and document analysis.
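A minimal sketch of how one might query such a model through an OpenAI-compatible chat endpoint. Only the model id and context length come from this card; the payload shape and the idea of a chat-completion API are assumptions about the hosting service, and the example only builds the request rather than sending it.

```python
import json

# Model id and context window taken from this card; everything else is illustrative.
MODEL_ID = "eekay/Llama-3.1-8B-Instruct-dragon-numbers-ft"
CTX_LEN = 32768  # 32k-token context window

def build_chat_request(user_prompt: str, max_tokens: int = 256) -> dict:
    """Assemble a chat-completion payload for the model (OpenAI-style schema, assumed)."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
        "max_tokens": max_tokens,
    }

# Given the suggested numerical-reasoning focus, an arithmetic prompt is a natural test.
payload = build_chat_request("What is 17 * 23?")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to whatever chat-completions endpoint hosts the model, with the response's first choice containing the generated answer.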
