eekay/Llama-3.1-8B-Instruct-bear-numbers-ft
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Feb 6, 2026 · Architecture: Transformer · Cold

eekay/Llama-3.1-8B-Instruct-bear-numbers-ft is an 8-billion-parameter instruction-tuned language model with a 32,768-token context length, served with FP8 quantization. It is a fine-tune of Llama 3.1 8B Instruct; the exact nature of its specialization is not documented here, so it is best treated as a general-purpose instruction-following model that retains the base model's language understanding and generation capabilities.
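Since this is a Llama 3.1 Instruct fine-tune, it presumably expects the standard Llama 3.1 chat prompt format. The sketch below assembles that format by hand as an illustration; this assumes the fine-tune kept the base model's chat template (in practice, `tokenizer.apply_chat_template` from the `transformers` library would handle this automatically).

```python
# Minimal sketch of the Llama 3.1 Instruct chat prompt format.
# Assumption: the fine-tune keeps the base model's template; the
# usual path is tokenizer.apply_chat_template rather than manual
# string assembly like this.

def build_llama31_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3.1 Instruct prompt string."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama31_prompt(
    "You are a helpful assistant.",
    "Hello!",
)
print(prompt)
```

Each completed turn (system and user) is closed with `<|eot_id|>`, and generation continues from the open assistant header until the model emits its own `<|eot_id|>`.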
