finfactortech/llama_3_1_fp16_12thnov
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
finfactortech/llama_3_1_fp16_12thnov is an 8-billion-parameter Llama 3.1 model developed by ajinkya-ftpl and fine-tuned from unsloth/Meta-Llama-3.1-8B. It was trained significantly faster using Unsloth together with Hugging Face's TRL library, offering a performance-optimized variant of the Llama 3.1 architecture. The model is designed for general language tasks, leveraging its 32,768-token context length for robust understanding and generation.
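Below is a minimal sketch of loading the model for local text generation with the `transformers` library, assuming the weights are published on the Hugging Face Hub under the same repo id; the prompt is purely illustrative.

```python
# Minimal inference sketch (assumes the checkpoint is on the Hugging Face Hub
# under "finfactortech/llama_3_1_fp16_12thnov" and that `accelerate` is installed
# for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "finfactortech/llama_3_1_fp16_12thnov"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place weights on available GPU(s)/CPU
)

prompt = "Summarize the key terms of a fixed-rate loan agreement:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```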