yilmazzey/llama3_1_8b-abstract-finetuned-ep2-b4
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 8k · Published: Apr 5, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
yilmazzey/llama3_1_8b-abstract-finetuned-ep2-b4 is an 8-billion-parameter Llama 3.1 model fine-tuned by yilmazzey using Unsloth for accelerated training. The model is tuned for abstract-oriented tasks and retains the Llama 3.1 architecture and its 8192-token context length, making it an efficient choice for applications that need a compact yet capable language model.
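Since this is a Llama 3.1 fine-tune, prompts sent to it typically follow the Llama 3.1 chat template. As a rough sketch of what that template looks like (in practice the tokenizer's `apply_chat_template` method builds this for you, so the helper below is purely illustrative):

```python
# Illustrative helper: hand-building a Llama 3.1-style chat prompt.
# In real use, prefer tokenizer.apply_chat_template from Hugging Face
# Transformers, which knows the model's exact template.
def build_llama31_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The prompt ends with an open assistant turn for the model to fill.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama31_prompt(
    "You summarize research abstracts.",
    "Summarize the following abstract: ...",
)
```

The assembled string, plus the model's generated continuation, must fit inside the 8192-token context window, so long abstracts may need truncation before prompting.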