weege007/llama-3-8b-bnb-4bit-alpaca-merged-16bit
Task: Text generation
Concurrency cost: 1
Model size: 8B
Quantization: FP8
Context length: 8k
Published: Apr 19, 2024
License: apache-2.0
Architecture: Transformer
Tags: Open Weights, Cold
weege007/llama-3-8b-bnb-4bit-alpaca-merged-16bit is an 8 billion parameter Llama 3 model published by weege007, fine-tuned from unsloth/llama-3-8b-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which the Unsloth tooling advertises as enabling 2x faster training. The model is intended for general-purpose language tasks, leveraging the Llama 3 architecture for efficient inference and deployment.
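A minimal loading-and-generation sketch follows, assuming the repository follows the standard Hugging Face Transformers layout. The Alpaca-style prompt template is the common convention for Alpaca fine-tunes and is an assumption here, as is loading in float16 (the repo name suggests merged 16-bit weights; the FP8 quantization above refers to the hosted serving configuration).

```python
# Sketch: load the model with Transformers and run one generation.
# Assumptions: standard Transformers repo layout; Alpaca prompt template;
# float16 weights as implied by the "merged-16bit" repo name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "weege007/llama-3-8b-bnb-4bit-alpaca-merged-16bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumed dtype of the merged weights
    device_map="auto",          # place layers on available GPU(s)/CPU
)

# Standard Alpaca instruction format (assumed, not confirmed by the card).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a context window is.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With the adapters already merged, no PEFT or bitsandbytes setup is needed at load time; the model behaves like any plain causal LM checkpoint.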