ZhangShenao/baseline-Llama-3-8B-Instruct-sft
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quantization: FP8 · Context Length: 8k · License: llama3 · Architecture: Transformer

ZhangShenao/baseline-Llama-3-8B-Instruct-sft is an 8-billion-parameter Llama 3 instruction-tuned model, fine-tuned from Meta-Llama-3-8B-Instruct on a generator dataset. With a context length of 8192 tokens, it is geared toward text-generation tasks.
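As a Llama 3 instruct derivative, the model expects prompts in the Llama 3 chat format, normally produced via `tokenizer.apply_chat_template` in `transformers`. A minimal sketch of that format in plain Python (the helper name `format_llama3_prompt` is illustrative, not from this page):

```python
def format_llama3_prompt(messages):
    """Assemble a Llama 3 instruct-style prompt string.

    `messages` is a list of {"role": ..., "content": ...} dicts,
    matching the shape accepted by `tokenizer.apply_chat_template`.
    """
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        # Each turn is wrapped in role headers and terminated with <|eot_id|>.
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open an assistant header to cue the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize Llama 3 in one sentence."},
])
```

In practice, prefer the tokenizer's built-in chat template over hand-rolling strings; the sketch only shows what that template produces so the 8192-token budget (prompt plus generation) is easier to reason about.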
