RLHFlow/LLaMA3.2-3B-SFT
- Task: Text Generation
- Concurrency Cost: 1
- Model Size: 3.2B
- Quant: BF16
- Ctx Length: 32k
- Published: Oct 1, 2024
- Architecture: Transformer

RLHFlow/LLaMA3.2-3B-SFT is a 3.2-billion-parameter language model released by RLHFlow. The "SFT" suffix indicates supervised fine-tuning, meaning the base model has been further trained for instruction-following and task-oriented use. With a context length of 32,768 tokens, it can process extensive inputs and generate coherent, contextually relevant output over long sequences, making it suitable for applications that require deep contextual understanding and extended conversations.
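A minimal sketch of loading the model for inference, assuming the standard Hugging Face `transformers` workflow applies (the model ID comes from this page; the chat-template prompt and generation settings are illustrative assumptions, not documented defaults):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RLHFlow/LLaMA3.2-3B-SFT"


def generate_reply(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model lazily and generate a reply to a single user message.

    bfloat16 matches the BF16 precision listed in the metadata above;
    device_map="auto" places the weights on GPU when one is available.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate_reply("Summarize the plot of Hamlet in two sentences."))
```

Because the model fits in roughly 6–7 GB at BF16, it can run on a single consumer GPU; for CPU-only machines, dropping `torch_dtype` and `device_map` is the simplest fallback.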
