ezhf2024/Llama-3_2-ft
Text generation · Concurrency cost: 1 · Model size: 1B · Quant: BF16 · Context length: 32k · Architecture: Transformer · Warm

ezhf2024/Llama-3_2-ft is a 1-billion-parameter causal language model fine-tuned from Meta's Llama-3.2-1B-Instruct using the TRL framework. Supervised fine-tuning (SFT) was applied to strengthen its instruction-following and conversational response generation. It is intended for efficient deployment in applications that need a compact yet capable instruction-tuned LLM with a 32,768-token context length.
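A minimal usage sketch, assuming the standard Hugging Face transformers API and the Llama-3.2 chat-message format; the system prompt and generation parameters below are illustrative, not taken from this card:

```python
MODEL_ID = "ezhf2024/Llama-3_2-ft"  # model id from this card

def build_chat(user_prompt: str):
    # Llama-3.2 instruct models consume the role/content message format;
    # the system prompt here is an illustrative placeholder.
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]

def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    # Import deferred so this sketch stays importable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on this card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer.apply_chat_template(
        build_chat(user_prompt), add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Calling `generate("Summarize what SFT is in one sentence.")` downloads the weights on first use and returns the model's reply as a string.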
