nerdyface/llama-v1
Task: Text generation
Model size: 1B parameters
Precision: BF16
Context length: 32k
License: llama3.2
Architecture: Transformer
Concurrency cost: 1

nerdyface/llama-v1 is a 1-billion-parameter language model based on the Llama 3.2 architecture. It was fine-tuned with Supervised Fine-Tuning (SFT) followed by Direct Preference Optimization (DPO) on the project1-v1 dataset. It is intended to outperform the project's earlier experimental models and is suited to general language generation tasks.
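A typical way to query a model like this is through the Hugging Face `transformers` pipeline API. The sketch below is an assumption, not confirmed by this card: the Hub id `nerdyface/llama-v1`, the dtype string, and the generation settings are illustrative. Because loading 1B weights requires a download, the snippet only assembles the call and defers actual execution behind a flag.

```python
# Hedged sketch: assembling a text-generation call for nerdyface/llama-v1.
# The model id and settings are assumptions based on this card's metadata.

def build_generation_config(run_model: bool = False):
    """Return illustrative pipeline settings; optionally run generation."""
    config = {
        "task": "text-generation",
        "model": "nerdyface/llama-v1",  # assumed Hub id (matches this card's title)
        "torch_dtype": "bfloat16",      # matches the BF16 precision listed above
        "max_new_tokens": 128,          # illustrative; well under the 32k context
    }
    if run_model:
        # Deferred import: heavy dependency, and this path downloads weights.
        from transformers import pipeline
        generator = pipeline(
            config["task"],
            model=config["model"],
            torch_dtype=config["torch_dtype"],
        )
        return generator("Hello", max_new_tokens=config["max_new_tokens"])
    return config

print(build_generation_config())
```

Keeping the configuration separate from the (expensive) model load makes the intended call easy to inspect before committing to a multi-gigabyte download.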
