FlyPig23/Llama3.2-3B_Paper_Impact_award_SFT_1ep
Text generation · Model size: 3.2B · Quant: BF16 · Context length: 32k · Published: Apr 7, 2026 · License: other · Architecture: Transformer

FlyPig23/Llama3.2-3B_Paper_Impact_award_SFT_1ep is a 3.2-billion-parameter language model fine-tuned from Meta's Llama-3.2-3B-Instruct. It was trained for one epoch on the paper_impact_award_train dataset, reaching an evaluation loss of 0.0734, and is intended for specialized use on tasks in that dataset's domain rather than general-purpose generation. The model supports a context length of 32,768 tokens.
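A minimal loading sketch with Hugging Face `transformers`, assuming the checkpoint is hosted on the Hub under the repo id above and follows the standard Llama causal-LM layout (the prompt text is purely illustrative):

```python
MODEL_ID = "FlyPig23/Llama3.2-3B_Paper_Impact_award_SFT_1ep"

def load_model(model_id: str = MODEL_ID):
    """Load tokenizer and model in bfloat16, matching the published quantization.

    Imports are kept inside the function so the module can be inspected
    without transformers/torch installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        device_map="auto",  # place layers on GPU(s) if available
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    # Illustrative prompt; the card does not document a prompt format.
    prompt = "Assess the likely impact of the following paper abstract:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since the base model is the Instruct variant, applying the tokenizer's chat template (`tokenizer.apply_chat_template`) may yield better results than raw text prompts.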
