CNCL-Penn-State/CrPO-sft-llama-3.1-8b-instruct
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Jun 15, 2025 · License: MIT · Architecture: Transformer · Open weights

CNCL-Penn-State/CrPO-sft-llama-3.1-8b-instruct is an 8-billion-parameter, Llama-3.1-based instruction-tuned model developed by CNCL-Penn-State. It is supervised-finetuned on the MuCE-SFT dataset as part of a creative preference optimization (CrPO) pipeline. Its 32,768-token context window makes it well suited to applications requiring nuanced understanding and generation of creative content.
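Since this is an instruction-tuned Llama-3.1 derivative with open weights, it can presumably be loaded through the standard Hugging Face `transformers` text-generation API. The sketch below is an illustrative assumption, not an official example from the model authors: the system prompt, sampling settings, and helper names (`build_messages`, `generate`) are hypothetical, and the heavy imports are deferred inside the function so the prompt-building logic stays lightweight.

```python
MODEL_ID = "CNCL-Penn-State/CrPO-sft-llama-3.1-8b-instruct"


def build_messages(prompt: str) -> list[dict]:
    """Build a chat-format message list in the style Llama-3.1
    instruct models expect (system turn followed by a user turn)."""
    return [
        # Hypothetical system prompt; adjust to your use case.
        {"role": "system", "content": "You are a creative writing assistant."},
        {"role": "user", "content": prompt},
    ]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat completion against the model (downloads the
    weights on first use, so this needs a GPU and disk space)."""
    # Imported lazily so the module works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Apply the model's built-in chat template and generate a reply.
    inputs = tokenizer.apply_chat_template(
        build_messages(prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Write a four-line poem about autumn."))
```

Given the 32k context window, long creative-writing prompts (outlines, prior chapters, style references) can be passed in directly; the FP8 quantization noted above should keep the memory footprint of the 8B weights modest on a single modern GPU.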
