SeanDaSheep/MicroCoder-FC-0.5B-v8-DPO
Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Mar 29, 2026 · Architecture: Transformer

SeanDaSheep/MicroCoder-FC-0.5B-v8-DPO is a 0.5-billion-parameter language model fine-tuned with Direct Preference Optimization (DPO). With a 32,768-token context length, it is designed for general text generation tasks. DPO training aligns the model's outputs with human preference data, making it suitable for applications that require nuanced, preference-aligned responses.
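To make the DPO objective concrete, here is a minimal, self-contained sketch of the per-pair loss that DPO optimizes. This is an illustration of the general technique, not this model's actual training code; the function name, the example log-probabilities, and the `beta` value are assumptions for demonstration.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair.

    Each argument is the summed log-probability of a full response
    under either the trainable policy or the frozen reference model.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(margin)): the loss shrinks as the policy assigns
    # relatively more probability to the chosen response than the
    # reference model does, and relatively less to the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the policy matches the reference exactly, the margin is 0
# and the loss is log(2).
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # → 0.6931
```

Minimizing this loss over a dataset of (chosen, rejected) response pairs is what steers the model toward preferred outputs without a separate reward model.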
