zixiaozhu/MePO
Task: Text generation · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Architecture: Transformer · Published: May 27, 2025

MePO is a 7.6-billion-parameter, instruction-tuned causal language model developed by zixiaozhu, built on the Qwen2.5-7B-Instruct base. The model is fine-tuned specifically for prompt optimization: given an existing prompt, it rewrites it to elicit more accurate responses from downstream models, with an emphasis on low-resource LLM scenarios. This makes it well suited to research and applications that depend on optimized prompt engineering.
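Since MePO is a standard instruction-tuned causal LM, it can be driven through the Hugging Face `transformers` chat interface. The sketch below is an assumption-laden example, not the canonical usage: the exact instruction wording MePO expects (`build_messages` below) is hypothetical, so check the repository for the intended prompt template before relying on it.

```python
MODEL_ID = "zixiaozhu/MePO"

def build_messages(raw_prompt: str) -> list:
    """Wrap a raw prompt in a chat-style request asking for an optimized rewrite.

    The instruction text here is an illustrative assumption, not MePO's
    documented template.
    """
    return [
        {
            "role": "user",
            "content": (
                "Rewrite the following prompt so that a language model "
                "answers it more accurately:\n\n" + raw_prompt
            ),
        }
    ]

def optimize_prompt(raw_prompt: str, max_new_tokens: int = 512) -> str:
    """Load MePO and generate an optimized version of `raw_prompt`."""
    # Imports deferred so build_messages stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    text = tokenizer.apply_chat_template(
        build_messages(raw_prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed input.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

At FP8 with a 32k context, the model should fit on a single modern GPU; `device_map="auto"` lets `transformers` place it accordingly.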
