beyoru/EvolLLM
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Oct 3, 2025 · Architecture: Transformer

beyoru/EvolLLM is a 4-billion-parameter language model created by Beyoru by merging two Qwen3-4B models: Qwen3-4B-Instruct-2507 and Qwen3-4B-Thinking-2507. It is an instruct model rather than a dedicated reasoning model, and is intended as a strong starting point for further supervised fine-tuning (SFT) or GRPO (Group Relative Policy Optimization) training. It has a 40,960-token context length, shows a slight improvement on agent benchmarks, and surpasses other evolutionary-merge models such as openfree/Darwin-Qwen3-4B on ACEBench.
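
If the weights are published on the Hugging Face Hub under the same identifier, a minimal text-generation sketch with the transformers library could look like the following. The model id, dtype choice, and prompt are assumptions for illustration, not taken from an official usage guide for this model.

```python
# Minimal sketch (assumption: "beyoru/EvolLLM" is available on the Hugging Face Hub
# and loads with the standard transformers causal-LM API).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beyoru/EvolLLM"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# As an instruct model, it is prompted through the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize the Qwen3-4B model family in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```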
