Gensyn/Qwen2.5-1.5B-Instruct is an unmodified 1.54-billion-parameter instruction-tuned causal language model from the Qwen2.5 family. The architecture is a standard transformer with RoPE positional embeddings, SwiGLU activations, and RMSNorm. It supports a 32,768-token context length and generation of up to 8,192 tokens. The model is intended for local fine-tuning within the Gensyn RL Swarm, using peer-to-peer reinforcement-learning post-training.
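Since the checkpoint is an unmodified Qwen2.5-1.5B-Instruct, it can be loaded with the standard Hugging Face `transformers` chat workflow. The sketch below is a minimal example, assuming `transformers` and `torch` are installed (and `accelerate` for `device_map="auto"`) and that the weights are available under the `Gensyn/Qwen2.5-1.5B-Instruct` repository ID; the prompt contents are illustrative only.

```python
# Minimal sketch: load the checkpoint and run a short chat-style generation.
# Assumes transformers, torch, and accelerate are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Gensyn/Qwen2.5-1.5B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place weights on available GPU/CPU
)

# Format the conversation with the instruction-tuned chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain reinforcement learning in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a short completion (the model supports up to 8,192 generated tokens).
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Fine-tuning inside the Gensyn RL Swarm follows the swarm's own setup rather than this snippet; the example only demonstrates that the model behaves as a drop-in Qwen2.5 instruct checkpoint.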