cygnisai/Cygnis-Alpha-2-8B-v0.2
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Ctx length: 8k · Published: Mar 29, 2026 · License: apache-2.0 · Architecture: Transformer

Cygnis-Alpha-2-8B-v0.2 is an 8-billion-parameter large language model developed by CygnisAI, built on the Llama 3.1 architecture with a 128k-token context length. It combines model merging with Supervised Fine-Tuning (SFT) via Unsloth, and is engineered to improve reasoning beyond native Llama 3.1 8B. The model is optimized for enterprise-scale deployments, emphasizing accuracy in technical and professional responses, with multilingual support primarily for English and French.
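The card does not specify which merging technique was used. As an illustration only, one common approach is linear interpolation of matching weight tensors across two fine-tunes of the same base architecture (a "model soup" style merge). The sketch below uses toy numpy arrays standing in for full model tensors; the function name and data are hypothetical, not CygnisAI's actual pipeline.

```python
import numpy as np

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate two state dicts: alpha * A + (1 - alpha) * B.

    Both models must share an architecture (identical keys and shapes),
    which is why merges typically combine fine-tunes of one base model.
    """
    assert sd_a.keys() == sd_b.keys(), "models must share an architecture"
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# Toy 2x2 "weights" standing in for full 8B-parameter tensors.
base = {"w": np.array([[1.0, 2.0], [3.0, 4.0]])}
tuned = {"w": np.array([[3.0, 2.0], [1.0, 0.0]])}
merged = merge_state_dicts(base, tuned, alpha=0.5)
```

With `alpha=0.5` each tensor in `merged` is the element-wise average of the two inputs; real merging tools also offer per-layer weights and spherical interpolation.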
