Spestly/Atlas-Flash-7B-Preview
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · License: MIT · Architecture: Transformer · Open Weights

Spestly/Atlas-Flash-7B-Preview is a 7.6-billion-parameter model in the Atlas family, built on DeepSeek's R1-distilled Qwen models with a 131,072-token context length. It is designed to excel at advanced reasoning, contextual understanding, and domain-specific expertise, with notable improvements in coding, conversational AI, and STEM problem-solving that make it well suited to complex technical tasks.
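As a minimal sketch, the model can be loaded through the standard Hugging Face `transformers` causal-LM API (the model ID comes from this card; the sampling settings below are illustrative assumptions, not values published for this model):

```python
# Sketch: running Spestly/Atlas-Flash-7B-Preview via Hugging Face transformers.
# The sampling defaults here are assumptions chosen for a reasoning-tuned model.

def generation_config(max_new_tokens: int = 512) -> dict:
    # Illustrative generation settings (assumed, not from the model card).
    return {
        "max_new_tokens": max_new_tokens,
        "temperature": 0.6,
        "top_p": 0.95,
        "do_sample": True,
    }

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Spestly/Atlas-Flash-7B-Preview"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = "Explain why the sum of two odd integers is even."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, **generation_config())
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Downloading the 7.6B weights requires substantial disk space and, for reasonable latency, a GPU; `device_map="auto"` lets `transformers` place the model accordingly.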
