LucidityAI/Astral-0.6B-Flash-Coder
Text generation · Concurrency cost: 1 · Model size: 0.8B · Quant: BF16 · Context length: 32k · Architecture: Transformer
LucidityAI/Astral-0.6B-Flash-Coder is a 0.8-billion-parameter model from the Astral coder family, fine-tuned from Astral 4b. It is optimized for coding tasks and offers a 40,960-token context length. It supports toggling reasoning on or off for agentic versus non-agentic tasks, a behavior inherited from the Qwen3 models. Its primary strength is code generation and related programming applications.
Astral-0.6B-Flash-Coder Overview
LucidityAI's Astral-0.6B-Flash-Coder is a compact yet capable model in the Astral coder series, with 0.8 billion parameters and a substantial 40,960-token context length. It was fine-tuned from the larger Astral 4b model and is designed specifically for coding-related applications.
Key Capabilities
- Code Generation: Optimized for various programming tasks.
- Reasoning Control: Users can explicitly toggle the model's reasoning process, using `/no_think` for agentic tasks or allowing it to think for complex non-agentic problems. This feature is consistent with the behavior of Qwen3 models.
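As a minimal sketch of how this toggle is typically used, the snippet below appends a Qwen3-style soft switch to the user message. The exact tags (`/think`, `/no_think`) and their placement are assumptions based on the Qwen3 convention this card references; consult the model's chat template for the authoritative format.

```python
# Hypothetical helper illustrating Qwen3-style reasoning soft switches.
# "/think" and "/no_think" are assumed tags; verify against the model's
# actual chat template before relying on them.

def build_user_message(content: str, enable_thinking: bool) -> dict:
    """Return a chat message with the reasoning-toggle tag appended."""
    switch = "/think" if enable_thinking else "/no_think"
    return {"role": "user", "content": f"{content} {switch}"}

# Agentic step: suppress the reasoning trace for direct output.
agentic = build_user_message("List the files in the repo root.", enable_thinking=False)

# Complex non-agentic problem: let the model think before answering.
hard = build_user_message("Refactor this parser to run in O(n) time.", enable_thinking=True)
```

In an agentic loop, suppressing the trace keeps tool-call outputs terse and parseable; for harder one-shot problems, enabling it lets the model reason before committing to an answer.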
Good For
- Code-centric applications: Ideal for scenarios requiring code generation or understanding.
- Agentic workflows: Useful when explicit control over the model's thought process matters, particularly in agentic tasks where terse, direct output is preferred.
- Tasks requiring explicit reasoning: For more complex problems where the model's internal reasoning steps are desired before generating a final answer.