DavidAU/Llama-3.3-8B-Thinking-Gemini-Flash-11000x-128k
Text Generation · Model Size: 8B · Quant: FP8 · Context Length: 8k · Concurrency Cost: 1 · Published: Jan 3, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

DavidAU/Llama-3.3-8B-Thinking-Gemini-Flash-11000x-128k is an 8-billion-parameter Llama 3.3 model with an 8192-token context length. It was fine-tuned with Unsloth on a Gemini-2.5-Flash-11000x dataset to strengthen its reasoning, teaching it to "think" in a style modeled on Gemini. The model is optimized for complex reasoning tasks and creative content generation, with an emphasis on deep, step-by-step thought processes.
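Since this is a Llama 3.3-based model, it is expected to follow the standard Llama 3 chat template. As a minimal sketch (an assumption, not stated on this card; the helper function is hypothetical), here is how a single-turn prompt in that format is assembled. In practice, `tokenizer.apply_chat_template` from the Hugging Face `transformers` library handles this automatically.

```python
# Hypothetical helper: hand-builds a Llama 3-style chat prompt using the
# standard Llama 3 special tokens. Useful when sending raw completion
# requests instead of chat-formatted ones.

def build_llama3_prompt(system: str, user: str) -> str:
    """Format a single-turn system + user conversation, ending with an
    open assistant header so the model generates the reply."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a careful reasoner. Think step by step before answering.",
    "What is 17 * 23?",
)
print(prompt)
```

For a "thinking" fine-tune like this one, a system prompt that explicitly asks for step-by-step reasoning (as above) typically plays to the model's training.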
