ngxson/MiniThinky-1B-Llama-3.2
Text generation · Model size: 1B · Quant: BF16 · Context length: 32k · Architecture: Transformer · Concurrency cost: 1

ngxson/MiniThinky-1B-Llama-3.2 is a 1-billion-parameter, Llama-3.2-based model developed by ngxson and fine-tuned to enhance reasoning capabilities. With a context length of 32768 tokens, the model responds to queries by first generating an explicit thinking process and then providing a direct answer. It is particularly sensitive to system prompts: a specific instruction is required to activate its reasoning mechanism.
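A minimal sketch of chatting with the model via Hugging Face `transformers` is shown below. The exact system prompt that activates the thinking step is not reproduced here; the placeholder string is an assumption, and the real instruction should be taken from the model card.

```python
"""Sketch: querying ngxson/MiniThinky-1B-Llama-3.2 with transformers.

The SYSTEM_PROMPT below is a placeholder -- the model card specifies the
exact instruction required to trigger the reasoning mechanism.
"""

MODEL_ID = "ngxson/MiniThinky-1B-Llama-3.2"

# Placeholder (assumption): replace with the reasoning-activation prompt
# from the model card before use.
SYSTEM_PROMPT = "<reasoning-activation system prompt from the model card>"


def build_messages(user_query: str) -> list[dict]:
    """Build chat-template messages with the system prompt first,
    since the model relies on it to produce a thinking process."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]


def main() -> None:
    # Heavy imports and the model download happen only when run directly.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16  # matches the BF16 quant above
    )
    inputs = tokenizer.apply_chat_template(
        build_messages("What is 17 * 24?"),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    outputs = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens (thinking + answer).
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

The system message is placed first in the list because, per the description above, the model only engages its thinking process when that instruction is present.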
