JPQ24/llama-3-8b-cognitive-curriculum-Lora-Mergev2
- Type: Text Generation
- Model size: 8B parameters
- Quantization: FP8
- Context length: 32k
- Concurrency cost: 1
- Published: Jan 28, 2026
- License: apache-2.0
- Architecture: Transformer (open weights)

JPQ24/llama-3-8b-cognitive-curriculum-Lora-Mergev2 is an 8-billion-parameter Llama-3.1 instruction-tuned model, fine-tuned by JPQ24 using Unsloth and Hugging Face's TRL library. The model is designed for complex analytical thinking, employing a four-phase Creative Synthesis & Reasoning (CSR) cycle for structured problem-solving: divergent exploration, rigorous evaluation, convergent synthesis, and iterative self-correction. This makes it well suited to intricate logical and mathematical reasoning problems.
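One way to exercise the four-phase CSR cycle is to enumerate the phases explicitly in the system prompt. The sketch below is a hypothetical illustration: the phase names come from the description above, but the prompt wording, the `build_csr_system_prompt` helper, and the message layout are assumptions, not an official template for this model.

```python
# Hypothetical sketch: a system prompt walking the model through the four
# CSR phases named in the model card. The phrasing of each phase and the
# helper function are illustrative assumptions, not the official template.
CSR_PHASES = [
    "Divergent exploration: generate multiple candidate approaches.",
    "Rigorous evaluation: critique each candidate for flaws.",
    "Convergent synthesis: combine the strongest ideas into one solution.",
    "Iterative self-correction: review and refine the final answer.",
]

def build_csr_system_prompt(task: str) -> str:
    """Assemble a system prompt enumerating the four CSR phases."""
    steps = "\n".join(f"{i}. {phase}" for i, phase in enumerate(CSR_PHASES, 1))
    return (
        "Solve the task using the Creative Synthesis & Reasoning cycle:\n"
        f"{steps}\n\nTask: {task}"
    )

# A chat-style message list in the common OpenAI/Hugging Face format,
# ready to pass to a chat template or inference endpoint.
messages = [
    {
        "role": "system",
        "content": build_csr_system_prompt(
            "Prove that the sum of two odd integers is even."
        ),
    }
]
print(messages[0]["content"])
```

In practice the `messages` list would be fed through the model's chat template (e.g. `tokenizer.apply_chat_template`) before generation; whether spelling out the phases is required, or the fine-tune follows the CSR cycle unprompted, is not stated in the card.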
