trendmicro-ailab/Llama-Primus-Reasoning
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Feb 20, 2025 · License: MIT · Architecture: Transformer · Open Weights

Llama-Primus-Reasoning is an 8-billion-parameter reasoning model developed by Trend Micro AI Lab, distilled from o1-preview and DeepSeek-R1 on cybersecurity tasks. Built on Llama-3.1-8B-Instruct, it is optimized for cybersecurity reasoning and demonstrates a 15.8% improvement on the CISSP security-certification benchmark. The model excels at generating detailed reasoning steps for complex cybersecurity problems and supports a 32,768-token context length.
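Because the model emits detailed reasoning steps, downstream code typically needs to separate the reasoning trace from the final answer. The helper below is a minimal sketch assuming DeepSeek-R1-style `<think>...</think>` delimiters; the exact output format of Llama-Primus-Reasoning is an assumption here, not documented on this card.

```python
import re

def split_reasoning(completion: str):
    """Split a completion into (reasoning_trace, final_answer).

    Assumes the model wraps its chain of thought in <think>...</think>
    tags, as DeepSeek-R1-distilled models commonly do (an assumption).
    """
    match = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
    if match is None:
        # No reasoning block found: treat the whole output as the answer.
        return "", completion.strip()
    reasoning = match.group(1).strip()
    answer = completion[match.end():].strip()
    return reasoning, answer
```

A caller can log or discard the trace and surface only the answer to end users.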


Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model cover the following settings:

temperature
top_p
top_k
frequency_penalty: –
presence_penalty
repetition_penalty: –
min_p