prithivMLmods/Magellanic-Llama-70B-r999
Text generation · Concurrency cost: 4 · Model size: 70B · Quantization: FP8 · Context length: 32k · Published: Mar 1, 2025 · License: llama3.3 · Architecture: Transformer

prithivMLmods/Magellanic-Llama-70B-r999 is a 70-billion-parameter Llama-based model, fine-tuned from DeepSeek-R1-Distill-Llama-70B. It leverages large-scale reinforcement learning (RL) on nearly 1 million data entries to enhance reasoning capability, safety, and factual accuracy. The model excels at complex logical reasoning, multi-step problem-solving, and structured responses, while also mitigating issues such as repetition and poor readability.


Popular Sampler Settings

The three parameter combinations most popular among Featherless users for this model tune the following sampler settings:

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
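These parameters map directly onto an OpenAI-compatible chat-completions request, which is how Featherless-hosted models are typically queried. The sketch below shows one such request payload; the specific values are illustrative placeholders, not settings published for this model, and `repetition_penalty`/`min_p` are extensions that only some OpenAI-compatible servers accept as extra fields.

```python
# Illustrative chat-completions payload using the sampler parameters listed
# above. Values are example placeholders, not recommended settings.
payload = {
    "model": "prithivMLmods/Magellanic-Llama-70B-r999",
    "messages": [
        {"role": "user", "content": "Walk through this logic puzzle step by step."}
    ],
    "temperature": 0.7,        # randomness of token sampling
    "top_p": 0.9,              # nucleus sampling: keep smallest set of tokens with 90% mass
    "top_k": 40,               # restrict sampling to the 40 most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens proportionally to how often they appeared
    "presence_penalty": 0.0,   # penalize tokens that have appeared at all
    "repetition_penalty": 1.1, # multiplicative penalty on repeated tokens (server extension)
    "min_p": 0.05,             # drop tokens below 5% of the top token's probability (server extension)
}
```

This payload would then be POSTed to the provider's `/v1/chat/completions` endpoint with an API key; lowering `temperature` and raising `repetition_penalty` slightly is a common starting point for reasoning-focused models like this one.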