perplexity-ai/r1-1776-distill-llama-70b
Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quant: FP8 · Context Length: 32k · Published: Feb 21, 2025 · License: MIT · Architecture: Transformer · Open Weights · Warm
perplexity-ai/r1-1776-distill-llama-70b is a 70-billion-parameter Llama-based model distilled from R1 1776, Perplexity AI's post-trained version of the DeepSeek-R1 reasoning model. It is specifically post-trained to remove Chinese Communist Party censorship, with the goal of providing unbiased, accurate, and factual information. The model retains strong reasoning capabilities, making it suitable for applications that require objective responses on sensitive topics.
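As a minimal sketch of how this model might be queried through an OpenAI-compatible chat completions endpoint, the snippet below uses the standard openai Python client. The base URL and the environment variable name are assumptions for illustration and should be checked against the Featherless documentation.

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",   # assumed OpenAI-compatible endpoint; verify against the docs
    api_key=os.environ["FEATHERLESS_API_KEY"],  # hypothetical environment variable name
)

response = client.chat.completions.create(
    model="perplexity-ai/r1-1776-distill-llama-70b",
    messages=[{"role": "user", "content": "Give a factual overview of the Tiananmen Square protests."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```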
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model, covering: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.
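To show how the parameters above map onto a request, the sketch below reuses the client from the earlier example and passes placeholder sampler values, not actual Featherless user configurations. Fields that are not part of the OpenAI schema (top_k, repetition_penalty, min_p) are sent via extra_body, on the assumption that the backend accepts them.

```python
response = client.chat.completions.create(
    model="perplexity-ai/r1-1776-distill-llama-70b",
    messages=[{"role": "user", "content": "Explain why the sky is blue."}],
    temperature=0.7,          # placeholder value, not a real user setting
    top_p=0.9,                # placeholder value
    frequency_penalty=0.0,    # placeholder value
    presence_penalty=0.0,     # placeholder value
    extra_body={              # fields outside the OpenAI schema, passed through if supported
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
```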