PetroGPT/WestSeverus-7B-DPO-v2
Text generation
Concurrency cost: 1
Model size: 7B
Quantization: FP8
Context length: 8k
License: apache-2.0
Architecture: Transformer
Open weights

PetroGPT/WestSeverus-7B-DPO-v2 is a 7-billion-parameter language model from the WestLake family, fine-tuned from WestSeverus-7B using DPO (Direct Preference Optimization). It performs strongly on basic math problems and scores well on the TruthfulQA and BigBench benchmarks. The model targets research and reference use in mathematics, chemistry, physics, and coding, and offers an 8192-token context length.
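As a usage illustration, here is a minimal sketch of querying the model through an OpenAI-compatible chat-completions endpoint. The base URL, the FEATHERLESS_API_KEY environment variable, and the exact response shape are assumptions about the hosting setup, not specifics documented on this page:

```python
import os
import requests

# Assumed OpenAI-compatible endpoint and auth scheme; adjust to the
# actual hosting setup. FEATHERLESS_API_KEY is a hypothetical env var.
BASE_URL = "https://api.featherless.ai/v1"
API_KEY = os.environ["FEATHERLESS_API_KEY"]

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "PetroGPT/WestSeverus-7B-DPO-v2",
        "messages": [
            {"role": "user", "content": "What is 17 * 23? Show your steps."}
        ],
        "max_tokens": 256,  # well within the 8192-token context window
    },
    timeout=60,
)
response.raise_for_status()
# Standard OpenAI-style response shape, assumed here.
print(response.json()["choices"][0]["message"]["content"])
```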


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model; each config sets the sampler parameters listed below (see the sketch after the list).

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
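For reference, a sketch of how these parameters would appear in a generation request body. The numeric values are illustrative placeholders only; the actual top user configs are not reproduced on this page:

```python
# Illustrative sampler config; values are placeholder defaults, NOT the
# actual Featherless user settings for this model.
sampler_config = {
    "temperature": 0.7,         # randomness of token sampling
    "top_p": 0.9,               # nucleus sampling: keep top 90% probability mass
    "top_k": 40,                # consider only the 40 most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they have appeared
    "presence_penalty": 0.0,    # penalize tokens that have appeared at all
    "repetition_penalty": 1.1,  # multiplicative penalty on repeated tokens
    "min_p": 0.05,              # drop tokens below 5% of the top token's probability
}

# These keys would be merged into the JSON body of a completion request,
# alongside "model" and "messages" (or "prompt").
```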