Liangmingxin/ThetaWave-7B-sft
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 8K · Published: Jan 24, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

Liangmingxin/ThetaWave-7B-sft is a 7-billion-parameter language model, fine-tuned from freecs/ThetaWave-7B with Supervised Fine-Tuning (SFT) on the Open-Orca/SlimOrca dataset. It is intended for general conversational use, with the SFT stage improving instruction following. The model uses Mistral's chat template; system prompts are not natively supported and may degrade output quality, so instructions should be folded into the user turn instead.
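Because the model relies on Mistral's chat template, prompts should be formatted through the tokenizer rather than assembled by hand. A minimal sketch, assuming the repository ships a tokenizer with the chat template attached; the example conversation is illustrative:

```python
# Minimal sketch: formatting a conversation with the model's Mistral-style
# chat template via Hugging Face transformers. The model ID comes from this
# card; the message content is a placeholder.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Liangmingxin/ThetaWave-7B-sft")

# Mistral's template expects alternating user/assistant turns. Since a
# system role may degrade quality with this model, instructions are folded
# into the first user message instead.
messages = [
    {
        "role": "user",
        "content": "You are a concise assistant. Summarize the benefits "
                   "of supervised fine-tuning in two sentences.",
    },
]

# Render the templated prompt string ready for generation.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```

Folding the instruction into the first user turn, as above, is the usual workaround for chat templates that do not accept a separate system role.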


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model tune the following sampler settings; a request sketch showing how to pass them follows the list.

temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p
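A hedged sketch of supplying these sampler settings in a request, assuming Featherless's OpenAI-compatible endpoint at https://api.featherless.ai/v1; the values shown are placeholders, not the actual top-three configurations:

```python
# Sketch: passing the sampler parameters listed above through an
# OpenAI-compatible API. The base URL assumes Featherless's documented
# OpenAI-compatible endpoint; all parameter values are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="Liangmingxin/ThetaWave-7B-sft",
    messages=[{"role": "user", "content": "Write a haiku about fine-tuning."}],
    # Standard OpenAI-schema sampler settings.
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Settings outside the OpenAI schema go through extra_body.
    extra_body={
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```

Parameters not covered by the OpenAI request schema (top_k, repetition_penalty, min_p) are forwarded via `extra_body`, which the openai Python client passes through to the server unchanged.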