Liangmingxin/ThetaWave-7B-sft
Liangmingxin/ThetaWave-7B-sft is a 7 billion parameter language model, fine-tuned from freecs/ThetaWave-7B using Supervised Fine-Tuning (SFT) on the Open-Orca/SlimOrca dataset. The model targets general conversational tasks, with SFT training aimed at improved instruction following. It currently uses Mistral's chat template, which has no native system-prompt slot; inserting a system prompt manually may degrade performance.
Liangmingxin/ThetaWave-7B-sft Overview
Liangmingxin/ThetaWave-7B-sft is a 7 billion parameter language model derived from the freecs/ThetaWave-7B base model. It has undergone Supervised Fine-Tuning (SFT) using the Open-Orca/SlimOrca datasets, aiming to enhance its instruction-following capabilities and general conversational performance.
Key Characteristics
- Base Model: Fine-tuned from `freecs/ThetaWave-7B`.
- Training Data: Uses the `Open-Orca/SlimOrca` dataset for SFT.
- Chat Template: Employs Mistral's chat template.
- System Prompt Support: The model does not natively support a `system_prompt` within its default chat template. Manual modification is possible but may lead to degraded performance. Future releases are planned to switch to the ChatML template to address this limitation.
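To make the chat-template caveat concrete, here is a minimal sketch of the prompt layout Mistral's template produces. This is an illustrative reimplementation, not the model's bundled template; in practice you would call `tokenizer.apply_chat_template` from the Transformers library, and the helper name below is hypothetical. Note there is no dedicated system-prompt slot, matching the limitation described above.

```python
# Rough sketch of Mistral-style chat formatting (illustrative only; the
# authoritative template ships with the model's tokenizer). User turns are
# wrapped in [INST] ... [/INST]; assistant turns are closed with </s>.
# There is no system-role slot, which is why system prompts require manual
# template edits and may hurt performance.

def format_mistral_chat(messages):
    """Render a list of {"role", "content"} dicts as a Mistral-style prompt."""
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            prompt += f"{msg['content']}</s>"
    return prompt

conversation = [
    {"role": "user", "content": "What is SFT?"},
    {"role": "assistant", "content": "Supervised fine-tuning."},
    {"role": "user", "content": "Give an example."},
]
print(format_mistral_chat(conversation))
# → <s>[INST] What is SFT? [/INST]Supervised fine-tuning.</s>[INST] Give an example. [/INST]
```

With the real tokenizer, the equivalent call would be `tokenizer.apply_chat_template(conversation, tokenize=False)`, which applies the exact template the model was trained with.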
Potential Use Cases
- General Conversational AI: Suitable for chatbots and interactive applications requiring instruction-tuned responses.
- Instruction Following: Designed to respond effectively to user instructions due to its SFT training.
Further details regarding the model's specifics and performance are anticipated in future updates.