fblgit/una-cybertron-7b-v1-fp16
The fblgit/una-cybertron-7b-v1-fp16 is a 7 billion parameter language model developed by juanako.ai, based on the MistralAI architecture with an 8192-token context length. It is fine-tuned using SFT, DPO, and a proprietary Uniform Neural Alignment (UNA) technique, achieving an average score of 64.60 on the Hugging Face Leaderboard. The model excels in mathematics, logic, and reasoning tasks, demonstrating strong overall intelligence.
una-cybertron-7b-v1: A MistralAI-based 7B Model
The una-cybertron-7b-v1 is a 7 billion parameter language model developed by juanako.ai, built upon the MistralAI 7B architecture. This model distinguishes itself through its training methodology, which incorporates Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and a novel Uniform Neural Alignment (UNA) technique. The developers claim UNA significantly enhances the model's capabilities, with a paper detailing this technique forthcoming.
Key Capabilities & Performance
This model demonstrates strong performance across various benchmarks, achieving a competitive average score of 64.60 on the Hugging Face Leaderboard (as of December 2, 2023). Notably, it scores:
- 68.17 on ARC (25-shot)
- 63.98 on TruthfulQA (0-shot)
- 80.9 on Winogrande (5-shot)
The model is specifically highlighted for its proficiency in:
- Mathematics
- Logic
- Reasoning
Recommended Use Cases
Given its strong performance in logical and mathematical tasks, una-cybertron-7b-v1 is particularly well-suited for applications requiring:
- Complex problem-solving
- Analytical reasoning
- General intelligent conversational agents
The model works with a variety of prompt formats, with the developers reporting that ChatML and Alpaca (system-prompt) formats yield the best results.
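As a minimal sketch of the ChatML format mentioned above, the helper below wraps a system and user message in ChatML delimiters and opens the assistant turn for generation. The function name and the example messages are illustrative, not from the model card; in practice you would pass the resulting string to your inference stack's tokenizer.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Wrap a system and a user message in ChatML delimiters
    (<|im_start|> / <|im_end|>) and open the assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Hypothetical usage: a reasoning-style question, playing to the
# model's reported strengths in math and logic.
prompt = build_chatml_prompt(
    "You are a helpful assistant skilled at math and logic.",
    "If a train travels 60 km in 45 minutes, what is its average speed in km/h?",
)
print(prompt)
```

Libraries such as Hugging Face Transformers can also produce this format automatically via a chat template, but constructing the string directly makes the delimiter layout explicit.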