Weyaxi/test-help-steer-filtered-orig
Weyaxi/test-help-steer-filtered-orig is a 7-billion-parameter language model created by merging RiversHaveWings/Mistral-7B-v0.1-safetensors with Weyaxi/test-help-steer-filtered. It is based on the Mistral architecture and features an 8192-token context length. As a merged model, it is intended primarily for evaluation on the Open LLM Leaderboard.
Overview
Weyaxi/test-help-steer-filtered-orig is a 7 billion parameter language model resulting from a merge operation. It combines the base model RiversHaveWings/Mistral-7B-v0.1-safetensors with Weyaxi/test-help-steer-filtered. This model leverages the Mistral architecture and supports an 8192-token context window.
Key Characteristics
- Architecture: Based on the Mistral 7B model.
- Parameter Count: 7 billion parameters.
- Context Length: 8192 tokens.
- Origin: A merge of two existing models, an experimental alternative to fine-tuning for combining model behaviors.
Evaluation and Use Cases
This model is primarily intended for evaluation on the Open LLM Leaderboard. While the README does not report specific performance metrics, its leaderboard submission makes it suitable for benchmarking and comparative analysis against other large language models. Developers might use it for tasks requiring a Mistral-based architecture with the modifications introduced by the merge, or to contribute to the ongoing evaluation of LLM performance.
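For experimentation outside the leaderboard, the model can presumably be loaded like any other Mistral-architecture checkpoint on the Hugging Face Hub. The sketch below is a minimal, hedged example using the `transformers` library; the repo id comes from this card, but whether the checkpoint downloads and runs as shown has not been verified here, and `device_map="auto"` additionally assumes `accelerate` is installed.

```python
# Minimal usage sketch (assumption: the checkpoint is hosted on the
# Hugging Face Hub under the repo id below and loads via transformers).
MODEL_ID = "Weyaxi/test-help-steer-filtered-orig"
MAX_CONTEXT = 8192  # context window stated on the model card


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a completion, truncating the prompt to the model's context."""
    # Imported lazily so the module stays importable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(
        prompt,
        return_tensors="pt",
        truncation=True,
        max_length=MAX_CONTEXT,
    ).to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Explain model merging in one sentence."))
```

Loading a 7B model in full precision requires roughly 14 GB of memory; quantized loading (e.g. via `bitsandbytes`) is a common workaround on smaller GPUs.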