picAIso/TARS-8B
picAIso/TARS-8B is an 8 billion parameter language model created by picAIso, merged using the TIES method with MaziyarPanahi/Llama-3-8B-Instruct-v0.9 as its base. It integrates capabilities from NousResearch/Hermes-2-Pro-Llama-3-8B and nbeerbower/llama-3-gutenberg-8B, offering a blend of instruction-following and text-generation strengths. The model is intended for general-purpose applications requiring robust language understanding and generation within an 8192-token context window.
Model Overview
picAIso/TARS-8B is an 8 billion parameter language model developed by picAIso. It was created using the TIES merge method from mergekit, combining the strengths of several pre-trained models based on the Llama-3 architecture.
Key Capabilities
- Instruction Following: Built upon MaziyarPanahi/Llama-3-8B-Instruct-v0.9, it inherits strong instruction-following capabilities.
- Enhanced Performance: Integrates NousResearch/Hermes-2-Pro-Llama-3-8B, suggesting improved general performance and reasoning.
- Text Generation: Incorporates nbeerbower/llama-3-gutenberg-8B, which likely contributes to its ability to generate diverse and coherent text.
- Context Window: Supports an 8192-token context length, suitable for handling moderately long inputs and generating detailed responses. A minimal usage sketch follows this list.
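Because the base model is a Llama-3 instruct variant, TARS-8B can presumably be used with the standard transformers chat workflow. The snippet below is a minimal sketch under that assumption, not an official usage guide: the repo id picAIso/TARS-8B comes from this card, and everything else is ordinary transformers API usage.

```python
# Minimal sketch: loading picAIso/TARS-8B with Hugging Face transformers.
# Assumes the model ships a Llama-3 chat template inherited from its instruct base.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "picAIso/TARS-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the merge itself was performed in float16
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Summarize the TIES merge method in two sentences."}
]
# Build the prompt with the model's chat template and generate a reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that the prompt plus generated tokens should stay within the 8192-token window described above.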
Merge Details
The model's creation involved merging three distinct models:
- Base Model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
- Merged Components: NousResearch/Hermes-2-Pro-Llama-3-8B and nbeerbower/llama-3-gutenberg-8B
The TIES merge method was applied with specific density and weight parameters for each contributing model, aiming to combine their respective strengths. The merge used float16 as its dtype and enabled int8_mask in its parameters.
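The card does not reproduce the exact mergekit configuration, but a TIES merge of these models would be declared roughly as in the sketch below, written in mergekit's YAML config format. The density and weight values are illustrative placeholders; only the method, model list, dtype, and int8_mask setting are stated above.

```yaml
# Illustrative mergekit TIES config for a merge like this one.
# density/weight values are placeholders, not the actual parameters used.
merge_method: ties
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
models:
  - model: NousResearch/Hermes-2-Pro-Llama-3-8B
    parameters:
      density: 0.5   # placeholder
      weight: 0.5    # placeholder
  - model: nbeerbower/llama-3-gutenberg-8B
    parameters:
      density: 0.5   # placeholder
      weight: 0.5    # placeholder
parameters:
  int8_mask: true
dtype: float16
```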