Kukedlc/Neural-4-QA-7b
Kukedlc/Neural-4-QA-7b is a 7 billion parameter language model created by Kukedlc through a merge of five distinct models using LazyMergekit, including yam-peleg/Experiment21-7B and chihoonlee10/T3Q-Mistral-Orca-Math-DPO. The merge uses the dare_ties method, is configured for bfloat16 precision, and supports an 8192-token context length. It is designed to integrate the diverse capabilities of its constituent models, making it suitable for a range of natural language processing tasks.
Neural-4-QA-7b: A Merged 7B Parameter Model
Kukedlc/Neural-4-QA-7b is a 7 billion parameter language model developed by Kukedlc. This model is a product of merging five different base models using the LazyMergekit framework, specifically employing the dare_ties merge method. The base models contributing to Neural-4-QA-7b include:
- yam-peleg/Experiment21-7B
- CultriX/NeuralTrix-bf16
- louisgrc/Montebello_7B_SLERP
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO (also serving as the primary base model)
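LazyMergekit wraps mergekit, which reads a YAML configuration describing the merge. As a rough illustration, the Python sketch below writes out a hypothetical dare_ties configuration over these five models; the density and weight values, the int8_mask flag, and the output path are assumptions for illustration, not the settings actually used to produce this merge.

```python
# Hypothetical sketch of building a mergekit dare_ties configuration in Python.
# The density/weight values and int8_mask flag are illustrative placeholders,
# NOT the parameters actually used to produce Neural-4-QA-7b.
import yaml

merged_models = [
    "yam-peleg/Experiment21-7B",
    "CultriX/NeuralTrix-bf16",
    "louisgrc/Montebello_7B_SLERP",
    "CorticalStack/pastiche-crown-clown-7b-dare-dpo",
]

config = {
    # Each non-base model gets a sparsification density and a merge weight.
    "models": [
        {"model": name, "parameters": {"density": 0.5, "weight": 0.25}}
        for name in merged_models
    ],
    "merge_method": "dare_ties",
    "base_model": "chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
    "parameters": {"int8_mask": True},
    "dtype": "bfloat16",
}

with open("config.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# The resulting file can be passed to mergekit's CLI, e.g.:
#   mergekit-yaml config.yaml ./Neural-4-QA-7b
```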
Key Characteristics
- Parameter Count: 7 billion parameters.
- Context Length: Supports an 8192 token context window.
- Merge Method: Utilizes the `dare_ties` merge method, which combines the strengths of its constituent models.
- Data Type: Configured for `bfloat16` precision, optimizing for performance and memory efficiency.
- Modular Design: Built from a diverse set of specialized models, suggesting a broad range of potential applications by integrating their individual capabilities.
Usage
This model can be integrated into Python applications using the transformers library. The example below demonstrates how to load the model and tokenizer, apply a chat template, and generate text, making it straightforward for developers to experiment with its capabilities.
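The following is a minimal sketch of that workflow. The prompt and the sampling parameters (max_new_tokens, temperature, top_p) are illustrative assumptions, not tuned recommendations.

```python
# Minimal sketch: load the model with transformers and generate a chat response.
# Sampling parameters below are illustrative, not tuned recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kukedlc/Neural-4-QA-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the model's configured dtype
    device_map="auto",
)

# Format the conversation with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "What is a large language model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```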