Kukedlc/Neural-4-QA-7b
Text generation
Concurrency cost: 1
Model size: 7B
Quant: FP8
Context length: 8k
Published: Mar 30, 2024
License: apache-2.0
Architecture: Transformer
Kukedlc/Neural-4-QA-7b is a 7-billion-parameter language model created by Kukedlc as a merge of five distinct models using LazyMergekit, including yam-peleg/Experiment21-7B and chihoonlee10/T3Q-Mistral-Orca-Math-DPO. The merge uses the dare_ties method, is configured for the bfloat16 dtype, and supports an 8192-token context length. It is designed to combine the capabilities of its constituent models, making it suitable for a variety of natural language processing tasks.
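The merge described above corresponds to a LazyMergekit/mergekit-style configuration. The sketch below is illustrative only: the three remaining constituent models are not named in this card, and the base model, weights, and densities are placeholder assumptions, not the model's actual recipe.

```yaml
# Illustrative dare_ties merge config in mergekit's YAML format.
# Only two of the five constituent models are named in the card;
# weight/density values are placeholders, not the actual recipe.
models:
  - model: yam-peleg/Experiment21-7B
    parameters:
      density: 0.5
      weight: 0.5
  - model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
    parameters:
      density: 0.5
      weight: 0.5
  # ...three further constituent models omitted in the card
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1  # assumed; both named models are Mistral-7B derivatives
dtype: bfloat16
```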
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model. Each config sets the following sampler parameters:
temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
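To make the sampler parameters above concrete, here is a minimal, self-contained sketch of how temperature, top_k, top_p, and min_p filter a model's next-token distribution. It operates on a toy logits list rather than real model output, and is an illustration of the standard definitions of these parameters, not Featherless's actual implementation.

```python
import math

def apply_samplers(logits, temperature=1.0, top_k=0, top_p=1.0, min_p=0.0):
    """Filter a next-token distribution the way common samplers do."""
    # Temperature scaling: values < 1 sharpen, > 1 flatten the distribution.
    scaled = [l / temperature for l in logits]
    # Softmax (numerically stabilised) to turn logits into probabilities.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Token indices ranked by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(order)
    # top_k: keep only the k most likely tokens (0 disables the filter).
    if top_k > 0:
        keep &= set(order[:top_k])
    # top_p (nucleus): keep the smallest prefix whose cumulative mass >= top_p.
    if top_p < 1.0:
        cum, nucleus = 0.0, set()
        for i in order:
            nucleus.add(i)
            cum += probs[i]
            if cum >= top_p:
                break
        keep &= nucleus
    # min_p: drop tokens below min_p times the top token's probability.
    if min_p > 0.0:
        cutoff = min_p * probs[order[0]]
        keep &= {i for i in order if probs[i] >= cutoff}
    # Renormalise over the surviving tokens.
    mass = sum(probs[i] for i in keep)
    return {i: probs[i] / mass for i in sorted(keep)}

# Toy example: 4-token vocabulary, moderately sharpened and truncated.
filtered = apply_samplers([2.0, 1.0, 0.5, -1.0],
                          temperature=0.8, top_k=3, top_p=0.9)
```

Penalty parameters (frequency_penalty, presence_penalty, repetition_penalty) act earlier, by adjusting the logits of already-generated tokens before this filtering step.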