Kukedlc/NeuralKrishna-7B-v3
Text generation · Model size: 7B · Quant: FP8 · Context length: 4k · Concurrency cost: 1 · Published: Mar 7, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights

NeuralKrishna-7B-v3 is a 7-billion-parameter language model developed by Kukedlc, created by merging NeuralGlitch-Yam-Peleg-7B-DT, Fasciculus-Arcuatus-7B-slerp, and Neural4gsm8k with the DARE TIES merge method, using mlabonne/Monarch-7B as the base model. It is intended for general text generation, with the merge aiming to combine the strengths of its constituent models across domains. The model supports a context length of 4096 tokens, which suits applications with moderate input and output lengths.
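A merge like the one described is typically produced with mergekit. The sketch below shows what such a DARE TIES configuration could look like; the density and weight values and the exact repository paths of the constituent models are illustrative assumptions, not the published recipe.

```yaml
# Hypothetical mergekit config for a DARE TIES merge on mlabonne/Monarch-7B.
# Density/weight values and model repo paths are assumptions for illustration.
models:
  - model: Kukedlc/NeuralGlitch-Yam-Peleg-7B-DT
    parameters:
      density: 0.5   # fraction of delta weights kept (assumed)
      weight: 0.4    # relative contribution to the merge (assumed)
  - model: Kukedlc/Fasciculus-Arcuatus-7B-slerp
    parameters:
      density: 0.5
      weight: 0.3
  - model: Kukedlc/Neural4gsm8k
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: mlabonne/Monarch-7B
dtype: bfloat16
```

DARE TIES sparsifies each model's delta from the base (dropping a `1 - density` fraction of changed weights), then resolves sign conflicts before summing, which helps avoid the interference that plain weight averaging can introduce.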
