RatanRohith/NeuralMathChat-7B-V0.2
Text Generation · Model Size: 7B · Quant: FP8 · Context Length: 4K · Concurrency Cost: 1 · Published: Jan 20, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights
RatanRohith/NeuralMathChat-7B-V0.2 is a 7 billion parameter language model created by merging Q-bert/MetaMath-Cybertron-Starling and Intel/neural-chat-7b-v3-3 with mergekit. The merge is intended to combine the strengths of its base models: the mathematical reasoning of MetaMath-Cybertron-Starling and the general conversational ability of neural-chat. With a context length of 4096 tokens, it aims to provide balanced performance on tasks that require both logical computation and natural language understanding.
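The model can be loaded like any other 7B causal language model through the transformers library. The snippet below is a minimal sketch, assuming the weights are hosted on the Hugging Face Hub under RatanRohith/NeuralMathChat-7B-V0.2 and that a plain instruction-style prompt is acceptable; the exact chat template and the FP8 serving setup shown in the metadata are provider-side details not covered here.

```python
# Minimal usage sketch (assumptions: weights available under this repo id,
# plain instruction prompt; FP16 is used locally since FP8 quantization
# refers to the hosted deployment, not a requirement for local inference).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RatanRohith/NeuralMathChat-7B-V0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Solve step by step: what is 17 * 24?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=256,   # stays well within the 4096-token context window
    do_sample=False,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```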