SanjiWatsuki/openchat-3.5-1210-starling-slerp
Text Generation · Model Size: 7B · Quant: FP8 · Context Length: 4k · Concurrency Cost: 1 · Published: Dec 22, 2023 · License: cc-by-4.0 · Architecture: Transformer · Open Weights
SanjiWatsuki/openchat-3.5-1210-starling-slerp is a 7-billion-parameter language model created by SanjiWatsuki as a SLERP (spherical linear interpolation) merge of openchat/openchat-3.5-1210 and berkeley-nest/Starling-LM-7B-alpha. The merge aims to combine the strengths of both parents: the OpenChat-3.5 lineage, including training on the Feedback-Collection and a de-contaminated Capybara dataset, and Starling's novel RLAIF-based training. It is designed to retain the conversational and reasoning capabilities of both foundational models within a 4096-token context window.
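The exact merge recipe is not published on this page, but a SLERP merge generally interpolates each pair of matching weight tensors along an arc on the hypersphere rather than averaging them linearly, which better preserves the geometric structure of the weights. The sketch below illustrates the per-tensor operation; the interpolation factor `t = 0.5` and the colinearity threshold are illustrative assumptions, not the model's actual configuration:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two matching weight tensors.

    Note: a generic sketch of the SLERP technique, not the published
    merge configuration for this model.
    """
    shape = v0.shape
    v0f = v0.ravel().astype(np.float64)
    v1f = v1.ravel().astype(np.float64)

    # Angle between the two weight vectors (computed on normalized copies).
    dot = np.dot(v0f / np.linalg.norm(v0f), v1f / np.linalg.norm(v1f))
    omega = np.arccos(np.clip(dot, -1.0, 1.0))

    if np.abs(np.sin(omega)) < eps:
        # Nearly colinear tensors: fall back to plain linear interpolation.
        merged = (1.0 - t) * v0f + t * v1f
    else:
        # Interpolate along the great-circle arc between the two vectors.
        merged = (np.sin((1.0 - t) * omega) * v0f
                  + np.sin(t * omega) * v1f) / np.sin(omega)

    return merged.reshape(shape)

# Example: merge two same-shape tensors with equal weighting (t = 0.5).
a = np.random.randn(256, 256).astype(np.float32)
b = np.random.randn(256, 256).astype(np.float32)
merged = slerp(0.5, a, b)
```

In practice, merges like this are applied tensor-by-tensor across both checkpoints, often with layer-dependent interpolation factors; tools such as mergekit automate this process.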