Azazelle/Sina-Loki-7b-Merge
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4k · Published: Jan 11, 2024 · License: cc-by-4.0 · Architecture: Transformer · Open Weights

Azazelle/Sina-Loki-7b-Merge is an experimental 7-billion-parameter model built with the DARE (Drop And REscale) merge method, combining several base models: RatanRohith/SRBOSGPT-7B-slerp, rishiraj/smol-7b, SanjiWatsuki/openchat-3.5-1210-starling-slerp, and Azazelle/Dumb-Maidlet. Each component model contributes through its own weight and density parameters, which control how strongly, and how sparsely, its fine-tuning deltas are applied to the base. The merge targets general language tasks, aiming to combine the strengths of its constituent models into a single checkpoint.
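To make the merge method concrete, here is a minimal Python sketch of the DARE idea: for each component model, the delta from the base is randomly dropped elementwise with probability 1 - density, the surviving elements are rescaled by 1/density so the expected delta is preserved, and the weighted deltas are summed onto the base. The function name, the toy state-dict interface, and the specific density/weight values in the usage note are illustrative assumptions, not the actual recipe or parameters used for this model.

```python
import torch

def dare_merge(base_state, finetuned_states, densities, weights):
    """Illustrative DARE merge over plain state dicts (hypothetical helper).

    base_state: dict of name -> base model tensor
    finetuned_states: list of dicts with the same keys as base_state
    densities: per-model keep probabilities for delta elements
    weights: per-model scaling of the (rescaled) deltas
    """
    merged = {name: param.clone().float() for name, param in base_state.items()}
    for ft_state, density, weight in zip(finetuned_states, densities, weights):
        for name, base_param in base_state.items():
            # Delta ("task vector") between the fine-tune and the base
            delta = ft_state[name].float() - base_param.float()
            # Drop each element with probability (1 - density)
            mask = torch.bernoulli(torch.full_like(delta, density))
            # Rescale survivors by 1/density so E[delta] is unchanged
            delta = delta * mask / density
            merged[name] += weight * delta
    return merged
```

For example, with two fine-tunes, `densities=[0.5, 0.3]` and `weights=[0.6, 0.4]` would keep 50% and 30% of each model's delta elements respectively before the weighted sum; the values here are placeholders, as the card does not state the exact per-model settings.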

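A minimal sketch of loading and prompting the merged model with the Hugging Face transformers API, assuming the weights are available under the model id shown in the title; the prompt and generation settings are illustrative, and `device_map="auto"` additionally requires the accelerate package.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azazelle/Sina-Loki-7b-Merge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # requires `pip install accelerate`
)

prompt = "Explain model merging in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```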