Azazelle/Sina-Thor-7b-Merge
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Jan 11, 2024 · License: cc-by-4.0 · Architecture: Transformer · Open Weights · Cold

Azazelle/Sina-Thor-7b-Merge is a 7-billion-parameter experimental language model based on the Mistral-7B-v0.1 architecture, created with a DARE (Drop And REscale) merge, which sparsifies each fine-tuned model's parameter delta from the base and rescales the surviving entries before combining them. The merge integrates rishiraj/smol-7b, SanjiWatsuki/openchat-3.5-1210-starling-slerp, and Azazelle/Dumb-Maidlet, and the model is intended for general language tasks, drawing on the combined strengths of its source models.
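The DARE procedure can be sketched in a few lines. The function below is a simplified illustration on flat parameter vectors, not the actual merge recipe used for this model; the `drop_rate`, per-model `weights`, and the uniform-weight default are all illustrative assumptions.

```python
import numpy as np

def dare_merge(base, finetuned_list, drop_rate=0.9, weights=None, seed=0):
    """Sketch of a DARE (Drop And REscale) merge on flat parameter vectors.

    For each fine-tuned model, the delta from the base weights is computed,
    a random fraction `drop_rate` of its entries is zeroed out, and the
    survivors are rescaled by 1 / (1 - drop_rate) so the expected magnitude
    of the delta is preserved. The processed deltas are then summed onto
    the base weights, optionally with per-model mixing weights.
    """
    rng = np.random.default_rng(seed)
    if weights is None:
        # Illustrative default: weight each source model equally.
        weights = [1.0 / len(finetuned_list)] * len(finetuned_list)
    merged = base.astype(float).copy()
    for w, ft in zip(weights, finetuned_list):
        delta = ft - base
        keep = rng.random(delta.shape) >= drop_rate  # keep ~(1 - drop_rate)
        merged += w * (delta * keep) / (1.0 - drop_rate)
    return merged
```

In a real merge (e.g. via mergekit) this drop-and-rescale step is applied per tensor across all three source models' deltas from Mistral-7B-v0.1, which reduces interference between the merged fine-tunes.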
