DanielClough/Candle_dolphin-2.2.1-mistral-7b

Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 4k · Published: Dec 21, 2023 · License: apache-2.0 · Architecture: Transformer · Open weights

DanielClough/Candle_dolphin-2.2.1-mistral-7b is a 7-billion-parameter model based on the Mistral architecture, packaged specifically for use with HuggingFace/Candle. It is a variant of ehartford's dolphin-2.2.1-mistral-7b, repackaged for the Candle framework, and is designed for general language tasks, leveraging its Mistral base for efficient processing within the Candle ecosystem.


Overview

DanielClough/Candle_dolphin-2.2.1-mistral-7b is a 7 billion parameter language model, derived from the ehartford/dolphin-2.2.1-mistral-7b base. Its primary distinction lies in its packaging: it provides .gguf files specifically built for the HuggingFace/Candle framework, making it incompatible with llama.cpp.
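Because the repository ships `.gguf` weight files, a quick local sanity check after downloading is to verify the GGUF header before handing the file to Candle. The sketch below is illustrative only: the `model.gguf` path is a placeholder, not a specific file from this repository, and the helper is not part of any Candle API. It relies on the GGUF container format starting with the 4-byte magic `GGUF` followed by a little-endian `u32` format version.

```rust
use std::fs::File;
use std::io::Read;

/// Read the first 8 bytes of a file and, if it carries the GGUF magic,
/// return the little-endian format version that follows the magic.
/// Returns Ok(None) when the file is not a GGUF container.
fn gguf_version(path: &str) -> std::io::Result<Option<u32>> {
    let mut f = File::open(path)?;
    let mut header = [0u8; 8];
    f.read_exact(&mut header)?;
    if &header[0..4] != b"GGUF" {
        return Ok(None); // wrong magic: not a GGUF file
    }
    // Bytes 4..8 hold the format version as little-endian u32.
    Ok(Some(u32::from_le_bytes([
        header[4], header[5], header[6], header[7],
    ])))
}

fn main() {
    // Hypothetical local path to a downloaded weight file.
    match gguf_version("model.gguf") {
        Ok(Some(v)) => println!("GGUF file, format version {v}"),
        Ok(None) => println!("not a GGUF file"),
        Err(e) => println!("could not read file: {e}"),
    }
}
```

Note that a valid GGUF header only confirms the container format; it does not guarantee the quantization inside is one that Candle (as opposed to llama.cpp) can load.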

Key Characteristics

  • Architecture: Based on the Mistral 7B model.
  • Parameter Count: 7 billion parameters.
  • Framework Compatibility: Exclusively designed for use with HuggingFace/Candle.
  • Configuration: Requires the config_chat_ml configuration from Candle's candle-transformers library for optimal performance.
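The `config_chat_ml` configuration reflects that dolphin-2.2.1 models are fine-tuned on the ChatML prompt template (`<|im_start|>` / `<|im_end|>` delimiters). The sketch below shows what a ChatML prompt for this model looks like; the `chatml_prompt` helper is a hypothetical illustration, not part of the candle-transformers API.

```rust
/// Build a ChatML-formatted prompt of the kind dolphin-2.2.1 models expect.
/// The trailing "<|im_start|>assistant\n" leaves the turn open so the model
/// generates the assistant's reply.
fn chatml_prompt(system: &str, user: &str) -> String {
    format!(
        "<|im_start|>system\n{system}<|im_end|>\n\
         <|im_start|>user\n{user}<|im_end|>\n\
         <|im_start|>assistant\n"
    )
}

fn main() {
    let prompt = chatml_prompt("You are a helpful assistant.", "Hello!");
    print!("{prompt}");
}
```

Feeding the model plain text without this template will still generate tokens, but responses typically degrade because the fine-tune assumes these delimiters.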

Intended Use

This model is suitable for developers and researchers working within the HuggingFace/Candle ecosystem who require a Mistral-based model. It offers a readily available, pre-packaged solution for integrating a 7B Mistral variant into Candle-powered applications, focusing on general language generation and understanding tasks.