bofenghuang/vigogne-2-7b-chat

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4K · Published: Jul 29, 2023 · License: llama2 · Architecture: Transformer · Open Weights

Vigogne-2-7B-Chat is a 7 billion parameter French chat language model developed by bofenghuang, based on the LLaMA-2 architecture. Optimized for generating helpful and coherent responses in French conversations, the model was trained on 520K chat examples, a significant portion of which were distilled from GPT-3.5-Turbo and GPT-4. It is designed for conversational AI applications requiring strong French language capabilities.


Vigogne-2-7B-Chat: A Llama-2-based French Chat LLM

Vigogne-2-7B-Chat is a 7 billion parameter language model specifically designed for French conversational AI. Built upon the LLaMA-2 architecture, this model has been fine-tuned to produce coherent and helpful responses in chat interactions. The latest V2.0 iteration was trained on an expanded dataset of 520,000 chat examples, an increase from the V1.0's 420,000.

Key Capabilities

  • French Chat Optimization: Tailored for generating natural and relevant responses in French conversations.
  • Llama-2 Foundation: Leverages the robust Llama-2-7B base model for strong language understanding and generation.
  • Instruction Following: Utilizes a specific prompt template with <user>: and <assistant>: prefixes to distinguish turns, ensuring effective instruction adherence.
  • Quantized Versions Available: Readily available in AWQ, GPTQ, and GGUF formats via TheBloke for efficient inference on various hardware, including CPU and GPU.
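As a rough illustration of the turn format described above, a minimal prompt builder might look like the sketch below. The helper name, the optional system preamble, and the exact whitespace around the `<user>:`/`<assistant>:` prefixes are assumptions for demonstration; consult the model card for the canonical template.

```python
# Minimal sketch: assemble a chat prompt using the <user>:/<assistant>:
# turn prefixes mentioned above. Spacing and the trailing assistant
# prefix are illustrative assumptions, not the model's official spec.

def build_prompt(messages, system=None):
    """Concatenate chat turns into a single prompt string.

    messages: list of {"role": "user" | "assistant", "content": str}
    system:   optional preamble placed before the first turn
    """
    parts = []
    if system:
        parts.append(system)
    for msg in messages:
        prefix = "<user>:" if msg["role"] == "user" else "<assistant>:"
        parts.append(f"{prefix} {msg['content']}")
    # End with the assistant prefix so generation continues as the assistant.
    parts.append("<assistant>:")
    return "\n".join(parts)

prompt = build_prompt([{"role": "user", "content": "Bonjour, comment vas-tu ?"}])
print(prompt)
```

The resulting string can then be tokenized and passed to any of the model's published formats (FP16, AWQ, GPTQ, or GGUF) for generation.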

Good For

  • French-speaking Chatbots: Ideal for developing conversational agents that interact in French.
  • Language Practice: Can be used in applications for French language learning and practice.
  • Research and Development: Provides a strong base for further fine-tuning or experimentation with French LLMs.

Limitations

As a model still under active development, it may occasionally generate harmful, biased, or incorrect information. Users should exercise caution, especially since a portion of its training data was distilled from OpenAI models, which requires adherence to OpenAI's terms of use.