maywell/Synatra-kiqu-10.7B

Text generation · Model size: 10.7B · Quantization: FP8 · Context length: 4k · License: cc-by-sa-4.0 · Architecture: Transformer · Concurrency cost: 1 · Open weights

maywell/Synatra-kiqu-10.7B is a 10.7-billion-parameter causal language model developed by maywell, fine-tuned from Synatra-10.7B-v0.4. It is instruction-tuned on the Alpaca prompt format and was trained with GPU resources provided by Sionic AI. The model is designed for general language generation and follows a fixed instruction format for conversational interactions.


Model Overview

maywell/Synatra-kiqu-10.7B is a 10.7-billion-parameter language model built on the maywell/Synatra-10.7B-v0.4 base model. It has been instruction-tuned to follow the Alpaca instruction format, making it suitable for conversational and instruction-following tasks. Training was supported by Sionic AI, which provided eight A100 80GB GPUs.

Key Characteristics

  • Base Model: Derived from maywell/Synatra-10.7B-v0.4.
  • Instruction Format: Adheres to the Alpaca instruction format for user prompts and responses (see the template sketch after this list).
  • Training Resources: Trained on eight A100 80GB GPUs provided by Sionic AI.
  • License: Distributed under the cc-by-sa-4.0 license.
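
For reference, the widely used Alpaca format wraps each exchange roughly as shown below. This is the standard community template, not a verbatim quote of this model's tokenizer config, and the template baked into the tokenizer may differ in minor details.

```
### Instruction:
{user prompt}

### Response:
{model response}
```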

Usage and Implementation

Developers can integrate Synatra-kiqu-10.7B using the Hugging Face transformers library. The model's chat_template is pre-configured for the Alpaca instruction format, so chat messages can be formatted with a single call rather than by hand. The typical workflow is to load the model and tokenizer, apply the chat template to the conversation, and generate a response on a CUDA-enabled device; a sketch of that flow follows.
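
A minimal sketch of that workflow using the standard transformers APIs (AutoTokenizer, AutoModelForCausalLM, apply_chat_template). The model id comes from this card; the dtype, device placement, and generation parameters below are illustrative assumptions, not values specified by the author.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maywell/Synatra-kiqu-10.7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumed dtype; adjust to your hardware
    device_map="auto",          # place weights on available GPU(s)
)

# The tokenizer's chat_template renders these messages in the Alpaca
# instruction format described in this card.
messages = [
    {"role": "user", "content": "Explain instruction tuning in one paragraph."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,   # illustrative generation settings
    do_sample=True,
    temperature=0.7,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```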

Benchmarking

Benchmark results for Synatra-kiqu-10.7B are not yet available; the author lists them as TBD.