maywell/Synatra-Zephyr-7B-v0.01

TEXT GENERATION

  • Concurrency Cost: 1
  • Model Size: 7B
  • Quant: FP8
  • Ctx Length: 8k
  • License: cc-by-nc-4.0
  • Architecture: Transformer
  • Weights: Open

Synatra-Zephyr-7B-v0.01 is an early development version of a 7 billion parameter causal language model based on Mistral-7B-Instruct-v0.1. Developed by maywell, this model is currently being benchmarked on the Ko-LLM-Leaderboard. It is licensed for non-commercial use only; commercial licensing may be available by contacting the developer.


Synatra-Zephyr-7B-v0.01 Overview

This model, Synatra-Zephyr-7B-v0.01, represents a very early development stage of the Synatra-Zephyr-7B series. It is built on the Mistral-7B-Instruct-v0.1 base model and was trained on four A100 80GB GPUs. As a personal project by maywell, its development is ongoing, with current efforts focused on benchmarking its performance on the Ko-LLM-Leaderboard.

Key Characteristics

  • Base Model: Derived from mistralai/Mistral-7B-Instruct-v0.1.
  • Development Stage: This is an initial, experimental version under active development.
  • Training Environment: Trained on four A100 80GB GPUs.
  • Licensing: Strictly for non-commercial use only under the cc-by-nc-4.0 license. Commercial use requires direct contact with the developer.

Current Status

  • Benchmarking: Performance evaluation is currently underway on the Ko-LLM-Leaderboard.
  • Support: The project is maintained by a single developer, with options for research funding or sponsorship available.

Usage

The model includes a chat template for easy integration. Inference with the transformers library consists of applying the chat template to a list of messages, generating a continuation, and decoding the new tokens.
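A minimal sketch of that flow, assuming the model follows the standard Hugging Face chat-template API (the prompt text and generation settings here are illustrative, not taken from the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maywell/Synatra-Zephyr-7B-v0.01"

# Load tokenizer and model; device_map="auto" places weights on available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat prompt with the model's bundled chat template.
messages = [{"role": "user", "content": "Hello, what can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sample a response and decode only the newly generated tokens.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Loading the model requires roughly 16 GB of GPU memory in fp16; smaller setups can pass a quantization config to `from_pretrained` instead.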

Popular Sampler Settings

The top three parameter combinations used by Featherless users for this model cover the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
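These settings are typically sent alongside a chat request. A minimal sketch of such a request payload, assuming an OpenAI-compatible chat completions endpoint (the endpoint URL and the specific values below are illustrative assumptions, not taken from the card):

```python
import json
import urllib.request

# Illustrative sampler values; the card's top configs carry the real numbers.
payload = {
    "model": "maywell/Synatra-Zephyr-7B-v0.01",
    "messages": [{"role": "user", "content": "Introduce yourself briefly."}],
    "temperature": 0.7,         # randomness of sampling
    "top_p": 0.9,               # nucleus sampling cutoff
    "top_k": 40,                # restrict to the k most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they appeared
    "presence_penalty": 0.0,    # penalize tokens that appeared at all
    "repetition_penalty": 1.1,  # multiplicative penalty on repeats
    "min_p": 0.05,              # drop tokens below this relative probability
}

# Assumed endpoint; substitute your provider's URL and API key.
req = urllib.request.Request(
    "https://api.featherless.ai/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
)
# response = urllib.request.urlopen(req)  # uncomment with a valid key
```

Lower temperature and min_p make outputs more deterministic; the repetition-oriented penalties mainly matter for long-form generation.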