Lexic0n/Mistral_7B-Open_Hermes-NSFWV1

Text generation | Concurrency cost: 1 | Model size: 7B | Quant: FP8 | Context length: 4k | Published: Apr 6, 2024 | License: apache-2.0 | Architecture: Transformer

Lexic0n/Mistral_7B-Open_Hermes-NSFWV1 is a 7-billion-parameter Mistral-based language model fine-tuned from teknium/OpenHermes-2p5-Mistral-7B. Developed by Lexic0n, it targets English-language tasks and supports a context length of 4096 tokens. Its primary differentiator is this fine-tune, which suggests specialized applications beyond the base model's general-purpose capabilities.


Model Overview

Lexic0n/Mistral_7B-Open_Hermes-NSFWV1 is a 7-billion-parameter language model built on the Mistral architecture. Developed by Lexic0n, it is a fine-tuned version of teknium/OpenHermes-2p5-Mistral-7B, intended for English-language processing tasks and released under the Apache-2.0 license.

Key Characteristics

  • Base Model: Fine-tuned from teknium/OpenHermes-2p5-Mistral-7B.
  • Architecture: Mistral 7B.
  • Language Support: Primarily English.
  • Context Length: Supports a context window of 4096 tokens.
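The base model, teknium/OpenHermes-2p5-Mistral-7B, is trained on the ChatML prompt format. Assuming this fine-tune inherits that format (the model card does not confirm it), a prompt can be assembled as in the following sketch; the system message and user turn shown are illustrative:

```python
# Minimal sketch of ChatML prompt assembly, the format used by the
# OpenHermes-2.5 base model. Assumes this fine-tune inherits it.

def build_chatml_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user turn in ChatML delimiters,
    leaving the prompt open at the assistant turn for generation."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize the Mistral 7B architecture in one sentence.",
)
print(prompt)
```

In practice the same result is usually obtained from the tokenizer's built-in chat template, when one ships with the model; the manual version above just makes the delimiter structure explicit.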

Intended Use

Specific direct and downstream uses are not detailed in the available documentation, but as a fine-tuned model it is expected to perform best in the specialized applications its training targets. Users should be aware of potential biases, risks, and limitations; further information on these aspects is pending. The model's name indicates a focus on specific content generation, differentiating it from general-purpose LLMs.
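Whatever the downstream use, inputs must respect the 4096-token context window: the prompt plus the generation budget cannot exceed it. A minimal sketch of that bookkeeping follows; the token counts and reserve value are illustrative, not taken from the model card:

```python
# Sketch: fit a tokenized prompt into the model's 4096-token context,
# reserving room for the tokens to be generated. Values are illustrative.

CTX_LEN = 4096  # context window reported for this model

def truncate_for_generation(token_ids: list[int], max_new_tokens: int) -> list[int]:
    """Keep only the most recent prompt tokens that leave room
    for max_new_tokens of generation within the context window."""
    budget = CTX_LEN - max_new_tokens
    if budget <= 0:
        raise ValueError("max_new_tokens exceeds the context window")
    return token_ids[-budget:]

ids = list(range(5000))  # pretend tokenized prompt of 5000 tokens
fitted = truncate_for_generation(ids, max_new_tokens=512)
print(len(fitted))  # 4096 - 512 = 3584
```

Keeping the most recent tokens (rather than the earliest) is the usual choice for chat-style prompts, since the latest turns matter most; other truncation strategies are equally valid.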