AiHub4MSRH-Hash/hash-Meditron-7B-16bit-eng-text

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4k · Published: Feb 18, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The AiHub4MSRH-Hash/hash-Meditron-7B-16bit-eng-text is a 7-billion-parameter Llama-based model developed by AiHub4MSRH-Hash and fine-tuned from epfl-llm/meditron-7b. It was trained using Unsloth and Hugging Face's TRL library, which the authors report gave 2x faster training. With a 4096-token context length, it is optimized for English text generation tasks.


Model Overview

The AiHub4MSRH-Hash/hash-Meditron-7B-16bit-eng-text is a 7-billion-parameter language model developed by AiHub4MSRH-Hash. It is Llama-based, fine-tuned from the epfl-llm/meditron-7b base model. Training emphasized efficiency, using Unsloth and Hugging Face's TRL library for a reported 2x speedup.
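
As a quick-start illustration, the snippet below loads the model through the standard transformers API. It is a minimal sketch assuming the checkpoint follows the usual Llama layout and is downloadable under the repository id shown on this card; the medical prompt is purely illustrative.

```python
# Minimal inference sketch via Hugging Face transformers; assumes the
# repository id below resolves to a standard Llama-architecture checkpoint
# stored in 16-bit weights, as the model name suggests.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "AiHub4MSRH-Hash/hash-Meditron-7B-16bit-eng-text"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # 16-bit weights per the model name
    device_map="auto",          # spread layers across available devices
)

# Illustrative prompt; the Meditron lineage suggests medical text.
prompt = "List common symptoms of iron-deficiency anemia."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```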

Key Characteristics

  • Parameter Count: 7 billion parameters, offering a balance between performance and computational requirements.
  • Base Model: Fine-tuned from epfl-llm/meditron-7b, a medical-domain adaptation of Llama 2, so the model likely inherits medical-domain knowledge from its base.
  • Training Efficiency: Leverages Unsloth for accelerated training, making it a potentially cost-effective and faster-to-deploy option for certain applications; a hedged sketch of this style of fine-tuning recipe follows this list.
  • Context Length: Supports a context window of 4096 tokens, suitable for processing moderately long inputs and generating coherent responses.
  • License: Distributed under the Apache-2.0 license, allowing for broad use and modification.
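
Since the card credits Unsloth and TRL for the 2x training speedup, the sketch below reconstructs what such a fine-tuning loop typically looks like. It is a hypothetical recipe, not the authors' published configuration: the dataset file, LoRA settings, and trainer hyperparameters are all assumptions, and TRL's SFTTrainer argument names vary between versions.

```python
# Hypothetical Unsloth + TRL fine-tuning loop; placeholder dataset and
# hyperparameters, not the authors' actual training configuration.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model named in the card, capped at the advertised context.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="epfl-llm/meditron-7b",
    max_seq_length=4096,
    load_in_4bit=True,  # Unsloth's memory-saving quantized loading
)

# Attach LoRA adapters; Unsloth patches these modules for faster training.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder corpus: a JSONL file with a "text" field per example.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        logging_steps=10,
    ),
)
trainer.train()
```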

Intended Use Cases

This model is well-suited to English text generation and understanding tasks where a 7B-parameter model with a 4096-token context is appropriate. Its efficient training methodology makes it a reasonable candidate for applications requiring rapid iteration or deployment in resource-constrained environments.
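
When deploying against the 4096-token window, long inputs need to be budgeted so that the prompt plus generated tokens fit within the context. The helper below is an illustrative guard, assuming the repository id from this card and an arbitrary 256-token generation budget.

```python
# Illustrative guard for the 4096-token context: truncate the prompt so
# prompt tokens plus a generation budget stay within the window. The
# repository id comes from this card; the 256-token budget is arbitrary.
from transformers import AutoTokenizer

MAX_CTX = 4096   # advertised context length
MAX_NEW = 256    # assumed generation budget

tokenizer = AutoTokenizer.from_pretrained(
    "AiHub4MSRH-Hash/hash-Meditron-7B-16bit-eng-text"
)

def fit_prompt(text: str) -> str:
    """Trim `text` so it leaves room for MAX_NEW generated tokens."""
    ids = tokenizer(
        text, truncation=True, max_length=MAX_CTX - MAX_NEW
    )["input_ids"]
    return tokenizer.decode(ids, skip_special_tokens=True)
```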