TrialPanorama/LLaMA-3-8B-TP

Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: BF16 · Context length: 32k · Published: Dec 25, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

TrialPanorama/LLaMA-3-8B-TP is an 8 billion parameter language model fine-tuned from Meta-Llama-3-8B-Instruct. Developed by TrialPanorama, it specializes in clinical trial applications, particularly sample size estimation. The model is trained with a two-stage process, Supervised Fine-Tuning (SFT) followed by Reinforcement Learning with Verifiable Reward (RLVR), to inject domain-specific knowledge and improve task performance.


Overview

TrialPanorama/LLaMA-3-8B-TP is a specialized language model, fine-tuned from the Meta-Llama-3-8B-Instruct base model, designed for applications within clinical research. Its 8 billion parameter architecture and 32,768 token (32k) context length let it process long clinical trial documents and generate domain-relevant output.

Key Capabilities

  • Clinical Trial Specialization: The model is specifically trained on the TrialPanorama dataset, which comprises one million clinical trials, making it highly proficient in this domain.
  • Sample Size Estimation: A primary application is estimating required sample sizes for clinical trials, providing both the estimate and reasoning.
  • Two-Stage Fine-tuning: Its development combined two stages:
    • Supervised Fine-Tuning (SFT): injects domain-specific knowledge from the clinical trial corpus.
    • Reinforcement Learning with Verifiable Reward (RLVR): refines outputs using rewards that can be checked automatically.
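For context on what a sample size estimate involves, the sketch below implements the standard two-proportion sample size formula (normal approximation). This is a generic statistical illustration of the kind of calculation the model's estimates correspond to, not TrialPanorama's method; the function name and defaults are assumptions.

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p1: float, p2: float,
                                alpha: float = 0.05,
                                power: float = 0.80) -> int:
    """Per-arm sample size for detecting a difference between two
    response proportions with a two-sided test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return ceil(n)

# Example: control response 30%, treatment response 45%,
# two-sided alpha = 0.05, power = 80%
print(sample_size_two_proportions(0.30, 0.45))  # → 160 per arm
```

A model specialized for this task would additionally justify the assumed effect size and significance thresholds in its reasoning.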

Good For

  • Researchers and professionals in clinical trials needing assistance with sample size estimation.
  • Applications requiring domain-specific knowledge in clinical research.
  • Integration into systems that benefit from a specialized LLM for medical and pharmaceutical contexts.
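As a rough sketch of integrating the model into such a system, the snippet below assembles a plain-text sample size query. The model ID comes from this card, but the prompt wording and the commented-out generation call are assumptions; check the repository for the exact prompt template the fine-tune expects.

```python
# Hypothetical prompt builder for a sample size estimation query.
# The field names and instruction wording are illustrative assumptions.
def build_sample_size_prompt(condition: str, design: str,
                             primary_endpoint: str) -> str:
    """Assemble a plain-text prompt asking for an estimate plus reasoning."""
    return (
        "You are a clinical trial design assistant.\n"
        f"Condition: {condition}\n"
        f"Design: {design}\n"
        f"Primary endpoint: {primary_endpoint}\n"
        "Estimate the required sample size and explain your reasoning."
    )

prompt = build_sample_size_prompt(
    condition="type 2 diabetes",
    design="randomized, double-blind, placebo-controlled, two arms",
    primary_endpoint="change in HbA1c at 24 weeks",
)
print(prompt)

# With Hugging Face transformers installed, the prompt could then be sent
# to the model roughly like this (not run here; downloads the weights):
#
# from transformers import pipeline
# generator = pipeline("text-generation",
#                      model="TrialPanorama/LLaMA-3-8B-TP",
#                      torch_dtype="bfloat16")
# print(generator(prompt, max_new_tokens=512)[0]["generated_text"])
```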