UMCU/MedLlama.nl

Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 28, 2026 · License: GPL-3.0 · Architecture: Transformer · Open Weights · Cold

UMCU/MedLlama.nl is a 1-billion-parameter instruction-tuned causal language model, continually pre-trained on a generic Dutch medical corpus. The model is optimized for understanding and generating text in the Dutch medical domain, making it suitable for applications that require specialized medical language processing in Dutch. It supports a 32,768-token context length and was trained with a focus on domain adaptation.


UMCU/MedLlama.nl: Domain-Adapted Dutch Medical LLM

UMCU/MedLlama.nl is a 1-billion-parameter language model based on Llama-3.2-1B-Instruct that has undergone domain-adaptive pre-training (DAPT), also known as continued pre-training (CPT), on a generic Dutch medical corpus. This specialized training enhances the model's ability to understand and generate medical text in Dutch.
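Continued pre-training applies the same next-token objective as the original pre-training, only on the domain corpus. A minimal data-preparation sketch of the usual block-packing step (the block size and token values are illustrative, not from the actual training setup):

```python
from typing import List

def chunk_for_causal_lm(token_ids: List[int], block_size: int) -> List[List[int]]:
    """Pack a tokenized domain corpus into fixed-length blocks for
    continued pre-training. The trailing remainder is dropped, as is
    common practice; labels equal the inputs (the causal shift happens
    inside the model's loss computation)."""
    n_blocks = len(token_ids) // block_size
    return [token_ids[i * block_size:(i + 1) * block_size]
            for i in range(n_blocks)]

# Example: a 10-token stream packed into blocks of 4 yields 2 full blocks.
blocks = chunk_for_causal_lm(list(range(10)), 4)
# → [[0, 1, 2, 3], [4, 5, 6, 7]]
```

In practice this chunking is applied over the concatenated corpus (here it would be chunks of up to 32,768 tokens, the model's context length), and the resulting blocks feed a standard causal-LM training loop.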

Key Capabilities

  • Specialized Dutch Medical Language Processing: Optimized for tasks requiring deep comprehension and generation of medical terminology and concepts in Dutch.
  • Domain Adaptation: Benefits from continued pre-training on a relevant medical dataset, improving its relevance and accuracy for healthcare-specific applications.
  • Efficient Size: At 1 billion parameters, it offers a balance between performance and computational efficiency for domain-specific tasks.

Good for

  • Medical Text Generation: Creating summaries, reports, or responses based on Dutch medical information.
  • Clinical Decision Support Systems: Assisting healthcare professionals with information retrieval and analysis in Dutch medical contexts.
  • Research in Dutch Medical NLP: A foundational model for further fine-tuning or research into natural language processing within the Dutch medical domain.
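A minimal inference sketch for the use cases above, assuming the weights are published on the Hugging Face Hub under the `UMCU/MedLlama.nl` identifier and that the model keeps the standard Llama-3.2 chat template (both assumptions; check the actual repository):

```python
# Hypothetical quickstart for UMCU/MedLlama.nl; repo id and chat
# template are assumed, not confirmed by this model card.
from typing import Dict, List

MODEL_ID = "UMCU/MedLlama.nl"  # assumed Hub identifier

def build_messages(question: str) -> List[Dict[str, str]]:
    """Build a chat message list for a Dutch medical question."""
    return [
        {"role": "system",
         "content": "Je bent een behulpzame Nederlandstalige medische assistent."},
        {"role": "user", "content": question},
    ]

def main() -> None:
    # Heavy imports live inside main() so the helper above stays
    # importable without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )

    messages = build_messages(
        "Wat zijn veelvoorkomende symptomen van diabetes type 2?"
    )
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=256, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(output[0][inputs.shape[-1]:],
                           skip_special_tokens=True))

if __name__ == "__main__":
    main()
```

At BF16 the 1B model fits comfortably on a single consumer GPU or even CPU; greedy decoding is used here for reproducibility, but sampling parameters can be adjusted for generation tasks.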