StanfordAIMI/RadLLaMA-7b
Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Jan 20, 2024 · License: llama2 · Architecture: Transformer · Open weights
StanfordAIMI/RadLLaMA-7b is a 7-billion-parameter causal language model developed by Stanford AIMI as a foundation model for radiology applications. It is part of the AIMI FMs collection, which focuses on specialized medical language understanding and generation. Its primary differentiator is domain-specific training, which makes it well suited to tasks that require deep knowledge of radiology terminology and concepts.
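As a sketch of how such a causal language model might be used, the snippet below loads the model with the standard Hugging Face `transformers` API and generates text from a radiology-style prompt. Only the repo id `StanfordAIMI/RadLLaMA-7b` comes from this card; the prompt template and generation settings are illustrative assumptions, since the card does not document a specific input format.

```python
def build_prompt(finding: str) -> str:
    """Wrap a radiology finding in a simple instruction-style prompt.

    This template is a hypothetical example; RadLLaMA's expected
    prompt format is not specified on the card.
    """
    return (
        "### Instruction:\nSummarize the radiology finding.\n\n"
        f"### Input:\n{finding}\n\n### Response:\n"
    )


def generate_impression(finding: str, max_new_tokens: int = 128) -> str:
    """Generate a completion with RadLLaMA-7b via transformers.

    Imports are kept local so the prompt helper above can be used
    without torch/transformers installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("StanfordAIMI/RadLLaMA-7b")
    model = AutoModelForCausalLM.from_pretrained("StanfordAIMI/RadLLaMA-7b")

    inputs = tokenizer(build_prompt(finding), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


prompt = build_prompt("Mild cardiomegaly without pleural effusion.")
```

Note that loading the 7B model requires downloading the weights and enough memory to hold them; the prompt helper alone has no such dependency.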