presencesw/Llama-3.2-1B-Instruct_MED_NLI
TEXT GENERATION
Concurrency Cost: 1 | Model Size: 1B | Quant: BF16 | Ctx Length: 32k | Published: Apr 12, 2025 | License: llama3.2 | Architecture: Transformer | Gated | Warm
presencesw/Llama-3.2-1B-Instruct_MED_NLI is a 1 billion parameter instruction-tuned language model, fine-tuned from Meta's Llama-3.2-1B-Instruct. It is adapted specifically for medical Natural Language Inference (NLI), using a zero-shot NLI dataset for its specialization, and reports a validation loss of 0.0173 on that task. Its primary use case is medical NLI applications where inferring the relationship between two medical texts (e.g., entailment, contradiction, or neutrality) is crucial.
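As a sketch of how such an NLI fine-tune might be queried, the snippet below formats a premise/hypothesis pair as chat messages for an instruction-tuned model. The exact prompt template this fine-tune was trained with is not documented here, so the system and user wording are assumptions, not the model's required format.

```python
def build_nli_messages(premise: str, hypothesis: str) -> list[dict]:
    """Build a chat-style message list asking an instruction-tuned model to
    classify a premise/hypothesis pair. The wording is a hypothetical
    template, not the one this fine-tune was trained on."""
    system = (
        "You are a medical natural language inference assistant. "
        "Answer with exactly one word: entailment, contradiction, or neutral."
    )
    user = f"Premise: {premise}\nHypothesis: {hypothesis}\nLabel:"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Example pair: the premise and hypothesis conflict, so a well-adapted
# NLI model would be expected to answer "contradiction".
messages = build_nli_messages(
    "The patient was afebrile on admission.",
    "The patient had a fever when admitted.",
)
```

The resulting message list can be passed to any chat-completion interface that serves this model.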
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.

| Parameter | Value |
|---|---|
| temperature | – |
| top_p | – |
| top_k | – |
| frequency_penalty | – |
| presence_penalty | – |
| repetition_penalty | – |
| min_p | – |
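The sampler parameters listed above map directly onto the fields of an OpenAI-style chat completion request. The helper below is a minimal sketch of assembling such a payload; the parameter whitelist and example values are assumptions (no values are published above), and the actual endpoint and accepted fields should be checked against the Featherless API documentation.

```python
def build_payload(model: str, messages: list, **samplers) -> dict:
    """Assemble an OpenAI-style chat completion payload, accepting only
    the sampler parameters named in the settings list above."""
    allowed = {
        "temperature", "top_p", "top_k", "frequency_penalty",
        "presence_penalty", "repetition_penalty", "min_p",
    }
    unknown = set(samplers) - allowed
    if unknown:
        raise ValueError(f"unsupported sampler parameters: {sorted(unknown)}")
    return {"model": model, "messages": messages, **samplers}

# Illustrative values only; the popular combinations for this model
# are not shown above.
payload = build_payload(
    "presencesw/Llama-3.2-1B-Instruct_MED_NLI",
    [{"role": "user", "content": "Premise: ...\nHypothesis: ...\nLabel:"}],
    temperature=0.7,
    top_p=0.9,
)
```

The payload dictionary can then be serialized to JSON and sent to any OpenAI-compatible chat completions endpoint serving this model.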