Motasem7/BioThoughts-DeepSeek-8B
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · License: MIT · Architecture: Transformer · Open weights

Motasem7/BioThoughts-DeepSeek-8B is an 8-billion-parameter language model fine-tuned from deepseek-ai/DeepSeek-R1-Distill-Llama-8B, with a 32,768-token context length. It is a specialized adaptation of the DeepSeek-R1-Distill-Llama architecture. Its fine-tuning dataset and primary differentiators are not detailed in the available information, which suggests general-purpose use in line with its base model.
