lpetreadg/Llama-3-8B-merged-2-bf16
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 8k · Architecture: Transformer · Status: Warm

lpetreadg/Llama-3-8B-merged-2-bf16 is an 8 billion parameter language model, likely based on the Llama 3 architecture, that has been merged and converted to bf16 precision. This model is suitable for general-purpose natural language understanding and generation tasks, offering a balance between performance and computational efficiency. Its bf16 precision makes it optimized for deployment on hardware that supports this format, potentially leading to faster inference and reduced memory footprint.


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model. Each config sets the following sampler parameters:

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
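To illustrate how the sampler parameters above fit together, here is a minimal sketch that assembles them into a chat-completions request body. This assumes an OpenAI-compatible API schema; `top_k`, `repetition_penalty`, and `min_p` are common server-side extensions rather than core OpenAI fields, and the default values shown are illustrative, not this model's recommended settings.

```python
import json

def build_payload(prompt,
                  temperature=0.7,
                  top_p=0.9,
                  top_k=40,
                  frequency_penalty=0.0,
                  presence_penalty=0.0,
                  repetition_penalty=1.1,
                  min_p=0.05):
    """Assemble a chat-completions request body using the sampler
    parameters listed above. top_k, repetition_penalty, and min_p
    are extensions supported by many inference servers, not part of
    the core OpenAI schema."""
    return {
        "model": "lpetreadg/Llama-3-8B-merged-2-bf16",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "top_p": top_p,
        "top_k": top_k,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
        "repetition_penalty": repetition_penalty,
        "min_p": min_p,
        "max_tokens": 256,
    }

payload = build_payload("Summarize bf16 vs fp8 in one sentence.")
print(json.dumps(payload, indent=2))
```

The resulting dictionary can be POSTed as JSON to any endpoint that accepts this schema; servers that don't recognize the extension fields will typically either ignore them or reject the request, so check the provider's documentation before including them.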