rombodawg/rombos_Replete-Coder-Instruct-8b-Merged
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 8k · Published: Oct 6, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights
rombodawg/rombos_Replete-Coder-Instruct-8b-Merged is an 8-billion-parameter instruction-tuned language model created by rombodawg. It is a TIES merge of Meta-Llama-3-8B-Instruct and Replete-AI/Llama3-8B-Instruct-Replete-Adapted, optimized for coding while retaining general performance, and it improves over both of its base models. It has an 8192-token (8k) context length and is suited to tasks requiring strong coding capabilities.
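Since this merge is based on Meta-Llama-3-8B-Instruct, it presumably inherits the Llama 3 Instruct chat template. The sketch below shows what a single-turn prompt in that format looks like; the helper `format_llama3_prompt` is hypothetical, and in practice the tokenizer's `apply_chat_template` method handles this for you.

```python
# Sketch of the Llama 3 Instruct prompt format (assumed to apply to this
# merge via its Meta-Llama-3-8B-Instruct base). format_llama3_prompt is a
# hypothetical helper for illustration only.

def format_llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn Llama 3 Instruct prompt string."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The prompt ends with an open assistant header so the model
        # generates the assistant's reply next.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
)
print(prompt)
```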
Popular Sampler Settings
The top 3 parameter combinations used by Featherless users for this model cover the following sampler parameters: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p.
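As a minimal sketch, the sampler parameters listed above would typically be passed alongside the model name in an OpenAI-style chat-completion request. The `build_request` helper below is hypothetical, and the exact set of parameters Featherless accepts is an assumption; consult the provider's API documentation for specifics.

```python
# Hypothetical sketch: assembling an OpenAI-style chat-completion payload
# with the sampler parameters named in the model card. Parameter support
# is assumed, not confirmed by the source.
import json


def build_request(prompt: str, **samplers: float) -> dict:
    """Assemble a chat-completion payload with optional sampler settings."""
    allowed = {
        "temperature", "top_p", "top_k", "frequency_penalty",
        "presence_penalty", "repetition_penalty", "min_p",
    }
    unknown = set(samplers) - allowed
    if unknown:
        raise ValueError(f"unsupported sampler parameters: {sorted(unknown)}")
    return {
        "model": "rombodawg/rombos_Replete-Coder-Instruct-8b-Merged",
        "messages": [{"role": "user", "content": prompt}],
        **samplers,
    }


payload = build_request(
    "Explain Python list comprehensions.",
    temperature=0.7,
    top_p=0.9,
    min_p=0.05,
)
print(json.dumps(payload, indent=2))
```

The whitelist check mirrors the parameter names from the settings panel, so a typo in a sampler name fails fast instead of being silently ignored by the server.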