jambroz/sixtyoneeighty-7b
Type: Text generation
Model size: 7B
Quant: FP8
Context length: 8K
Concurrency cost: 1
Published: Apr 5, 2024
License: apache-2.0
Architecture: Transformer
jambroz/sixtyoneeighty-7b is a 7-billion-parameter language model created by jambroz, formed by merging several pre-trained models with the DARE TIES method. It uses mlabonne/NeuralBeagle14-7B as its base and integrates capabilities from Intel/neural-chat-7b-v3-1, mlabonne/AlphaMonarch-7B, and HuggingFaceH4/zephyr-7b-beta. The merge aims to combine the strengths of its constituent models, offering a versatile foundation for a range of natural language processing tasks.
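For readers unfamiliar with the merge method, here is a minimal toy sketch of the DARE TIES idea on plain tensors: each fine-tuned model contributes a delta against the base ("task vector"), DARE randomly drops most delta entries and rescales the survivors, and TIES keeps only deltas that agree with the elected majority sign. This is an illustration only, not the mergekit implementation actually used to build this model; the function name, drop probability, and weights are assumptions.

```python
import torch

def dare_ties_merge(base, finetuned_list, drop_p=0.9, weights=None):
    """Toy DARE TIES merge of parameter tensors (illustrative sketch)."""
    weights = weights or [1.0] * len(finetuned_list)
    sparse_deltas = []
    for ft, w in zip(finetuned_list, weights):
        delta = ft - base                         # task vector vs. the base model
        keep = torch.rand_like(delta) >= drop_p   # DARE: drop random entries...
        delta = delta * keep / (1.0 - drop_p)     # ...and rescale the survivors
        sparse_deltas.append(w * delta)
    stacked = torch.stack(sparse_deltas)
    # TIES: elect a majority sign per parameter, keep only agreeing deltas
    sign = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == sign
    merged_delta = torch.where(agree, stacked, torch.zeros_like(stacked)).sum(dim=0)
    return base + merged_delta

# Tiny usage example on made-up tensors
base = torch.zeros(4)
fts = [torch.tensor([0.5, -0.2, 0.1, 0.0]),
       torch.tensor([0.4, 0.3, -0.1, 0.2])]
print(dare_ties_merge(base, fts, drop_p=0.5))
```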
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model. Each configuration tunes the following sampler parameters (an example API call using them follows the list):

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
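Since Featherless exposes an OpenAI-compatible API, all seven parameters can be set from the standard openai Python client. The sketch below is a hedged example, not one of the actual top-3 configurations: the parameter values are illustrative, the base URL is assumed to be Featherless's OpenAI-compatible endpoint, and the extra_body keys for the non-standard samplers (top_k, min_p, repetition_penalty) are assumed names.

```python
from openai import OpenAI

# Base URL assumed; replace the key with your own Featherless API key.
client = OpenAI(
    base_url="https://api.featherless.ai/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="jambroz/sixtyoneeighty-7b",
    messages=[{"role": "user", "content": "Summarize DARE TIES merging."}],
    # Standard OpenAI-style sampler parameters (values illustrative)
    temperature=0.8,
    top_p=0.95,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard samplers passed through extra_body (key names assumed)
    extra_body={"top_k": 40, "min_p": 0.05, "repetition_penalty": 1.1},
)
print(response.choices[0].message.content)
```

Lower temperature and higher min_p generally make output more deterministic, while the penalty parameters discourage verbatim repetition; which combination works best depends on the task.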