alnrg2arg/blockchainlabs_7B_merged_test2_4_prune
Text Generation · Model Size: 7B · Quant: FP8 · Context Length: 8k · Published: Jan 18, 2024 · License: cc-by-nc-4.0 · Architecture: Transformer · Concurrency Cost: 1 · Open Weights

alnrg2arg/blockchainlabs_7B_merged_test2_4_prune is a 7-billion-parameter pruned language model based on the Mistral architecture, produced by merging mlabonne/NeuralBeagle14-7B and udkai/Turdus and then applying the wanda pruning technique, which removes low-importance weights to shrink the network while preserving the capabilities of the merged base models. It is intended for general language tasks and supports an 8192-token context length.
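Because the model follows the standard Mistral/Transformer layout, it can be loaded locally with Hugging Face transformers in the usual way. A minimal sketch, assuming the repo id above resolves on the Hugging Face Hub and you have enough GPU memory for FP16 weights (the dtype and device settings are illustrative defaults, not requirements):

```python
# Minimal sketch: loading the checkpoint with Hugging Face transformers.
# Assumes the Hub repo id matches the model name shown on this page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "alnrg2arg/blockchainlabs_7B_merged_test2_4_prune"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # hosted copy is served at FP8; FP16 is a safe local default
    device_map="auto",          # requires the accelerate package
)

prompt = "Explain weight pruning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```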


Popular Sampler Settings

The three most popular sampler parameter combinations among Featherless users for this model tune the parameters listed below; a sketch of passing such a configuration through the API follows the list.

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
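For reference, here is how a configuration of these parameters might be sent through an OpenAI-compatible client. The base URL below assumes Featherless exposes an OpenAI-compatible endpoint at api.featherless.ai/v1, and every parameter value is an illustrative assumption, not one of the actual top configurations; top_k, repetition_penalty, and min_p are not part of the OpenAI request schema, so they are passed via extra_body:

```python
# Hedged sketch: one possible sampler configuration sent through an
# OpenAI-compatible client. Endpoint URL and all values are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",
)

response = client.chat.completions.create(
    model="alnrg2arg/blockchainlabs_7B_merged_test2_4_prune",
    messages=[{"role": "user", "content": "Write a haiku about pruning."}],
    temperature=0.7,         # illustrative values only
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={             # samplers outside the OpenAI schema ride in extra_body
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```

Whether the backend honors the extra_body samplers depends on the serving stack; values that the server does not recognize are typically ignored rather than rejected.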