varox34/Bio-Saul-Dolphin-Beagle-Breadcrumbs
Text generation · Concurrency cost: 1 · Published: Jul 4, 2024 · License: MIT · Open weights · Status: Warm

varox34/Bio-Saul-Dolphin-Beagle-Breadcrumbs is a 7-billion-parameter language model merged with the breadcrumbs method, using mlabonne/NeuralBeagle14-7B as the base. It integrates cognitivecomputations/dolphin-2.6-mistral-7b, Equall/Saul-Instruct-v1, and BioMistral/BioMistral-7B-SLERP, aiming to combine the strengths of its constituent models for tasks that benefit from a blend of general instruction following, legal, and biomedical knowledge. Its 8,192-token context length supports processing longer inputs for specialized applications.
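For local use, the merged weights can be loaded like any Hugging Face causal LM. The sketch below is a minimal example using the transformers library; the repository id comes from this page, while the dtype, prompt, and generation settings are illustrative assumptions rather than recommendations from the model card.

```python
# Minimal sketch: run Bio-Saul-Dolphin-Beagle-Breadcrumbs locally with transformers.
# Assumptions: a GPU with enough memory for fp16 weights; the FP8 precision noted
# on this page refers to the hosted variant, not a requirement for local loading.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "varox34/Bio-Saul-Dolphin-Beagle-Breadcrumbs"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 fits on the target GPU
    device_map="auto",
)

# Example prompt playing to the merge's legal + biomedical blend (illustrative).
prompt = "Summarize the key obligations of a clinical trial sponsor."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The context window is 8,192 tokens; keep prompt + new tokens under that budget.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.4,  # matches the popular sampler value listed below
    top_p=0.3,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```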


Parameters: 7B · Context length: 8k · Architecture: Transformer · Precision: FP8 · Quantized variants: Available

Popular Sampler Settings

Most commonly used values from Featherless users. A request sketch applying these values follows the list.

temperature

This setting controls sampling randomness. Lower values make the model more deterministic; higher values make output more random. Zero selects greedy sampling.

0.4

top_p

This setting limits sampling to the smallest set of top tokens whose cumulative probability reaches top_p. Must be in (0, 1]. Set to 1 to consider all tokens.

0.3

top_k

This limits the number of top tokens to consider. Set to -1 to consider all tokens.

–

frequency_penalty

This setting penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage new tokens; values < 0 encourage repetition.

–

presence_penalty

This setting penalizes new tokens based on their presence in the generated text so far. Values > 0 encourage new tokens; values < 0 encourage repetition.

–

repetition_penalty

This setting penalizes new tokens based on their appearance in the prompt and generated text. Values > 1 encourage new tokens; values < 1 encourage repetition.

1.1

min_p

This setting sets the minimum probability for a token to be considered, relative to the probability of the most likely token. Must be in [0, 1]. Set to 0 to disable.

–
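As an illustration, the popular values above could be passed through an OpenAI-compatible client. This is a hedged sketch: the base URL and the repetition_penalty passthrough via extra_body are assumptions (typical of vLLM-backed providers), not documented guarantees; check the provider's API reference.

```python
# Hedged sketch: apply the popular sampler values via an OpenAI-compatible
# chat completions endpoint. Base URL and extra_body passthrough are assumed.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="varox34/Bio-Saul-Dolphin-Beagle-Breadcrumbs",
    messages=[{"role": "user", "content": "List three drug-interaction risks for warfarin."}],
    temperature=0.4,  # popular value from the list above
    top_p=0.3,        # popular value from the list above
    extra_body={"repetition_penalty": 1.1},  # non-standard field; assumed server support
)
print(response.choices[0].message.content)
```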