timpal0l/Llama-3-8B-flashback-v1
TEXT GENERATION · Concurrency cost: 1 · Published: Apr 20, 2024 · License: MIT · Open weights

timpal0l/Llama-3-8B-flashback-v1 is an 8-billion-parameter model built by timpal0l through continued pretraining of the Llama-3 base model. It was further trained on 2.25 million forum threads (approximately 40 GB of text) from the Swedish website Flashback.org, specializing it in the style and register of Swedish online forum discussions. This makes it well suited to generating conversational, topic-specific text grounded in Swedish internet culture.
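Since this is a base (non-instruct) model, it is prompted with plain text to continue. A minimal loading sketch using the Hugging Face transformers library; the prompt and generation settings are illustrative assumptions, not values from the model card:

```python
# A minimal sketch of loading the model with Hugging Face transformers.
# Prompt and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "timpal0l/Llama-3-8B-flashback-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Base model: give it text to continue rather than a chat-formatted prompt.
prompt = "Hej, jag undrar om"  # Swedish: "Hi, I am wondering whether"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```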


Parameters: 8B
Context length: 8k
Architecture: Transformer
Precision: FP8
Quantized variants: Available

Popular Sampler Settings

Most commonly used values from Featherless users

temperature

This setting controls sampling randomness. Lower values make the model more deterministic; higher values introduce more randomness. A value of 0 is equivalent to greedy sampling.
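As a sketch of what this does under the hood, the logits are divided by the temperature before the softmax (the values below are made up):

```python
import numpy as np

# Made-up logits for three candidate tokens.
logits = np.array([2.0, 1.0, 0.5])
temperature = 0.7  # < 1 sharpens the distribution, > 1 flattens it

scaled = logits / temperature
probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax over scaled logits
print(probs)  # lower temperature concentrates probability on the top token
```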

top_p

This setting caps the cumulative probability of the top tokens considered (nucleus sampling). Must be in (0, 1]. Set to 1 to consider all tokens.
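A minimal sketch of the nucleus (top_p) filter over an already-sorted, made-up distribution:

```python
import numpy as np

probs = np.array([0.5, 0.3, 0.15, 0.05])  # sorted descending, made-up values
top_p = 0.9

cumulative = np.cumsum(probs)
# Keep the smallest prefix whose cumulative probability reaches top_p.
cutoff = np.searchsorted(cumulative, top_p) + 1
kept = probs[:cutoff] / probs[:cutoff].sum()  # renormalize the survivors
print(kept)  # the 0.05 tail token is dropped
```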

top_k

This setting limits the number of top tokens to consider. Set to -1 to consider all tokens.
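A sketch of the top_k filter on made-up logits, masking everything outside the k highest:

```python
import numpy as np

logits = np.array([3.0, 2.5, 1.0, -0.5, -2.0])  # made-up values
top_k = 2  # -1 would disable the filter

if top_k > 0:
    kth_best = np.sort(logits)[-top_k]
    logits = np.where(logits >= kth_best, logits, -np.inf)  # mask the rest
probs = np.exp(logits) / np.exp(logits).sum()  # exp(-inf) == 0
print(probs)  # only the two highest-logit tokens keep nonzero probability
```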

frequency_penalty

This setting penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage new tokens; values < 0 encourage repetition.
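A sketch of the common OpenAI-style formulation, where the penalty is subtracted from a token's logit once per prior occurrence (token ids and logits are made up):

```python
from collections import Counter

generated = [42, 7, 42, 42]           # hypothetical generated token ids
logits = {42: 2.0, 7: 1.5, 99: 1.0}   # made-up logits per token id
frequency_penalty = 0.5

counts = Counter(generated)
for token in logits:
    logits[token] -= frequency_penalty * counts[token]
print(logits)  # token 42 occurred 3 times, so its logit drops by 1.5
```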

presence_penalty

This setting penalizes new tokens based on their presence in the generated text so far. Values > 0 encourage new tokens; values < 0 encourage repetition.
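A sketch of the same OpenAI-style scheme, except the penalty applies once per token that has appeared at all, regardless of count:

```python
generated = [42, 7, 42, 42]           # hypothetical generated token ids
logits = {42: 2.0, 7: 1.5, 99: 1.0}   # made-up logits per token id
presence_penalty = 0.5

seen = set(generated)
for token in logits:
    if token in seen:
        logits[token] -= presence_penalty  # flat penalty, count-independent
print(logits)  # 42 and 7 each drop by 0.5; unseen token 99 is untouched
```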

repetition_penalty

This setting penalizes new tokens based on their appearance in the prompt and generated text. Values > 1 encourage new tokens; values < 1 encourage repetition.
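A sketch of the CTRL-style formulation (as used, for example, by Hugging Face transformers): positive logits of seen tokens are divided by the penalty and negative ones multiplied by it, so values > 1 always push seen tokens down:

```python
seen = {42, 7}                         # tokens from prompt and generation
logits = {42: 2.0, 7: -1.0, 99: 1.0}  # made-up logits per token id
repetition_penalty = 1.2

for token in seen:
    if logits[token] > 0:
        logits[token] /= repetition_penalty  # shrink positive logits
    else:
        logits[token] *= repetition_penalty  # push negative logits lower
print(logits)  # both seen tokens become less likely; 99 is untouched
```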

min_p

This setting represents the minimum probability for a token to be considered, relative to the probability of the most likely token. Must be in [0, 1]. Set to 0 to disable.
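A sketch of the min_p filter, where the cutoff scales with the probability of the most likely token (made-up distribution):

```python
import numpy as np

probs = np.array([0.6, 0.25, 0.1, 0.05])  # made-up values
min_p = 0.1  # 0 would disable the filter

threshold = min_p * probs.max()  # 10% of the top token's probability
kept = probs[probs >= threshold]
kept = kept / kept.sum()         # renormalize the survivors
print(kept)  # 0.05 < 0.06, so the tail token is dropped
```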