Featherless
yimingzhang/qwen-3-1.7b-57b-cool-from-66550-step96800

The yimingzhang/qwen-3-1.7b-57b-cool-from-66550-step96800 model is a 2-billion-parameter language model based on the Qwen3-1.7B architecture, with 28 layers and a hidden size of 2048. Its listed configuration reports a vocabulary size of 2350 and a sequence length of 1024. The model is a fine-tuned variant of the Qwen3 series; its specific differentiators and primary use cases are not documented, suggesting it may be an experimental or intermediate checkpoint.
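As a rough sketch, the checkpoint can presumably be loaded with the Hugging Face transformers library, assuming the weights are published under the same identifier and that your transformers version supports the Qwen3 architecture:

```python
# Minimal sketch: load the checkpoint with Hugging Face transformers.
# Assumes the weights are hosted under this identifier; paths and
# generation settings here are illustrative, not documented defaults.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yimingzhang/qwen-3-1.7b-57b-cool-from-66550-step96800"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 precision
    device_map="auto",
)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```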


Parameters: 2B
Context length: 32k
Architecture: Transformer
Precision: BF16
Quantized variants: Available
Last updated: October 2025

Popular Sampler Settings

Most commonly used values from Featherless users (none recorded yet for this model).

temperature

This setting influences the sampling randomness. Lower values make the model more deterministic; higher values introduce randomness. Zero is greedy sampling.
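As a rough illustration of the mechanics (not this model's defaults), temperature divides the logits before the softmax; a minimal NumPy sketch with made-up logit values:

```python
import numpy as np

def sample_with_temperature(logits, temperature, seed=None):
    """Scale logits by 1/temperature, then sample from the softmax."""
    rng = np.random.default_rng(seed)
    if temperature == 0:                    # greedy: always pick the argmax
        return int(np.argmax(logits))
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# Illustrative logits: higher temperature flattens the distribution.
logits = np.array([2.0, 1.0, 0.5, -1.0])
print(sample_with_temperature(logits, temperature=0.7))
```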


top_p

This setting controls the cumulative probability of considered top tokens. Must be in (0, 1]. Set to 1 to consider all tokens.
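A minimal sketch of the standard nucleus (top-p) filtering step, assuming a probability vector that already sums to 1:

```python
import numpy as np

def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p; zero out the rest and renormalize."""
    order = np.argsort(probs)[::-1]                   # most likely first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1   # include the boundary token
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()
```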


top_k

This limits the number of top tokens to consider. Set to -1 to consider all tokens.
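A corresponding top-k filtering sketch:

```python
import numpy as np

def top_k_filter(probs, top_k):
    """Keep only the top_k most likely tokens; -1 disables filtering."""
    if top_k == -1 or top_k >= len(probs):
        return probs
    keep = np.argsort(probs)[::-1][:top_k]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()
```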


frequency_penalty

This setting penalizes new tokens based on their frequency in the generated text. Values > 0 encourage new tokens; values < 0 encourage repetition.
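A sketch of the common OpenAI-style formula, which subtracts the penalty scaled by each token's count (an assumption here; the exact rule any given backend applies may differ):

```python
from collections import Counter

def apply_frequency_penalty(logits, generated_ids, frequency_penalty):
    """OpenAI-style frequency penalty: subtract penalty * count from the
    logit of every token already generated."""
    counts = Counter(generated_ids)
    for token_id, count in counts.items():
        logits[token_id] -= frequency_penalty * count
    return logits
```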


presence_penalty

This setting penalizes new tokens based on their presence in the generated text so far. Values > 0 encourage new tokens; values < 0 encourage repetition.
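Presence penalty is the flat per-token counterpart; again an OpenAI-style sketch, not necessarily the exact backend formula:

```python
def apply_presence_penalty(logits, generated_ids, presence_penalty):
    """OpenAI-style presence penalty: subtract a flat penalty from the
    logit of every token that has appeared at least once."""
    for token_id in set(generated_ids):
        logits[token_id] -= presence_penalty
    return logits
```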


repetition_penalty

This setting penalizes new tokens based on their appearance in the prompt and generated text. Values > 1 encourage new tokens; values < 1 encourage repetition.
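A sketch of the widely used CTRL-style multiplicative formulation (assumed here; backends vary in the exact rule):

```python
def apply_repetition_penalty(logits, seen_ids, repetition_penalty):
    """CTRL-style repetition penalty: divide positive logits (and multiply
    negative ones) for tokens seen in the prompt or generation so far.
    Values > 1 discourage repetition; 1 disables the penalty."""
    for token_id in set(seen_ids):
        if logits[token_id] > 0:
            logits[token_id] /= repetition_penalty
        else:
            logits[token_id] *= repetition_penalty
    return logits
```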


min_p

This setting sets the minimum probability for a token to be considered, relative to the probability of the most likely token. Must be in [0, 1]. Set to 0 to disable.
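A minimal min-p filtering sketch over a probability vector:

```python
import numpy as np

def min_p_filter(probs, min_p):
    """min_p filtering: drop tokens whose probability is below
    min_p times the probability of the most likely token."""
    if min_p == 0:
        return probs                        # 0 disables the filter
    threshold = min_p * probs.max()
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()
```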

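Putting it together, these parameters can be passed in a single request. A hedged sketch using the OpenAI-compatible endpoint that Featherless exposes; the base URL is assumed from the platform's documentation, the sampler values are illustrative rather than community defaults, and the non-standard parameters are passed via extra_body as many OpenAI-compatible backends expect:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",        # placeholder key
)

response = client.chat.completions.create(
    model="yimingzhang/qwen-3-1.7b-57b-cool-from-66550-step96800",
    messages=[{"role": "user", "content": "Say hello."}],
    temperature=0.7,           # illustrative values, not recorded defaults
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Parameters outside the OpenAI schema; support is backend-dependent.
    extra_body={"top_k": 40, "min_p": 0.05, "repetition_penalty": 1.1},
)
print(response.choices[0].message.content)
```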