transformers-community/group-beam-search
Text Generation · Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Aug 27, 2025 · Architecture: Transformer · Warm
The transformers-community/group-beam-search model is a 0.8 billion parameter decoder-only transformer based on Qwen/Qwen3-0.6B, designed to enhance text generation diversity. It implements group (diverse) beam search, a variant of beam search that splits the beams into groups and penalizes a group for selecting tokens already chosen by earlier groups at the same decoding step. This produces more varied output candidates, offering more creative and less repetitive results than standard beam search while preserving per-candidate quality.
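The core mechanism can be illustrated with a small sketch of the diversity penalty. Below is a hypothetical, minimal single-step simulation (not the model's actual implementation): each group greedily picks its best token after subtracting a fixed penalty from tokens that earlier groups already selected at this step, so later groups are pushed toward different continuations.

```python
def apply_diversity_penalty(scores, prev_group_tokens, diversity_penalty=1.0):
    """Penalize tokens already selected by earlier beam groups at this step.

    scores: log-probabilities over the vocabulary for the current group.
    prev_group_tokens: token indices chosen by earlier groups at the same step.
    """
    penalized = list(scores)
    for tok in prev_group_tokens:
        penalized[tok] -= diversity_penalty
    return penalized


# Toy vocabulary and shared log-probabilities (illustrative values only).
vocab = ["the", "a", "one", "some"]
scores = [2.0, 1.9, 1.5, 1.0]

chosen = []  # tokens picked by earlier groups at this decoding step
for group in range(3):
    penalized = apply_diversity_penalty(scores, chosen, diversity_penalty=1.0)
    best = max(range(len(vocab)), key=lambda i: penalized[i])
    chosen.append(best)

print([vocab[i] for i in chosen])  # → ['the', 'a', 'one']
```

Without the penalty, all three groups would pick "the"; with it, each group selects a distinct token. In the standard transformers library, the analogous behavior is exposed through `model.generate(...)` via the `num_beams`, `num_beam_groups`, and `diversity_penalty` generation parameters.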