Nexusflow/Athene-V2-Chat
TEXT GENERATION · Concurrency Cost: 4 · Model Size: 72.7B · Quant: FP8 · Ctx Length: 32k · Published: Nov 12, 2024 · License: other · Architecture: Transformer · 0.3K Warm
Nexusflow/Athene-V2-Chat is a 72.7-billion-parameter instruction-tuned causal language model developed by Nexusflow, fine-tuned from Qwen 2.5 72B-Instruct. The model supports a native context length of 131,072 tokens (served here at 32k) and is optimized for chat, math, and coding tasks, with benchmark performance on par with GPT-4o. It is particularly strong at instruction following and multi-turn conversation, making it well suited to complex interactive applications.
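Since the model targets multi-turn chat, a request to it typically carries the full conversation history. A minimal sketch of such a request body, assuming an OpenAI-compatible chat-completions endpoint (the endpoint itself and the token limit below are illustrative, not taken from this page):

```python
import json

# Hypothetical multi-turn chat request for Athene-V2-Chat.
# The full history is resent each turn; the server applies the
# model's chat template before generation.
payload = {
    "model": "Nexusflow/Athene-V2-Chat",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a function that reverses a string."},
        {"role": "assistant", "content": "def reverse(s):\n    return s[::-1]"},
        {"role": "user", "content": "Now make it handle lists as well."},
    ],
    "max_tokens": 512,  # illustrative value
}

body = json.dumps(payload)  # this JSON string becomes the POST body
print(len(payload["messages"]))
```

Each follow-up turn appends the assistant's previous reply and the new user message to `messages`, which is what lets the model resolve references like "it" across turns.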
Popular Sampler Settings
The three sampler configurations most used by Featherless users for this model adjust the following parameters:
- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
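These parameters are typically sent together in one request body. A minimal sketch, with illustrative default-style values (the actual user-favored values are not reproduced on this page):

```python
# Hedged sketch of a sampler-settings block. Values are examples only.
sampler = {
    "temperature": 0.7,         # randomness of token selection
    "top_p": 0.9,               # nucleus sampling: keep smallest set of tokens with this total mass
    "top_k": 40,                # restrict sampling to the k most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens in proportion to how often they already appeared
    "presence_penalty": 0.0,    # penalize any token that has appeared at least once
    "repetition_penalty": 1.1,  # multiplicative penalty on repeated tokens (1.0 = off)
    "min_p": 0.05,              # drop tokens below this fraction of the top token's probability
}

# These keys are merged into the generation request alongside the prompt.
print(sorted(sampler))
```

In practice only a subset is set per request; for example, `top_p` and `top_k` both narrow the candidate pool, so many configurations use one or the other rather than both.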