WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-70B
Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quant: FP8 · Ctx Length: 32k · Published: Aug 19, 2024 · License: llama3.1 · Architecture: Transformer
WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-70B is a 70-billion-parameter model based on the Llama-3.1 architecture, developed by WhiteRabbitNeo. It is fine-tuned for offensive and defensive cybersecurity work, covering tasks such as identifying open ports, outdated software, misconfigurations, and various injection flaws. It serves as a specialized AI assistant for detailed cybersecurity analysis.
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
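The sampler fields above map one-to-one onto request parameters in the OpenAI-style chat-completion APIs that most hosted-inference providers expose. A minimal sketch of such a request payload is below; every value shown is an illustrative assumption (the page's own top configurations did not load), and the endpoint details are not specified here.

```python
import json

# Sketch of a chat-completion request body for this model through an
# OpenAI-compatible API. All sampler values are illustrative placeholders,
# NOT recommended settings from this page.
payload = {
    "model": "WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-70B",
    "messages": [
        {"role": "user", "content": "List common web misconfigurations to audit."}
    ],
    # Sampler settings corresponding to the fields listed above:
    "temperature": 0.7,         # randomness of token sampling
    "top_p": 0.9,               # nucleus-sampling probability cutoff
    "top_k": 40,                # sample only from the k most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they appeared
    "presence_penalty": 0.0,    # penalize tokens that appeared at all
    "repetition_penalty": 1.1,  # multiplicative penalty on repeated tokens
    "min_p": 0.05,              # drop tokens below this relative probability
}

# Serialize to the JSON body that would be POSTed to the provider's endpoint.
body = json.dumps(payload)
```

Note that not every OpenAI-compatible host accepts the extended fields (`top_k`, `repetition_penalty`, `min_p`); some silently ignore unknown parameters, so check the provider's API reference before relying on them.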