AshtonIsNotHere/CodeLlama_7B_nlp_pp
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Sep 4, 2023 · License: llama2 · Architecture: Transformer · Open Weights

AshtonIsNotHere/CodeLlama_7B_nlp_pp is a 7-billion-parameter causal language model fine-tuned from CodeLlama-7b-hf and optimized specifically for code completion in the NLP++ programming language. It was trained on a specialized dataset of NLP++ code and achieves an accuracy of 0.8968 on its evaluation set. Its primary strength is assisting developers with NLP++ code generation.
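As a minimal sketch of how a model like this is typically used for completion, the snippet below loads it with the Hugging Face `transformers` library. The model ID is taken from this card; the prompt string, helper name `complete_nlpp`, and generation settings are illustrative assumptions, not part of the card.

```python
# Sketch: NLP++ code completion with a causal LM via Hugging Face transformers.
# Assumptions: the `complete_nlpp` helper and the prompt are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "AshtonIsNotHere/CodeLlama_7B_nlp_pp"


def complete_nlpp(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion for a partial NLP++ snippet."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Placeholder prompt; replace with real NLP++ source code.
    print(complete_nlpp("# partial NLP++ code\n"))
```

Note that the 4k context length above caps how much surrounding code can be packed into the prompt; longer files must be truncated or windowed before completion.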
