StanfordAIMI/GREEN-Phi2 is a causal language model fine-tuned from Microsoft's Phi-2, a 2.7-billion-parameter model with a 2048-token context length. It was further trained on an unspecified dataset, reaching a final validation loss of 0.0781, and is intended for general language generation tasks, building on the compact yet capable design of the original Phi-2.
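As a minimal usage sketch, assuming the checkpoint loads through the standard Hugging Face `transformers` causal-LM path used by Phi-2 derivatives (the prompt and generation settings below are illustrative, not prescribed by the model card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage sketch: assumes the checkpoint is compatible with the
# standard AutoModelForCausalLM / AutoTokenizer loading path.
model_id = "StanfordAIMI/GREEN-Phi2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize the following in one sentence:"
inputs = tokenizer(prompt, return_tensors="pt")

# Keep prompt plus new tokens well within the 2048-token context window.
outputs = model.generate(**inputs, max_new_tokens=64)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

Note that downloading the weights requires several gigabytes of disk space, and greedy decoding as shown here is deterministic; pass `do_sample=True` to `generate` for varied outputs.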