xiaodongguaAIGC/xdg-llama-3-8B
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 8k · Architecture: Transformer

xiaodongguaAIGC/xdg-llama-3-8B is an 8-billion-parameter language model based on the Llama-3 architecture, developed by xiaodongguaAIGC. It was trained with Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Reinforcement Learning from Human Feedback (RLHF) using a reward model and PPO. The model targets coding, reasoning, Chinese Q&A, and safe refusal behavior, making it suitable for a range of conversational AI applications.
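Since the model is based on Llama-3, conversational prompts presumably follow the standard Llama-3 instruct chat template (this is an assumption; the model card does not state its template). A minimal sketch of that formatting, without any model download:

```python
# Hypothetical sketch: format a multi-turn conversation using the standard
# Llama-3 instruct chat template. Assumes xdg-llama-3-8B inherits this
# template from its Llama-3 base; verify against the model's tokenizer config.
def format_llama3_chat(messages):
    prompt = "<|begin_of_text|>"
    for msg in messages:
        # Each turn: role header, blank line, content, end-of-turn token.
        prompt += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open the assistant header so the model generates the reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

prompt = format_llama3_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "用 Python 写一个快速排序。"},
])
print(prompt)
```

In practice, `transformers`' `tokenizer.apply_chat_template` applies the template shipped with the checkpoint and should be preferred over hand-rolling the string.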
