utdawn committed on
Commit cbc4f97 · verified · 1 Parent(s): 6c1e52f

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -54,7 +54,7 @@ tags:
 
 ## 🚀 Performance Highlights
 + **Leading MoE Architecture**:
-The open-source **Mixture-of-Experts (MoE) diffusion large language model**, pre-trained from scratch on approximately **20 trillion tokens**.
+The open-source **Mixture-of-Experts (MoE) diffusion large language model** continually trained on the Ling2.0 series with approximately **20 trillion tokens**.
 + **Efficient Inference**:
 With **100 billion total parameters**, only **6.1 billion** are activated during inference. LLaDA2.0-flash significantly reduces computational costs while outperforming open-source dense models of similar scale.
 + **Impressive Performance on Code & Complex Reasoning**:
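The "Efficient Inference" bullet in the diff above states that only about 6.1 billion of the 100 billion total parameters are activated per token. The gap comes from top-k expert routing, which the following minimal sketch illustrates. This is plain PyTorch, not LLaDA2.0's actual implementation, and all layer sizes, expert counts, and the top-k value are made-up assumptions chosen only to show that just the selected experts' weights participate in each token's forward pass.

```python
# Illustrative top-k MoE routing sketch (NOT LLaDA2.0's real code; sizes are arbitrary).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # gating network scores every expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                               # x: (tokens, d_model)
        scores = self.router(x)                         # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                  # only the chosen experts run
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

tokens = torch.randn(4, 64)
print(TopKMoE()(tokens).shape)  # torch.Size([4, 64]); only 2 of the 8 experts ran per token
```

In this toy setup each token touches 2 of 8 experts, so roughly a quarter of the expert parameters are active per token; a production MoE model scales the same idea up, which is how a ~100B-parameter model can activate only a few billion parameters at inference time.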