Commit 99539a6 (verified) · Parent(s): 45a9f29
Committed by nielsr (HF Staff)

Add library_name and fix language metadata


This PR improves the model card by:
- Adding `library_name: transformers` to the metadata. This enables the automated "How to use" widget on the model page, so the card integrates with the Transformers library based on the code snippet already provided in it.
- Correcting the `language` metadata by removing the invalid `false` entry and adding `no` (Norwegian), ensuring it accurately reflects the supported languages listed in the model card content.
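
For context, a bare `no` is one of YAML 1.1's boolean literals, so an unquoted `- no` entry can be parsed as `false` — the likely origin of the invalid entry. Depending on the parser, quoting the code keeps it a string. A sketch of the relevant fragment (not the full metadata block):

```yaml
language:
- en
- "no"  # quoted: unquoted `no` is a YAML 1.1 boolean, not the Norwegian language code
```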

Please review and merge if these changes are appropriate.

Files changed (1): README.md (+9, -6)
README.md CHANGED:

```diff
@@ -1,4 +1,8 @@
 ---
+base_model:
+- NiuTrans/LMT-60-8B-Base
+datasets:
+- NiuTrans/LMT-60-sft-data
 language:
 - en
 - zh
@@ -60,15 +64,12 @@ language:
 - ur
 - uz
 - yue
+license: apache-2.0
 metrics:
 - bleu
 - comet
-datasets:
-- NiuTrans/LMT-60-sft-data
-base_model:
-- NiuTrans/LMT-60-8B-Base
-license: apache-2.0
 pipeline_tag: translation
+library_name: transformers
 ---
 
 ## LMT
@@ -100,7 +101,9 @@ model_name = "NiuTrans/LMT-60-8B"
 tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left')
 model = AutoModelForCausalLM.from_pretrained(model_name)
 
-prompt = "Translate the following text from English into Chinese.\nEnglish: The concept came from China where plum blossoms were the flower of choice.\nChinese: "
+prompt = "Translate the following text from English into Chinese.
+English: The concept came from China where plum blossoms were the flower of choice.
+Chinese: "
 messages = [{"role": "user", "content": prompt}]
 text = tokenizer.apply_chat_template(
     messages,
```
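
One note on the last hunk: the new `prompt` assignment spans three physical lines inside a plain `"…"` literal, which is a `SyntaxError` in Python. If the snippet is meant to run as-is, the same three-line prompt can be written with implicit string concatenation — a sketch, not part of the commit:

```python
# Equivalent single string for the three-line prompt shown in the diff.
# Adjacent string literals are joined at compile time; the explicit "\n"
# escapes reproduce the line breaks the multi-line layout implies.
prompt = (
    "Translate the following text from English into Chinese.\n"
    "English: The concept came from China where plum blossoms were the flower of choice.\n"
    "Chinese: "
)
print(prompt)
```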