apmoore1 committed 3acc63a (verified, parent 4569fd0): Updated languages metadata

Files changed (1): README.md (+214 -211)

---
license: cc-by-nc-sa-4.0
pretty_name: USAS Multilingual Word Sense Disambiguation
task_categories:
- token-classification
language:
- en
- fi
- zh
- cy
tags:
- word-sense-disambiguation
- lexical-semantics
size_categories:
- 10K<n<100K
viewer: false
configs:
- config_name: eng
  data_files:
  - split: test
    path:
    - test/benedict_eng.txt
- config_name: fin
  data_files:
  - split: test
    path:
    - test/benedict_fin.txt
- config_name: cym
  data_files:
  - split: test
    path:
    - test/CorCenCC_cym.txt
- config_name: zho
  data_files:
  - split: test
    path:
    - test/ToRCH2019_A26_zho.csv
---

# USAS Multilingual Word Sense Disambiguation Datasets

This repository contains the Multilingual Word Sense Disambiguation (WSD) evaluation datasets that use the [UCREL Semantic Analysis System (USAS)](https://ucrel.lancs.ac.uk/usas/) sense inventory, from the forthcoming paper. These evaluation datasets are a collection of existing datasets, apart from the Chinese dataset, which was released with the forthcoming paper.

All the evaluation data has either been manually tagged or manually checked.

For more information about the sense inventory, please read the [USAS guide](https://ucrel.lancs.ac.uk/usas/).

## Table of contents

* [Uses](#uses)
* [Dataset Description](#dataset-description)
  - [English and Finnish](#english-and-finnish)
  - [Welsh](#welsh)
  - [Chinese](#chinese)
* [Dataset Statistics](#dataset-statistics)
* [Dataset Structure](#dataset-structure)
  - [English](#english)
  - [Finnish](#finnish)
  - [Welsh](#welsh-1)
  - [Chinese](#chinese-1)
* [Citation](#citation)
* [License](#license)
* [Dataset Card Authors](#dataset-card-authors)
* [Dataset Card Contact](#dataset-card-contact)

## Uses

These datasets can be used to evaluate Word Sense Disambiguation (WSD) models on the USAS sense inventory.

## Dataset Description

We describe each dataset contained within the evaluation collection in its own section.

### English and Finnish

This data comes from [Porting an English semantic tagger to the Finnish language by Löfberg et al. 2003](https://ucrel.lancs.ac.uk/publications/CL2003/papers/lofberg.pdf): a tagged corpus of texts from a Finnish coffee website, `http://www.kahvilasi.net/` (this website is no longer available). The English corpus is a machine-translated version of the Finnish corpus that was post-edited by a native Finnish speaker.

### Welsh

This is the dataset released in [Leveraging Pre-Trained Embeddings for Welsh Taggers by Ezeani et al. 2019](https://aclanthology.org/W19-4332/). The dataset consists of 8 extracts from 4 diverse data sources:

* [Kynulliad3](https://kevindonnelly.org.uk/kynulliad3/) - Welsh Assembly proceedings.
* [Meddalwedd](https://techiaith.cymru/corpws/Moses/Meddalwedd/) - Translations of software instructions.
* [Kwici](https://kevindonnelly.org.uk/kwici/) - Welsh Wikipedia articles.
* [LERBIML](https://www.lancaster.ac.uk/fass/projects/biml/bimls3corpus.htm) - Multi-domain spoken corpora.

### Chinese

The Chinese dataset is a manually tagged text from the `News Report` genre of the [ToRCH2019 corpus](https://corpus.bfsu.edu.cn/info/1082/1782.htm), specifically about the 2019 Military World Games in Wuhan, China. The dataset was manually annotated by two trained researchers following a three-stage procedure: 1. independent tagging, 2. independent review of their own tagging, and 3. reaching consensus between the two researchers.

## Dataset Statistics

| Language | ISO 639-3 code | Text Level | Texts | Tokens | L. Tokens | Multi Tag Membership (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Chinese | zho | sentence | 46 | 2,312 | 1,747 | 1 (0%) |
| English | eng | sentence | 73 | 3,899 | 3,468 | 212 (6.1%) |
| Finnish | fin | sentence | 72 | 2,439 | 2,068 | 254 (12.3%) |
| Welsh | cym | sentence | 611 | 14,876 | 12,800 | 1,311 (10.2%) |

Dataset statistics per language:
* ISO 639-3 code - Three-letter language code.
* Text Level - How the text has been broken up.
* Texts - Number of texts.
* Tokens - Number of tokens.
* L. Tokens - Number of Labelled Tokens.
* Multi Tag Membership - Number of labelled tokens, and the percentage (%), whose USAS tag has dual, triple, or quadruple membership.

A few example tokens from the Finnish test data that contain Multi Tag Membership:

``` txt
Useimmat_N5+++ kaupan_I2.2/H1 kahvipaketit_O2/F2 ovat_A3+ useiden_N5+ eri_A6.1- kahvilajikkeiden_A4.1/F2
```

The tokens `kaupan`, `kahvipaketit`, and `kahvilajikkeiden` all have dual tag membership; in the case of `kaupan`, the dual membership is `I2.2` and `H1`.

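As a sketch (not from the original release), the membership count of a tag can be read off by splitting on `/`; the helper name `tag_membership` below is illustrative only:

```python
# Count Multi Tag Membership for whitespace-separated `{token}_{tag}` pairs,
# as in the Finnish example above. Tags containing "/" (e.g. I2.2/H1) belong
# to more than one USAS category. (Illustrative sketch; not released tooling.)

def tag_membership(annotated_token: str) -> int:
    """Return how many USAS categories the token's tag is a member of."""
    _token, _, tag = annotated_token.rpartition("_")
    return len(tag.split("/"))

sentence = ("Useimmat_N5+++ kaupan_I2.2/H1 kahvipaketit_O2/F2 ovat_A3+ "
            "useiden_N5+ eri_A6.1- kahvilajikkeiden_A4.1/F2")
multi = [t.split("_")[0] for t in sentence.split() if tag_membership(t) > 1]
print(multi)  # ['kaupan', 'kahvipaketit', 'kahvilajikkeiden']
```

Note this sketch does not handle the `_i` MWE marker described in the Finnish structure section below.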
These dataset statistics were generated after the data was pre-processed: we removed all labelled tokens marked as punctuation or carrying the unmatched USAS tag/label ("Z99"), as these tags have no semantic meaning. Any labelled tokens that could not be matched to the USAS tagset were also removed. Finally, if a token had more than one USAS label assigned to it, we used the first label (this affected 6 tokens in the Chinese dataset). Note that this multi-label assignment is not the same as Multi Tag Membership, as a multi-membership tag still counts as a single USAS label.

## Dataset Structure

The dataset structure varies by file; here we detail the different data structures.

### English

Each new line represents a sentence of annotated tokens. Annotated tokens are separated by whitespace. Each annotated token is represented as `{Token Text}_{USAS Tag}`, optionally followed by a marker of the form `[i\d+.\d+.\d+`:

Example annotated tokens:

``` txt
Vac_F2/O2[i136.2.1 pot_F2/O2[i136.2.2 is_A3+
```

* Token Text - The text representing the annotated token.
* USAS Tag - The USAS tag/label.
* [i\d+.\d+.\d+ - A special sequence indicating the token is part of a Multi Word Expression (MWE). The first set of digits is a unique ID, the second is the number of tokens in the MWE, and the last is the token's index within the MWE.

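The English format above can be parsed with a short regular expression. This is a sketch based on the format description; `parse_english_token` is an illustrative name, not part of any released tooling:

```python
import re

# Split `{Token Text}_{USAS Tag}` and the optional MWE marker `[i{id}.{len}.{idx}`.
# (Illustrative sketch based on the format description above.)
MWE_PATTERN = re.compile(r"^(?P<tag>.*?)(?:\[i(?P<id>\d+)\.(?P<len>\d+)\.(?P<idx>\d+))?$")

def parse_english_token(annotated: str):
    token, _, rest = annotated.rpartition("_")
    match = MWE_PATTERN.match(rest)
    mwe = None
    if match.group("id") is not None:
        # (MWE id, number of tokens in the MWE, this token's index in the MWE)
        mwe = (int(match.group("id")), int(match.group("len")), int(match.group("idx")))
    return token, match.group("tag"), mwe

print(parse_english_token("Vac_F2/O2[i136.2.1"))  # ('Vac', 'F2/O2', (136, 2, 1))
print(parse_english_token("is_A3+"))              # ('is', 'A3+', None)
```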
### Finnish

Each new line represents a sentence of annotated tokens. Annotated tokens are separated by whitespace. Each annotated token is represented as `{Token Text}_{USAS Tag}`, optionally followed by `_i`:

Example annotated tokens:

``` txt
Vac_F2/O2_i pot_F2/O2_i on_A3+
```

* Token Text - The text representing the annotated token.
* USAS Tag - The USAS tag/label.
* i - If present, states that the token is part of a Multi Word Expression.

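The Finnish format differs from the English one only in its MWE marker, so parsing it is simpler. A minimal sketch (`parse_finnish_token` is an illustrative name):

```python
# Sketch parser for the Finnish format `{Token Text}_{USAS Tag}` with an
# optional trailing `_i` MWE marker. (Illustrative; not released tooling.)

def parse_finnish_token(annotated: str):
    is_mwe = annotated.endswith("_i")
    if is_mwe:
        annotated = annotated[:-2]  # drop the "_i" marker
    token, _, tag = annotated.rpartition("_")
    return token, tag, is_mwe

print(parse_finnish_token("Vac_F2/O2_i"))  # ('Vac', 'F2/O2', True)
print(parse_finnish_token("on_A3+"))       # ('on', 'A3+', False)
```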
### Welsh

Each new line represents a sentence of annotated tokens. Annotated tokens are separated by whitespace. Each annotated token is represented as follows, with each piece of information segmented by the pipe character `|`: `{Token}|{Lemma}|{Core POS}|{True CorCenCC Basic POS}|{Predicted CorCenCC Enriched POS}|{Predicted CorCenCC Basic POS}|{USAS Tag}`:

Example annotated token:

``` txt
A|a|pron|Rha|Rhaperth|Rha|Z5
```

* Token - The annotated token.
* Lemma - The predicted lemma. The prediction was made by the [CyTag tagger](https://github.com/CorCenCC/CyTag).
* Core POS - Mapping from the True CorCenCC Basic POS tag to the Core POS tag. This mapping is based on table A.1 in [Leveraging Pre-Trained Embeddings for Welsh Taggers](https://aclanthology.org/W19-4332.pdf).
* True CorCenCC Basic POS - Human-annotated CorCenCC Basic POS tag.
* Predicted CorCenCC Enriched POS - Produced by running the [CyTag tagger](https://github.com/CorCenCC/CyTag). As this tag has been predicted, it may differ from the `True CorCenCC Basic POS`.
* Predicted CorCenCC Basic POS - Produced by running the [CyTag tagger](https://github.com/CorCenCC/CyTag). As this tag has been predicted, it may differ from the `True CorCenCC Basic POS`.
* USAS Tag - The USAS tag/label.

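The pipe-separated fields map naturally onto a small record type. A minimal sketch, assuming the seven fields listed above (the class and field names are illustrative):

```python
from typing import NamedTuple

# Sketch of reading one pipe-separated Welsh annotation; the field names
# mirror the list above and are illustrative, not part of released tooling.

class WelshToken(NamedTuple):
    token: str
    lemma: str
    core_pos: str
    true_basic_pos: str
    predicted_enriched_pos: str
    predicted_basic_pos: str
    usas_tag: str

def parse_welsh_token(annotated: str) -> WelshToken:
    fields = annotated.split("|")
    if len(fields) != 7:
        raise ValueError(f"Expected 7 pipe-separated fields, got {len(fields)}")
    return WelshToken(*fields)

example = parse_welsh_token("A|a|pron|Rha|Rhaperth|Rha|Z5")
print(example.usas_tag)  # Z5
```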
For more details on how this dataset was generated, please see the following [GitHub repository](https://github.com/apmoore1/pymusas/tree/welsh_evaluation_data/resources/welsh).

### Chinese

The data is stored in a CSV file with the following headings:

``` csv
Token,POS,Predicted-USAS,Corrected-USAS,Errors,Notes,sentence-break
```

* Token - The token text.
* POS - The token's predicted Part Of Speech, using the [Universal Dependencies POS tagset](https://universaldependencies.org/u/pos/).
* Predicted-USAS - A semicolon-separated list of predicted USAS tags, whereby the first USAS tag in the list should be the most probable, e.g. "H4;I1.2".
* Corrected-USAS - A semicolon-separated list of corrected USAS tags, whereby the first USAS tag in the list should be the most probable, e.g. "H1;I1.1".
* Errors - A semicolon-separated list of errors that the annotators added while annotating.
* Notes - Any additional notes the annotators added while annotating.
* sentence-break - `False` or `True`. If `True`, that token is the last token in the sentence.

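Sentences can be reconstructed from the flat CSV by accumulating rows until `sentence-break` is `True`. A sketch with Python's `csv` module; the three sample rows below are invented for illustration and are not from the actual dataset:

```python
import csv
import io

# Regroup CSV rows into sentences using the `sentence-break` column.
# The rows in `sample` are invented for illustration only.
sample = """Token,POS,Predicted-USAS,Corrected-USAS,Errors,Notes,sentence-break
这,DET,Z8,Z8,,,False
是,VERB,A3+,A3+,,,False
武汉,PROPN,Z2,Z2,,,True
"""

sentences, current = [], []
for row in csv.DictReader(io.StringIO(sample)):
    # Keep the most probable (first) corrected USAS tag for each token.
    current.append((row["Token"], row["Corrected-USAS"].split(";")[0]))
    if row["sentence-break"] == "True":
        sentences.append(current)
        current = []

print(sentences)  # [[('这', 'Z8'), ('是', 'A3+'), ('武汉', 'Z2')]]
```

When reading the real file, open it with `encoding="utf-8"` in place of the in-memory `io.StringIO` buffer used here.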
To generate the tokens, POS tags, and predicted USAS tags, we used the [spaCy zh_core_web_trf version 3.8 model](https://spacy.io/models/zh#zh_core_web_trf) and PyMUSAS version 0.3.0 with `cmn_dual_upos2usas_contextual-0.3.3` as the [USAS semantic tagger](https://ucrel.github.io/pymusas/usage/how_to/tag_text#chinese).

## Citation

Each dataset should be cited separately, but for reference, all 4 datasets were used as evaluation datasets within the forthcoming paper.

* English and Finnish - [Porting an English semantic tagger to the Finnish language by Löfberg et al. 2003](https://ucrel.lancs.ac.uk/publications/CL2003/papers/lofberg.pdf)
* Welsh - [Leveraging Pre-Trained Embeddings for Welsh Taggers by Ezeani et al. 2019](https://aclanthology.org/W19-4332/)
* Chinese - Forthcoming paper.

## License

[CC-BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)

## Dataset Card Authors

* UCREL ([email protected])
* Andrew Moore / apmoore1 ([email protected] / [email protected])
* Paul Rayson ([email protected])

## Dataset Card Contact

* UCREL ([email protected])
* Andrew Moore / apmoore1 ([email protected] / [email protected])
* Paul Rayson ([email protected])