# Dataset Viewer

Auto-converted to Parquet.
| Column | Type | Range / values |
| --- | --- | --- |
| modelId | string | lengths 4–62 |
| sha | null | |
| lastModified | null | |
| pipeline_tag | string | 9 classes |
| author | null | |
| securityStatus | null | |
| likes | int64 | 0–1.03k |
| downloads | int64 | 0–62.4M |
| dataset | sequence | |
| arxiv | sequence | |
| license | sequence | |
| tags | sequence | |
| doi | sequence | |
| card | string | lengths 0–14k |
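Rows with this schema can be worked with as plain records once loaded. A minimal pure-Python sketch, using three rows copied from the preview below (the `models_for_task` helper is illustrative, not part of any library; the full dataset would normally be loaded via `datasets.load_dataset`):

```python
# Three records copied from the preview rows below; null columns omitted.
rows = [
    {"modelId": "albert-base-v1", "pipeline_tag": "fill-mask",
     "likes": 1, "downloads": 41_336},
    {"modelId": "bert-base-uncased", "pipeline_tag": "fill-mask",
     "likes": 839, "downloads": 62_377_709},
    {"modelId": "bert-large-uncased-whole-word-masking-finetuned-squad",
     "pipeline_tag": "question-answering", "likes": 85, "downloads": 519_563},
]

def models_for_task(rows, task):
    """Return the modelIds whose pipeline_tag matches `task`."""
    return [r["modelId"] for r in rows if r["pipeline_tag"] == task]

print(models_for_task(rows, "fill-mask"))
# → ['albert-base-v1', 'bert-base-uncased']
```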
### albert-base-v1

- pipeline_tag: fill-mask · likes: 1 · downloads: 41,336
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1909.11942" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "transformers", "exbert", "autotrain_compatible", "has_space" ]
- card: # ALBERT Base v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make...

### albert-base-v2

- pipeline_tag: fill-mask · likes: 50 · downloads: 4,543,047
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1909.11942" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "jax", "rust", "safetensors", "albert", "fill-mask", "en", "transformers", "autotrain_compatible", "has_space" ]
- card: # ALBERT Base v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make...

### albert-large-v1

- pipeline_tag: fill-mask · likes: 0 · downloads: 651
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1909.11942" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "albert", "fill-mask", "en", "transformers", "autotrain_compatible", "has_space" ]
- card: # ALBERT Large v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not mak...

### albert-large-v2

- pipeline_tag: fill-mask · likes: 11 · downloads: 12,476
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1909.11942" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "transformers", "autotrain_compatible", "has_space" ]
- card: # ALBERT Large v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not mak...

### albert-xlarge-v1

- pipeline_tag: fill-mask · likes: 0 · downloads: 385
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1909.11942" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "albert", "fill-mask", "en", "transformers", "autotrain_compatible", "has_space" ]
- card: # ALBERT XLarge v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not ma...

### albert-xlarge-v2

- pipeline_tag: fill-mask · likes: 3 · downloads: 3,124
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1909.11942" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "albert", "fill-mask", "en", "transformers", "autotrain_compatible", "has_space" ]
- card: # ALBERT XLarge v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not ma...

### albert-xxlarge-v1

- pipeline_tag: fill-mask · likes: 2 · downloads: 8,119
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1909.11942" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "albert", "fill-mask", "en", "transformers", "autotrain_compatible", "has_space" ]
- card: # ALBERT XXLarge v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not m...

### albert-xxlarge-v2

- pipeline_tag: fill-mask · likes: 9 · downloads: 40,731
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1909.11942" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "transformers", "exbert", "autotrain_compatible", "has_space" ]
- card: # ALBERT XXLarge v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not m...
### bert-base-cased-finetuned-mrpc

- pipeline_tag: fill-mask · likes: 0 · downloads: 9,686
- sha / lastModified / author / securityStatus / doi: null
- dataset: null · arxiv: null · license: null
- tags: [ "pytorch", "tf", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]

### bert-base-cased

- pipeline_tag: fill-mask · likes: 104 · downloads: 7,716,025
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1810.04805" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "transformers", "exbert", "autotrain_compatible", "has_space" ]
- card: # BERT base model (cased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference bet...

### bert-base-chinese

- pipeline_tag: fill-mask · likes: 358 · downloads: 2,273,140
- sha / lastModified / author / securityStatus / doi: null
- dataset: null · arxiv: [ "1810.04805" ] · license: null
- tags: [ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "zh", "transformers", "autotrain_compatible", "has_space" ]
- card: # Bert-base-chinese ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [How to Get Started With the Model](#how-to-get-started-with-the-model) ## Model Details ### Model Descri...

### bert-base-german-cased

- pipeline_tag: fill-mask · likes: 31 · downloads: 112,445
- sha / lastModified / author / securityStatus / doi: null
- dataset: null · arxiv: null · license: [ "mit" ]
- tags: [ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "de", "transformers", "exbert", "autotrain_compatible", "has_space" ]
- card: <a href="https://huggingface.co/exbert/?model=bert-base-german-cased"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a> # German BERT ![bert_image](https://static.tildacdn.com/tild6438-3730-4164-b266-613634323466/german_bert.png) ## Overview **Language model:** bert-base-cased **L...

### bert-base-german-dbmdz-cased

- pipeline_tag: fill-mask · likes: 0 · downloads: 2,071
- sha / lastModified / author / securityStatus / doi: null
- dataset: null · arxiv: null · license: [ "mit" ]
- tags: [ "pytorch", "jax", "bert", "fill-mask", "de", "transformers", "autotrain_compatible", "has_space" ]
- card: This model is the same as [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased). See the [dbmdz/bert-base-german-cased model card](https://huggingface.co/dbmdz/bert-base-german-cased) for details on the model.

### bert-base-german-dbmdz-uncased

- pipeline_tag: fill-mask · likes: 2 · downloads: 50,194
- sha / lastModified / author / securityStatus / doi: null
- dataset: null · arxiv: null · license: [ "mit" ]
- tags: [ "pytorch", "jax", "safetensors", "bert", "fill-mask", "de", "transformers", "autotrain_compatible", "has_space" ]
- card: This model is the same as [dbmdz/bert-base-german-uncased](https://huggingface.co/dbmdz/bert-base-german-uncased). See the [dbmdz/bert-base-german-uncased model card](https://huggingface.co/dbmdz/bert-base-german-uncased) for details on the model.
### bert-base-multilingual-cased

- pipeline_tag: fill-mask · likes: 157 · downloads: 5,672,763
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "wikipedia" ] · arxiv: [ "1810.04805" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", ...
- card: # BERT multilingual base model (cased) Pretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model...

### bert-base-multilingual-uncased

- pipeline_tag: fill-mask · likes: 38 · downloads: 257,915
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "wikipedia" ] · arxiv: [ "1810.04805" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", ...
- card: # BERT multilingual base model (uncased) Pretrained model on the top 102 languages with the largest Wikipedia using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This mod...

### bert-base-uncased

- pipeline_tag: fill-mask · likes: 839 · downloads: 62,377,709
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1810.04805" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "jax", "rust", "safetensors", "bert", "fill-mask", "en", "transformers", "exbert", "autotrain_compatible", "has_space" ]
- card: # BERT base model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference ...

### bert-large-cased-whole-word-masking-finetuned-squad

- pipeline_tag: question-answering · likes: 0 · downloads: 11,494
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1810.04805" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "jax", "rust", "safetensors", "bert", "question-answering", "en", "transformers", "autotrain_compatible", "has_space" ]
- card: # BERT large model (cased) whole word masking finetuned on SQuAD Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is ca...

### bert-large-cased-whole-word-masking

- pipeline_tag: fill-mask · likes: 3 · downloads: 3,774
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1810.04805" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "jax", "bert", "fill-mask", "en", "transformers", "autotrain_compatible", "has_space" ]
- card: # BERT large model (cased) whole word masking Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is cased: it makes a dif...

### bert-large-cased

- pipeline_tag: fill-mask · likes: 7 · downloads: 342,338
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1810.04805" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "transformers", "autotrain_compatible", "has_space" ]
- card: # BERT large model (cased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is cased: it makes a difference between eng...

### bert-large-uncased-whole-word-masking-finetuned-squad

- pipeline_tag: question-answering · likes: 85 · downloads: 519,563
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1810.04805" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "jax", "safetensors", "bert", "question-answering", "en", "transformers", "autotrain_compatible", "has_space" ]
- card: # BERT large model (uncased) whole word masking finetuned on SQuAD Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is ...

### bert-large-uncased-whole-word-masking

- pipeline_tag: fill-mask · likes: 6 · downloads: 61,415
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1810.04805" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "transformers", "autotrain_compatible", "has_space" ]
- card: # BERT large model (uncased) whole word masking Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does no...

### bert-large-uncased

- pipeline_tag: fill-mask · likes: 26 · downloads: 1,076,096
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "bookcorpus", "wikipedia" ] · arxiv: [ "1810.04805" ] · license: [ "apache-2.0" ]
- tags: [ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "transformers", "autotrain_compatible", "has_space" ]
- card: # BERT large model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference...
### camembert-base

- pipeline_tag: fill-mask · likes: 36 · downloads: 1,247,645
- sha / lastModified / author / securityStatus / doi: null
- dataset: [ "oscar" ] · arxiv: [ "1911.03894" ] · license: [ "mit" ]
- tags: [ "pytorch", "tf", "safetensors", "camembert", "fill-mask", "fr", "transformers", "autotrain_compatible", "has_space" ]
- card: # CamemBERT: a Tasty French Language Model ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [How to Get Started With the Model](#...
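The `tags` sequences above double as a capability matrix: checking for a tag such as `safetensors` shows which checkpoints ship that weight format. A minimal sketch, using tag lists copied from three of the preview rows (the dict layout is illustrative, not part of any library):

```python
# Tag lists copied verbatim from three preview rows above.
tags_by_model = {
    "albert-base-v1": ["pytorch", "tf", "safetensors", "albert", "fill-mask",
                       "en", "transformers", "exbert", "autotrain_compatible",
                       "has_space"],
    "albert-large-v1": ["pytorch", "tf", "albert", "fill-mask", "en",
                        "transformers", "autotrain_compatible", "has_space"],
    "camembert-base": ["pytorch", "tf", "safetensors", "camembert",
                       "fill-mask", "fr", "transformers",
                       "autotrain_compatible", "has_space"],
}

# Models whose tag list advertises safetensors weights.
with_safetensors = sorted(m for m, tags in tags_by_model.items()
                          if "safetensors" in tags)
print(with_safetensors)
# → ['albert-base-v1', 'camembert-base']
```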
End of preview.

# Dataset Card for "testmodelcardwdata"

More Information needed

Downloads last month: 2