---
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- transformers
---

# CoT-MAE base uncased

CoT-MAE is a transformer-based Masked Auto-Encoder pretraining architecture designed for Dense Passage Retrieval.
**CoT-MAE base uncased** is a general pre-trained language model trained on the unsupervised MS-MARCO corpus.

Details can be found in our paper and code.

Paper: [ConTextual Mask Auto-Encoder for Dense Passage Retrieval](https://arxiv.org/abs/2208.07670).

Code: [caskcsg/ir/cotmae](https://github.com/caskcsg/ir/tree/main/cotmae)
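
## Usage

Since the checkpoint is a standard BERT-style encoder, it can be loaded with the Hugging Face `transformers` library. The snippet below is a minimal sketch of encoding passages and comparing them; the model ID and the `[CLS]` pooling strategy are assumptions for illustration, not confirmed by this card.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Hypothetical model ID; substitute the actual Hub repository name of this checkpoint.
model_id = "caskcsg/cotmae_base_uncased"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

passages = [
    "CoT-MAE is a contextual masked auto-encoder for dense passage retrieval.",
    "Dense retrieval encodes queries and passages into a shared vector space.",
]

with torch.no_grad():
    inputs = tokenizer(passages, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**inputs)
    # Use the [CLS] token representation as the passage embedding
    # (a common choice for dense retrievers; the pooling strategy is an assumption here).
    embeddings = outputs.last_hidden_state[:, 0]

# Cosine similarity between the two passage embeddings.
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```

For downstream retrieval, the checkpoint is typically fine-tuned on labeled query-passage pairs rather than used zero-shot; see the linked code repository for the authors' fine-tuning pipeline.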

## Citations
If you find our work useful, please cite our paper.
```bibtex
@misc{https://doi.org/10.48550/arxiv.2208.07670,
  doi = {10.48550/ARXIV.2208.07670},
  url = {https://arxiv.org/abs/2208.07670},
  author = {Wu, Xing and Ma, Guangyuan and Lin, Meng and Lin, Zijia and Wang, Zhongyuan and Hu, Songlin},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {ConTextual Mask Auto-Encoder for Dense Passage Retrieval},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```