nielsr (HF Staff) committed · Commit 8f687c3 · verified · 1 Parent(s): 17452c2

Add pipeline tag, library name, paper link, GitHub link, and BibTeX citation


This PR enhances the model card by:
- Adding `pipeline_tag: graph-ml` for better discoverability, as the model performs molecular representation learning involving graph structures.
- Specifying `library_name: transformers` to enable the automated "how to use" widget, given the model's compatibility with the Hugging Face Transformers library (as evidenced by `config.json` and the usage snippets); see the illustrative loading sketch after this list.
- Including a direct link to the paper: [Structure-Aware Fusion with Progressive Injection for Multimodal Molecular Representation Learning](https://huggingface.co/papers/2510.23640) at the top.
- Adding a direct link to the official GitHub repository: https://github.com/selmiss/MuMo for easy access to the code.
- Updating the "Training script" detail to link directly to the GitHub repository.
- Incorporating the full BibTeX citation provided in the GitHub README for proper academic attribution.
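For context, setting `library_name: transformers` implies the checkpoint can be loaded with standard Transformers auto classes. The snippet below is only a minimal sketch of that kind of usage, not the model card's exact code: the repo id is a placeholder, and the need for `trust_remote_code=True` (for custom MuMo model classes) is an assumption.

```python
# Minimal usage sketch. Assumptions: the repo id below is a placeholder, and the
# checkpoint may require trust_remote_code=True to load its custom MuMo classes.
from transformers import AutoModel, AutoTokenizer

repo_id = "your-org/mumo-pretrain"  # hypothetical repo id; replace with the actual one

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)

# Tokenize a SMILES string and run a forward pass, mirroring the card's
# `outputs = model(**inputs)` usage snippet.
inputs = tokenizer("CCO", return_tensors="pt")
outputs = model(**inputs)
print(type(outputs))  # output structure depends on the model's custom head, if any
```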

Please review and merge if these improvements align with your goals.

Files changed (1): README.md (+14 −3)
README.md CHANGED
````diff
@@ -5,11 +5,14 @@ tags:
 - drug-discovery
 - molecular-modeling
 - mumo
+pipeline_tag: graph-ml
+library_name: transformers
 ---
 
 # mumo-pretrain
 
-This model was trained using MuMo (Multi-Modal Molecular) framework.
+This model was trained using MuMo (Multi-Modal Molecular) framework, as presented in the paper [Structure-Aware Fusion with Progressive Injection for Multimodal Molecular Representation Learning](https://huggingface.co/papers/2510.23640).
+The official code repository is available at: https://github.com/selmiss/MuMo
 
 ## Model Description
 
@@ -36,10 +39,18 @@ outputs = model(**inputs)
 
 ## Training Details
 
-- Training script: See repository for details
+- Training script: See the [official GitHub repository](https://github.com/selmiss/MuMo) for details.
 - Framework: Transformers + DeepSpeed
 
 ## Citation
 
-If you use this model, please cite the original MuMo paper.
+If you use this model or the MuMo framework, please cite our paper:
 
+```bibtex
+@inproceedings{jing2025mumo,
+  title = {MuMo: Multimodal Molecular Representation Learning via Structural Fusion and Progressive Injection},
+  author = {Jing, Zihao and Sun, Yan and Li, Yan Yi and Janarthanan, Sugitha and Deng, Alana and Hu, Pingzhao},
+  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
+  year = {2025}
+}
+```
````