mradermacher committed
Commit 3f92552 · verified · 1 Parent(s): 7d28822

auto-patch README.md

Files changed (1): README.md (+6, -1)
README.md CHANGED
@@ -13,6 +13,8 @@ language:
 - en
 library_name: transformers
 license: cc-by-nc-4.0
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 tags:
 - function-calling
@@ -31,6 +33,9 @@ tags:
 static quants of https://huggingface.co/Salesforce/xLAM-8x22b-r
 
 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#xLAM-8x22b-r-GGUF).***
+
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/xLAM-8x22b-r-i1-GGUF
 ## Usage
 
@@ -73,6 +78,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+this work in my free time.
 
 <!-- end -->