swapnil7777 committed on
Commit 1435d1a · verified · 1 Parent(s): 276c43a

Update README.md

Files changed (1):
  1. README.md +161 -111
README.md CHANGED
@@ -1,199 +1,249 @@

  ---
  library_name: transformers
- tags: []
  ---

- # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->

  ## Model Details

  ### Model Description

- <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

  ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ### Downstream Use [optional]

- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

- [More Information Needed]

  ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

- [More Information Needed]

  ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]

  ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

  ## How to Get Started with the Model

- Use the code below to get started with the model.

- [More Information Needed]

  ## Training Details

  ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]

  ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- #### Preprocessing [optional]

- [More Information Needed]

  #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

- #### Speeds, Sizes, Times [optional]

- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

- [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->

  ### Testing Data, Factors & Metrics

  #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->

- [More Information Needed]

  #### Factors

- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

- [More Information Needed]

  #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->

- [More Information Needed]

  ### Results

- [More Information Needed]

- #### Summary

- ## Model Examination [optional]

- <!-- Relevant interpretability work for the model goes here -->

- [More Information Needed]

  ## Environmental Impact

- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]

- ## Technical Specifications [optional]

  ### Model Architecture and Objective

- [More Information Needed]

  ### Compute Infrastructure

- [More Information Needed]

  #### Hardware

- [More Information Needed]

  #### Software

- [More Information Needed]

- ## Citation [optional]

- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**

- [More Information Needed]

- **APA:**

- [More Information Needed]

- ## Glossary [optional]

- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]

- ## Model Card Authors [optional]

- [More Information Needed]

  ## Model Card Contact

- [More Information Needed]

  ---
  library_name: transformers
+ license: mit
+ datasets:
+ - md-nishat-008/Bangla-Instruct
+ language:
+ - en
+ - bn
+ metrics:
+ - bleu
+ - accuracy
+ base_model:
+ - Qwen/Qwen3-1.7B
  ---

+ # Qwen3-1.7B-Bengali-Instruct

  ## Model Details

  ### Model Description

+ This model is a fine-tuned version of Qwen/Qwen3-1.7B on Bengali (Bangla) instruction-response pairs. It has been optimized to understand and generate natural Bengali responses while maintaining cultural appropriateness and proper grammar. The model was fine-tuned with LoRA (Low-Rank Adaptation) on a 100K-example Bengali instruction dataset.

+ - **Developed by:** Ismam Nur Swapnil
+ - **Model type:** Causal Language Model (Decoder-only Transformer)
+ - **Language(s):** Bengali (Bangla)
+ - **License:** Same as the base Qwen3-1.7B model license
+ - **Finetuned from model:** [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B)

+ ### Model Sources

+ - **Base Repository:** [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B)
+ - **Training Dataset:** [swapnillo/Bangla-Instruction-Tuning-100K](https://huggingface.co/datasets/swapnillo/Bangla-Instruction-Tuning-100K)

  ## Uses

  ### Direct Use

+ This model is designed for conversational AI applications in Bengali. It can be used for:
+ - Bengali chatbots and virtual assistants
+ - Question-answering systems in Bengali
+ - Instruction-following tasks in Bengali
+ - General Bengali text-generation tasks

+ The model is optimized to provide culturally appropriate responses with proper Bengali grammar and a natural conversational style.

+ ### Downstream Use

+ This model can be further fine-tuned for specific Bengali NLP tasks, such as:
+ - Domain-specific question answering (medical, legal, educational)
+ - Bengali content generation
+ - Translation assistance
+ - Customer-service chatbots for Bengali-speaking users

  ### Out-of-Scope Use

+ This model should not be used for:
+ - Generating harmful, biased, or offensive content
+ - High-stakes decision making without human oversight
+ - Applications requiring guaranteed accuracy (medical diagnosis, legal advice, etc.)
+ - Languages other than Bengali (training focused primarily on Bengali)

  ## Bias, Risks, and Limitations

+ - The model's responses are limited by the quality and diversity of the training data
+ - It may occasionally generate factually incorrect information
+ - It could reflect biases present in the training dataset
+ - Performance may vary across Bengali dialects and registers
+ - It is not suitable for tasks requiring real-time, critical decision making

  ### Recommendations

+ Users (both direct and downstream) should:
+ - Verify critical information in the model's outputs
+ - Implement content filtering for production deployments
+ - Monitor for potential biases in model outputs
+ - Avoid using the model for high-stakes decisions without human oversight
+ - Test thoroughly on their specific use cases before deployment
 

  ## How to Get Started with the Model

+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ # Load the base model and tokenizer
+ base_model = AutoModelForCausalLM.from_pretrained(
+     "Qwen/Qwen3-1.7B",
+     trust_remote_code=True,
+     torch_dtype=torch.float16,
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B", trust_remote_code=True)
+
+ # Load the LoRA adapter
+ model = PeftModel.from_pretrained(base_model, "path/to/your/model")
+
+ # Generate a response
+ messages = [
+     {"role": "system", "content": "You are a knowledgeable AI assistant fluent in Bengali language and culture."},
+     {"role": "user", "content": "বাংলাদেশের রাজধানী কোথায়?"}
+ ]
+
+ text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ inputs = tokenizer(text, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
+ response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(response)
+ ```
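
The example above loads the LoRA adapter on top of the base model at inference time. If a standalone checkpoint is preferred, the adapter can be merged into the base weights with `peft`. A minimal sketch, reusing `base_model`, `tokenizer`, and the adapter path from the example above (the output directory name is illustrative):

```python
from peft import PeftModel

# Merge the LoRA adapter into the base weights and save a standalone model.
# Reuses `base_model`, `tokenizer`, and the adapter path from the example above.
merged_model = PeftModel.from_pretrained(base_model, "path/to/your/model").merge_and_unload()
merged_model.save_pretrained("qwen3-1.7b-bengali-instruct-merged")  # illustrative directory name
tokenizer.save_pretrained("qwen3-1.7b-bengali-instruct-merged")
```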

  ## Training Details

  ### Training Data

+ The model was fine-tuned on the [Bangla-Instruction-Tuning-100K dataset](https://huggingface.co/datasets/swapnillo/Bangla-Instruction-Tuning-100K), which contains approximately 100,000 Bengali instruction-response pairs covering diverse topics and conversational patterns.

+ **Data Split:**
+ - Training: 99% (~99,000 examples)
+ - Validation: 1% (~1,000 examples)
+ - Held-out split ratio: 0.01, random seed: 42
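
For reference, this split can be reproduced with the `datasets` library using the ratio and seed reported above (an illustrative sketch; the exact loading code used for training is not published in this card):

```python
from datasets import load_dataset

# Dataset ID and split parameters are taken from this card.
dataset = load_dataset("swapnillo/Bangla-Instruction-Tuning-100K", split="train")
split = dataset.train_test_split(test_size=0.01, seed=42)
train_ds, eval_ds = split["train"], split["test"]  # ~99,000 / ~1,000 examples
print(len(train_ds), len(eval_ds))
```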

  ### Training Procedure

+ The model was fine-tuned using LoRA (Low-Rank Adaptation) with DeepSpeed ZeRO-3 optimization for efficient training.

+ #### LoRA Configuration

+ - **LoRA Rank (r):** 32
+ - **LoRA Alpha:** 64
+ - **LoRA Dropout:** 0.1
+ - **Target Modules:** q_proj, k_proj, v_proj
+ - **Task Type:** Causal Language Modeling
+ - **Bias:** none
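
These settings correspond roughly to the following `peft` configuration (a reconstruction from the list above, not the original training script):

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# LoRA settings reconstructed from the list above; illustrative only.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=32,
    lora_alpha=64,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj"],
    bias="none",
)

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", trust_remote_code=True)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```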

  #### Training Hyperparameters

+ - **Training regime:** fp16 mixed precision with DeepSpeed ZeRO-3
+ - **Number of epochs:** 1
+ - **Maximum training steps:** 2,700
+ - **Per-device train batch size:** 2
+ - **Per-device eval batch size:** 2
+ - **Gradient accumulation steps:** 8
+ - **Effective batch size:** 16 (2 × 8)
+ - **Learning rate:** 1e-4
+ - **Learning rate scheduler:** Cosine
+ - **Warmup steps:** 100
+ - **Weight decay:** 0.01
+ - **Max gradient norm:** 1.0
+ - **Optimizer:** AdamW (PyTorch)
+ - **Max sequence length:** 1024 tokens
+ - **Evaluation strategy:** Every 250 steps
+ - **Logging:** Every step
+ - **Checkpointing:** Every 500 steps (keeping the best checkpoint only)
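
For orientation, the list above maps onto roughly the following Hugging Face `TrainingArguments` (a reconstruction, not the original script; the output directory and DeepSpeed config path are placeholders, and the 1024-token maximum length is applied at tokenization rather than here):

```python
from transformers import TrainingArguments

# Hyperparameters reconstructed from the list above; illustrative only.
training_args = TrainingArguments(
    output_dir="qwen3-bengali-lora",   # placeholder
    num_train_epochs=1,
    max_steps=2700,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,     # effective batch size 16 on a single GPU
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    weight_decay=0.01,
    max_grad_norm=1.0,
    optim="adamw_torch",
    fp16=True,
    eval_strategy="steps",
    eval_steps=250,
    logging_steps=1,
    save_steps=500,
    save_total_limit=1,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    dataloader_num_workers=2,
    dataloader_pin_memory=True,
    deepspeed="ds_zero3_config.json",  # placeholder path to a ZeRO-3 config
    report_to="wandb",
)
```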

+ #### Speeds, Sizes, Times

+ - **Hardware:** Training performed in a Kaggle GPU environment
+ - **Optimization:** DeepSpeed ZeRO-3 for memory efficiency
+ - **Data workers:** 2, with pinned memory enabled
+ - **Monitoring:** Weights & Biases (wandb) integration
+ - **LoRA adapter size:** Significantly smaller than the full model (~1-2% of parameters)

  ## Evaluation

  ### Testing Data, Factors & Metrics

  #### Testing Data

+ Validation set: 1% of the Bangla-Instruction-Tuning-100K dataset (~1,000 examples), randomly split with seed 42.

  #### Factors

+ Evaluation focuses on:
+ - Bengali language fluency and grammatical correctness
+ - Instruction-following capability
+ - Cultural appropriateness of responses
+ - Response relevance and coherence

  #### Metrics

+ - **Primary metric:** Training and validation loss
+ - **Best model selection:** Lowest validation loss
+ - **Monitoring:** Loss tracked at every step via wandb
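
Because the reported metric is the token-level cross-entropy loss, validation perplexity follows directly from it; a small illustrative helper (not part of the original evaluation code):

```python
import math

def perplexity(eval_loss: float) -> float:
    """Convert a mean cross-entropy loss (in nats per token) into perplexity."""
    return math.exp(eval_loss)

# Example: a validation loss of 1.5 corresponds to a perplexity of about 4.48.
print(perplexity(1.5))
```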

  ### Results

+ The model was trained for 2,700 steps with evaluation every 250 steps. The best checkpoint was selected based on validation loss. Specific metrics can be viewed in the associated Weights & Biases project, "qwen-bangla-finetuning".

  ## Environmental Impact

+ Training was performed on Kaggle's GPU infrastructure with DeepSpeed ZeRO-3 optimization for improved efficiency.

+ - **Hardware Type:** GPU (Kaggle environment)
+ - **Training time:** ~2,700 training steps at fp16 precision
+ - **Compute Region:** Cloud-based (Kaggle)
+ - **Optimization:** DeepSpeed ZeRO-3 for memory efficiency, LoRA for parameter efficiency

+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

+ ## Technical Specifications

  ### Model Architecture and Objective

+ - **Base Architecture:** Qwen3-1.7B (1.7 billion parameters)
+ - **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
+ - **Trainable Parameters:** Only the LoRA adapters (~1-2% of total parameters)
+ - **Objective:** Causal language modeling with instruction tuning
+ - **Context Length:** 1024 tokens (during training)
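
The trainable-parameter fraction can be checked directly on the PEFT-wrapped model; an illustrative snippet, assuming the `model` built with `get_peft_model` in the LoRA sketch above:

```python
# Rough check of the "~1-2% trainable parameters" figure; exact numbers depend on the configuration.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable: {trainable:,} of {total:,} parameters ({100 * trainable / total:.2f}%)")
```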

  ### Compute Infrastructure

  #### Hardware

+ - **Platform:** Kaggle GPU environment
+ - **Precision:** FP16 mixed precision training
+ - **Memory Optimization:** DeepSpeed ZeRO-3

  #### Software

+ - **Framework:** PyTorch with Hugging Face Transformers
+ - **Key Libraries:**
+   - `transformers`: model and tokenizer
+   - `peft`: LoRA implementation
+   - `datasets`: data loading
+   - `deepspeed`: distributed training optimization
+   - `wandb`: experiment tracking
+ - **Python Version:** Compatible with the transformers ecosystem

+ ## Citation

  **BibTeX:**

+ ```bibtex
+ @misc{qwen3-bengali-instruct,
+   author       = {Ismam Nur Swapnil},
+   title        = {Qwen3-1.7B-Bengali-Instruct: A Fine-tuned Bengali Language Model},
+   year         = {2024},
+   publisher    = {HuggingFace},
+   howpublished = {\url{https://huggingface.co/[your-username]/[model-name]}}
+ }
+ ```

+ ## Model Card Authors

+ Ismam Nur Swapnil

  ## Model Card Contact

+ For questions or feedback about this model, please open an issue in the model repository or contact the developer through Hugging Face.