Transformers · Safetensors · English

ocbyram committed · verified · Commit b166447 · Parent(s): 962257c

Update README.md

Files changed (1): README.md (+25, -0)
@@ -83,10 +83,35 @@ while collecting the validation loss, then choose model/combination of hyperpara

  # Evaluation

+ In a markdown table (here is a link to a nice markdown table generator), report results on your three benchmark tasks as well as on the testing split of your training dataset
+ (for RAG tasks, the testing split of your training dataset is the test cases you constructed to validate performance). Report results for your model, the base model
+ you built your model off of, and at least two other comparison models of similar size to your model that you believe have some baseline performance for your task.
+ In a text paragraph, as you did in your second project check-in, describe the benchmark evaluation tasks you chose and why you chose them. Next, briefly state why you
+ chose each comparison model. Last, include a summary sentence or two describing the performance of your model relative to the comparison models you chose.
+
+
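As a sketch of the requested layout only (the model and benchmark names below are placeholders; no results are implied):

```
| Model                 | Benchmark 1 | Benchmark 2 | Benchmark 3 | Training-data test split |
|-----------------------|-------------|-------------|-------------|--------------------------|
| Your fine-tuned model | ...         | ...         | ...         | ...                      |
| Base model            | ...         | ...         | ...         | ...                      |
| Comparison model 1    | ...         | ...         | ...         | ...                      |
| Comparison model 2    | ...         | ...         | ...         | ...                      |
```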
  # Usage and Intended Use

+ Load the model using the HuggingFace Transformers library as shown in the code chunk below.
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("ocbyram/Interview_Prep_Help", token="your_token_here")
+ model = AutoModelForCausalLM.from_pretrained(
+     "ocbyram/Interview_Prep_Help",
+     device_map="auto",
+     torch_dtype=torch.bfloat16,
+     token="your_token_here",
+ )
+ ```
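Once loaded, the model is prompted with a job description and a candidate profile. The sketch below shows one way such a prompt might be assembled; the `build_prompt` helper, field labels, and example strings are illustrative assumptions, not a documented prompt format for this model.

```python
# Hypothetical helper: combine a job description and a candidate profile
# into a single prompt string. Labels and wording are illustrative only.
def build_prompt(job_description: str, profile: str) -> str:
    return (
        "Job description:\n"
        f"{job_description}\n\n"
        "Candidate profile:\n"
        f"{profile}\n\n"
        "Ask one realistic interview question for this role, then give an "
        "optimal answer based on the candidate profile."
    )

prompt = build_prompt(
    "High school math teacher; strong classroom management skills required.",
    "B.S. in Mathematics; two years of private tutoring experience.",
)

# The prompt would then be tokenized and passed to the loaded model, e.g.:
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# output_ids = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```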
+ The intended use case for this model is interview preparation for any job that has an accurate description. The overall goal of this model is to help a user get close to real-world
+ interview practice by answering realistic and complex questions. The model can look through any job description and develop diverse simulated interview questions based
+ on that description. The model will also use the user's input profile, with information such as education, experience, and skills, to formulate an optimal answer to each interview question.
+ This answer lets the user see how their profile can be leveraged to answer questions and gives them the best chance of moving to the next round of the
+ job hiring process. Specifically, this model is intended for users who have little to no interview experience and need more intensive preparation, or users who want to enhance their
+ answers to complex interview questions. For example, if I were applying for a teaching role but had little experience teaching, this model would find a way to
+ use my other education and experience to supplement my answer to teacher-specific interview questions.
+
  # Prompt Format

+ This section should briefly describe how your prompt is formatted and include a general code chunk (denoted by ```YOUR TEXT```) showing an example formatted prompt.
+
+
+
  # Expected Output Format

  This section should