For both generated datasets, I used the Llama-3.2-1B-Instruct model, due to its … as well as technical answers, which was necessary for the interview questions. I used the job postings dataset with few-shot prompting to create the user profile dataset. For each job posting in the dataset, I had the model create a 'great', 'mediocre', and 'bad' user profile. An example of the few-shot prompting for this was:
```
Job_Title: Software Engineer
Profile_Type: Great
Applicant_Profile:
…
```
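A minimal sketch of how this few-shot generation loop might look, assuming Hugging Face transformers with the model named above and a pandas DataFrame of job postings; the file name, column name, and prompt assembly are my assumptions, and the few-shot profile text is elided here just as in the excerpt:

```python
import pandas as pd
from transformers import pipeline

# The 1B Instruct model named in this README.
generator = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")

# Few-shot examples in the format shown above (profile body elided).
FEW_SHOT = (
    "Job_Title: Software Engineer\n"
    "Profile_Type: Great\n"
    "Applicant_Profile: ...\n\n"
)

def generate_profile(job_title: str, profile_type: str) -> str:
    """Append one query to the few-shot examples and sample a profile."""
    prompt = (
        FEW_SHOT
        + f"Job_Title: {job_title}\n"
        + f"Profile_Type: {profile_type}\n"
        + "Applicant_Profile:"
    )
    out = generator(prompt, max_new_tokens=200, do_sample=True, return_full_text=False)
    return out[0]["generated_text"].strip()

jobs = pd.read_csv("job_postings.csv")  # hypothetical file name
rows = [
    {"job_title": title, "profile_type": kind, "profile": generate_profile(title, kind)}
    for title in jobs["title"]  # assumed column name
    for kind in ("Great", "Mediocre", "Bad")
]
profiles = pd.DataFrame(rows)
```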
I used a Python selector to randomly choose 5000 rows.
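A minimal sketch of that sampling step, assuming the combined dataset is loaded with pandas (the file names are illustrative):

```python
import pandas as pd

# Hypothetical file name; the README doesn't give the exact path.
full = pd.read_csv("job_postings_with_profiles.csv")

# Randomly choose 5000 rows; a fixed seed keeps the subset reproducible.
subset = full.sample(n=5000, random_state=42).reset_index(drop=True)
subset.to_csv("subset_5000.csv", index=False)
```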
With my new subset dataset, I generated the interview question and optimal answer data. For each job posting/user profile, I had the model create an interview question based on the job description, then an optimal answer to the question based on the user profile. An example of a few-shot prompt I used is below.
```
Job Title: Data Scientist
Job Description: Analyze data and build predictive models.
Applicant Profile: Experienced in Python, R, and ML models.
Interview Question: Tell me about a machine learning project you are proud of.
Optimal Answer: I developed a predictive model using Python and scikit-learn to forecast customer churn, achieving 85% accuracy by carefully preprocessing the data and tuning hyperparameters.
```
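Each completion then has to be split back into its two fields before it can become dataset columns; a minimal parsing sketch, under the assumption that the model echoes the 'Interview Question: ... Optimal Answer: ...' format shown above:

```python
def split_qa(completion: str) -> tuple[str, str]:
    """Split one generated completion into (question, answer).

    Assumes the completion follows the few-shot format above:
    'Interview Question: ...' followed by 'Optimal Answer: ...'.
    """
    question_part, _, answer_part = completion.partition("Optimal Answer:")
    question = question_part.replace("Interview Question:", "", 1).strip()
    return question, answer_part.strip()
```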
After creating this dataset, I uploaded it to my project notebook. Then I reformatted the dataset to make it easier to train on. I created an 'Instruct' column with each row's job title, description, applicant profile, and the prompt 'Generate a relevant interview question and provide an optimal answer using the information from this applicant's profile. Interview Question and Optimal Answer:'. Then I combined the interview question/optimal answer
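A sketch of that reformatting step, assuming the dataset sits in a pandas DataFrame with one column per field; the file name, column names, and the combined question/answer target are my assumptions about how the pieces fit together:

```python
import pandas as pd

df = pd.read_csv("interview_qa_dataset.csv")  # hypothetical file name

# The fixed instruction quoted above.
PROMPT = (
    "Generate a relevant interview question and provide an optimal answer "
    "using the information from this applicant's profile. "
    "Interview Question and Optimal Answer:"
)

# Fold each row's context plus the instruction into a single 'Instruct' field.
df["Instruct"] = (
    "Job Title: " + df["job_title"]
    + "\nJob Description: " + df["job_description"]
    + "\nApplicant Profile: " + df["applicant_profile"]
    + "\n" + PROMPT
)

# Combine the interview question and optimal answer into one target string.
df["Response"] = (
    "Interview Question: " + df["interview_question"]
    + "\nOptimal Answer: " + df["optimal_answer"]
)
```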