tlemagueresse committed
Commit eaba0c2 · 1 Parent(s): ad93d91
Update read me and delete notebooks

Files changed:
- README.md (+39 −34)
- examples/example_usage_fastmodel_hf.py (+1 −1)
- notebooks/EDA.ipynb (deleted)
- notebooks/Model_Exploration.ipynb (deleted)
README.md
CHANGED

@@ -20,14 +20,46 @@ tags:
An efficient model to detect chainsaw activity in forest soundscapes using spectral and cepstral audio features. The model is designed for environmental conservation and is based on a LightGBM classifier, capable of low-energy inference on both CPU and GPU devices. This repository provides the complete code and configuration for feature extraction, model implementation, and deployment.

## Installation
-
-
-
-```bash
git clone https://huggingface.co/tlmk22/QuefrencyGuardian
cd QuefrencyGuardian
pip install -r requirements.txt
```

## Model Overview

@@ -45,39 +77,12 @@ Key model parameters are included in `model/lgbm_params.json`.

## Usage

-
-
-```python
-import json
-from fast_model import FastModel
-
-# Load parameters
-with open("model/features.json", "r") as f:
-    features = json.load(f)
-
-with open("model/lgbm_params.json", "r") as f:
-    lgbm_params = json.load(f)
-
-# Initialize the model
-model = FastModel(
-    feature_params=features,
-    lgbm_params=lgbm_params,
-    model_file="model/model.txt",  # Path to the serialized model file
-    device="cuda",
-)
-
-# Predict on a Dataset
-from datasets import load_dataset
-dataset = load_dataset("rfcx/frugalai")
-predictions = model.predict(dataset["test"])
-print(predictions)
-```

### Performance

-- **Accuracy**: 95% on the test set with a 4.5% FPR at the default threshold.
-- **
-- **Environmental Impact**: Inference energy consumption is **0.21 Wh**, tracked using CodeCarbon.

### License

An efficient model to detect chainsaw activity in forest soundscapes using spectral and cepstral audio features. The model is designed for environmental conservation and is based on a LightGBM classifier, capable of low-energy inference on both CPU and GPU devices. This repository provides the complete code and configuration for feature extraction, model implementation, and deployment.

## Installation
+You can install and use the model in two different ways:
+### Option 1: Clone the repository
+To download the entire repository containing the code, model, and associated files, follow these steps:
+```bash
git clone https://huggingface.co/tlmk22/QuefrencyGuardian
cd QuefrencyGuardian
pip install -r requirements.txt
```
+Once installed, you can directly import the files into your existing project and use the model.
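For Option 1, the model can then be used directly from the cloned sources. A minimal sketch, mirroring the usage example from the previous version of this README (removed in this commit, so the current API may differ slightly):

```python
import json

# Assumes the cloned repository root is the working directory, so that
# fast_model.py and the model/ files resolve as below.
from fast_model import FastModel

# Load the feature-extraction and LightGBM parameters shipped with the repo
with open("model/features.json", "r") as f:
    features = json.load(f)
with open("model/lgbm_params.json", "r") as f:
    lgbm_params = json.load(f)

# Initialize the model; device="cpu" also works for low-energy inference
model = FastModel(
    feature_params=features,
    lgbm_params=lgbm_params,
    model_file="model/model.txt",  # Path to the serialized model file
    device="cuda",
)
```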
+### Option 2: Dynamically load from the Hub
+If you only want to download the required files to use the model (without cloning the full repository), you can use the `hf_hub_download` function provided by Hugging Face. This method downloads only what is necessary directly from the Hub.
+Here's an example:
+```python
+from huggingface_hub import hf_hub_download
+import importlib.util
+
+# Specify the repository
+repo_id = "tlmk22/QuefrencyGuardian"
+
+# Download the Python file containing the model class
+model_path = hf_hub_download(repo_id=repo_id, filename="model.py")
+
+# Dynamically load the class from the downloaded file
+spec = importlib.util.spec_from_file_location("model", model_path)
+model_module = importlib.util.module_from_spec(spec)
+spec.loader.exec_module(model_module)
+
+# Import the FastModelHuggingFace class
+FastModelHuggingFace = model_module.FastModelHuggingFace
+
+# Load the pre-trained model
+fast_model = FastModelHuggingFace.from_pretrained(repo_id)
+
+# Perform predictions
+result = fast_model.predict("path/to/audio.wav")
+map_labels = {0: "chainsaw", 1: "environment"}
+print(f"Prediction Result: {map_labels[result[0]]}")
+```
+Depending on your needs, you can either clone the repository for a full installation or use Hugging Face's dynamic download functionalities for lightweight and direct usage.
+

## Model Overview

## Usage
+Two example scripts demonstrating how to use the repository or the model downloaded from Hugging Face are available in the `examples` directory.
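One of those scripts predicts on the challenge dataset. A minimal sketch of that pattern, based on the dataset usage in the previous version of this README and assuming a `fast_model` loaded as shown above:

```python
from datasets import load_dataset

# Load the Frugal AI challenge dataset used for evaluation
dataset = load_dataset("rfcx/frugalai")

# Batch prediction on the test split, as in the previous README;
# the exact predict() signature may differ in the current code.
predictions = fast_model.predict(dataset["test"])
print(predictions)
```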

### Performance

+- **Accuracy**: Achieved 95% on the test set with a 4.5% FPR at the default threshold during the challenge, where this model won first place.
+- **Environmental Impact**: Inference energy consumption was measured at **0.21 Wh**, tracked using CodeCarbon. This metric depends on the challenge's infrastructure, as the code was run within a Docker container provided by the platform.
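The CodeCarbon figure above was produced on the challenge platform; for local measurements, a tracker can be wrapped around inference. A minimal sketch, assuming a loaded `fast_model` and dataset as above (not the challenge's exact setup):

```python
from codecarbon import EmissionsTracker

# Track energy and emissions around a batch of predictions; CodeCarbon
# also writes energy consumed (kWh) to its emissions.csv output file.
tracker = EmissionsTracker()
tracker.start()
predictions = fast_model.predict(dataset["test"])
emissions_kg = tracker.stop()  # estimated emissions in kg CO2-eq
print(f"Estimated emissions: {emissions_kg} kg CO2-eq")
```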

### License
examples/example_usage_fastmodel_hf.py
CHANGED

@@ -14,7 +14,7 @@ fast_model = FastModelHuggingFace.from_pretrained(repo_id)

# Perform predictions for a single WAV file
map_labels = {0: "chainsaw", 1: "environment"}
-wav_prediction = fast_model.predict("
print(f"Prediction : {map_labels[wav_prediction[0]]}")

# Example: predicting on a Hugging Face dataset

# Perform predictions for a single WAV file
map_labels = {0: "chainsaw", 1: "environment"}
+wav_prediction = fast_model.predict("chainsaw.wav", device="cpu")
print(f"Prediction : {map_labels[wav_prediction[0]]}")

# Example: predicting on a Hugging Face dataset

notebooks/EDA.ipynb
DELETED
The diff for this file is too large to render. See raw diff.

notebooks/Model_Exploration.ipynb
DELETED
The diff for this file is too large to render. See raw diff.