add batched configuration #3
opened by kylewhy
This view is limited to 50 files because it contains too many changes.
See the raw diff here.
- .gitattributes +1 -1
- README.md +147 -101
- events.csv +0 -3
- events_test.csv +0 -3
- events_train.csv +0 -3
- example.py +0 -54
- merge_hdf5.py +0 -65
- models/phasenet_picks.csv +0 -3
- models/phasenet_plus_events.csv +0 -3
- models/phasenet_plus_picks.csv +0 -3
- models/phasenet_pt_picks.csv +0 -3
- ncedc_event_dataset_000.h5.txt +0 -0
- picks.csv +0 -3
- picks_test.csv +0 -3
- picks_train.csv +0 -3
- quakeflow_nc.py +135 -245
- upload.py +0 -11
- waveform.h5 +0 -3
- waveform_h5/1987.h5 +0 -3
- waveform_h5/1988.h5 +0 -3
- waveform_h5/1989.h5 +0 -3
- waveform_h5/1990.h5 +0 -3
- waveform_h5/1991.h5 +0 -3
- waveform_h5/1992.h5 +0 -3
- waveform_h5/1993.h5 +0 -3
- waveform_h5/1994.h5 +0 -3
- waveform_h5/1995.h5 +0 -3
- waveform_h5/1996.h5 +0 -3
- waveform_h5/1997.h5 +0 -3
- waveform_h5/1998.h5 +0 -3
- waveform_h5/1999.h5 +0 -3
- waveform_h5/2000.h5 +0 -3
- waveform_h5/2001.h5 +0 -3
- waveform_h5/2002.h5 +0 -3
- waveform_h5/2003.h5 +0 -3
- waveform_h5/2004.h5 +0 -3
- waveform_h5/2005.h5 +0 -3
- waveform_h5/2006.h5 +0 -3
- waveform_h5/2007.h5 +0 -3
- waveform_h5/2008.h5 +0 -3
- waveform_h5/2009.h5 +0 -3
- waveform_h5/2010.h5 +0 -3
- waveform_h5/2011.h5 +0 -3
- waveform_h5/2012.h5 +0 -3
- waveform_h5/2013.h5 +0 -3
- waveform_h5/2014.h5 +0 -3
- waveform_h5/2015.h5 +0 -3
- waveform_h5/2016.h5 +0 -3
- waveform_h5/2017.h5 +0 -3
- waveform_h5/2018.h5 +0 -3
.gitattributes CHANGED
@@ -52,4 +52,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
-
+ncedc_eventid.h5 filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -5,58 +5,66 @@ license: mit
 # Quakeflow_NC
 
 ## Introduction
-This dataset is part of the data
-
-Cite the NCEDC and PhaseNet:
-
-Zhu, W., & Beroza, G. C. (2018). PhaseNet: A Deep-Neural-Network-Based Seismic Arrival Time Picking Method. arXiv preprint arXiv:1803.03211.
-
-NCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC.
-
-Acknowledge the NCEDC:
-
-Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC.
+This dataset is part of the data from NCEDC (Northern California Earthquake Data Center) and is organised as several HDF5 files. The dataset structure is shown below. (File [ncedc_event_dataset_000.h5.txt](./ncedc_event_dataset_000.h5.txt) shows the structure of the first shard of the dataset, and you can find more information about the format at [AI4EPS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format1/).)
 
 ```
-|- Group: /
-| |-* begin_time =
-| |-* depth_km =
-| |-* end_time =
-| |-* event_id =
-| |-* event_time =
-| |-* event_time_index =
-| |-* latitude = 37.
-| |-* longitude = -118.
-| |-* magnitude =
+Group: / len:10000
+|- Group: /nc100012 len:5
+| |-* begin_time = 1987-05-08T00:15:48.890
+| |-* depth_km = 7.04
+| |-* end_time = 1987-05-08T00:17:48.890
+| |-* event_id = nc100012
+| |-* event_time = 1987-05-08T00:16:14.700
+| |-* event_time_index = 2581
+| |-* latitude = 37.5423
+| |-* longitude = -118.4412
+| |-* magnitude = 1.1
 | |-* magnitude_type = D
-| |-* num_stations =
-| |- Dataset: /
+| |-* num_stations = 5
+| |- Dataset: /nc100012/NC.MRS..EH (shape:(3, 12000))
 | | |- (dtype=float32)
-| | | |-* azimuth =
-| | | |-* component = ['
-| | | |-* distance_km = 1
+| | | |-* azimuth = 265.0
+| | | |-* component = ['Z']
+| | | |-* distance_km = 39.1
 | | | |-* dt_s = 0.01
-| | | |-* elevation_m =
-| | | |-* emergence_angle =
-| | | |-* event_id = ['
-| | | |-* latitude = 37.
+| | | |-* elevation_m = 3680.0
+| | | |-* emergence_angle = 93.0
+| | | |-* event_id = ['nc100012' 'nc100012']
+| | | |-* latitude = 37.5107
 | | | |-* location =
-| | | |-* longitude = -118.
+| | | |-* longitude = -118.8822
 | | | |-* network = NC
-| | | |-* phase_index = [
+| | | |-* phase_index = [3274 3802]
 | | | |-* phase_polarity = ['U' 'N']
-| | | |-* phase_remark = ['IP' '
-| | | |-* phase_score = [1
-| | | |-* phase_time = ['
+| | | |-* phase_remark = ['IP' 'S']
+| | | |-* phase_score = [1 1]
+| | | |-* phase_time = ['1987-05-08T00:16:21.630' '1987-05-08T00:16:26.920']
 | | | |-* phase_type = ['P' 'S']
-| | | |-* snr = [
-| | | |-* station =
+| | | |-* snr = [0. 0. 1.98844361]
+| | | |-* station = MRS
 | | | |-* unit = 1e-6m/s
-| |- Dataset: /
+| |- Dataset: /nc100012/NN.BEN.N1.EH (shape:(3, 12000))
 | | |- (dtype=float32)
-| | | |-* azimuth =
-| | | |-* component = ['
+| | | |-* azimuth = 329.0
+| | | |-* component = ['Z']
+| | | |-* distance_km = 22.5
+| | | |-* dt_s = 0.01
+| | | |-* elevation_m = 2476.0
+| | | |-* emergence_angle = 102.0
+| | | |-* event_id = ['nc100012' 'nc100012']
+| | | |-* latitude = 37.7154
+| | | |-* location = N1
+| | | |-* longitude = -118.5741
+| | | |-* network = NN
+| | | |-* phase_index = [3010 3330]
+| | | |-* phase_polarity = ['U' 'N']
+| | | |-* phase_remark = ['IP' 'S']
+| | | |-* phase_score = [0 0]
+| | | |-* phase_time = ['1987-05-08T00:16:18.990' '1987-05-08T00:16:22.190']
+| | | |-* phase_type = ['P' 'S']
+| | | |-* snr = [0. 0. 7.31356192]
+| | | |-* station = BEN
+| | | |-* unit = 1e-6m/s
 ......
 ```
 
@@ -65,8 +73,7 @@ Waveform data, metadata, or data products for this study were accessed through t
 ### Requirements
 - datasets
 - h5py
-
-- pytorch
+- torch (for PyTorch)
 
 ### Usage
 Import the necessary packages:
@@ -74,87 +81,126 @@ Import the necessary packages:
 import h5py
 import numpy as np
 import torch
+from torch.utils.data import Dataset, IterableDataset, DataLoader
 from datasets import load_dataset
 ```
-We have
-
- - `phase_index`: the time point index of the phase arrival time
- - `phase_type`: the phase type
- - `phase_polarity`: the phase polarity in ('U', 'D', 'N')
- - `event_time`: the event time
- - `event_time_index`: the time point index of the event time
- - `event_location`: the event location with shape `(3,)`, including latitude, longitude, depth
- - `station_location`: the station location with shape `(3,)`, including latitude, longitude and depth
-
-The sample of `event` is a dictionary with the following keys:
- - `data`: the waveform with shape `(n_station, 3, nt)`, the default time length is 8192
- - `begin_time`: the begin time of the waveform data
- - `end_time`: the end time of the waveform data
- - `phase_time`: the phase arrival time with shape `(n_station,)`
- - `phase_index`: the time point index of the phase arrival time with shape `(n_station,)`
- - `phase_type`: the phase type with shape `(n_station,)`
- - `phase_polarity`: the phase polarity in ('U', 'D', 'N') with shape `(n_station,)`
- - `event_time`: the event time
- - `event_time_index`: the time point index of the event time
- - `event_location`: the space-time coordinates of the event with shape `(n_staion, 3)`
- - `station_location`: the space coordinates of the station with shape `(n_station, 3)`, including latitude, longitude and depth
-
-The default configuration is `station_test`. You can specify the configuration by argument `name`. For example:
+We have 2 configurations for the dataset: `NCEDC` and `NCEDC_full_size`. Both return event-based samples one by one, but `NCEDC` returns samples with 10 stations each, while `NCEDC_full_size` returns samples with the same stations as the original data.
+
+The sample of `NCEDC` is a dictionary with the following keys:
+- `waveform`: the waveform with shape `(3, nt, n_sta)`; the first dimension is the 3 components, the second is the number of time samples, the third is the number of stations
+- `phase_pick`: the probability of the phase pick with shape `(3, nt, n_sta)`; the first dimension is noise, P and S
+- `event_location`: the event location with shape `(4,)`, including latitude, longitude, depth and time
+- `station_location`: the station location with shape `(n_sta, 3)`; the last dimension is latitude, longitude and depth
+
+Because Huggingface datasets only supports a dynamic size on the first dimension, the sample of `NCEDC_full_size` is a dictionary with the following keys:
+- `waveform`: the waveform with shape `(n_sta, 3, nt)`
+- `phase_pick`: the probability of the phase pick with shape `(n_sta, 3, nt)`
+- `event_location`: the event location with shape `(4,)`
+- `station_location`: the station location with shape `(n_sta, 3)`; the last dimension is latitude, longitude and depth
+
+The default configuration is `NCEDC`. You can specify the configuration by the argument `name`. For example:
 ```python
 # load dataset
 # ATTENTION: Streaming (Iterable Dataset) is difficult to support because of the features of HDF5
 # So we recommend to directly load the dataset and convert it into an iterable later
 # The dataset is very large, so you need to wait for some time the first time
 
-# to load "
-quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="
+# to load "NCEDC"
+quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="train")
 # or
-quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="
+quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="NCEDC", split="train")
+
+# to load "NCEDC_full_size"
+quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="NCEDC_full_size", split="train")
+```
 
+If you want to use only the first several shards of the dataset, you can download the script `quakeflow_nc.py` and change the code as below:
+```python
+# change the 37 to the number of shards you want
+_URLS = {
+    "NCEDC": [f"{_REPO}/ncedc_event_dataset_{i:03d}.h5" for i in range(37)]
+}
+```
+Then you can use the dataset like this (don't forget to specify the argument `name`):
+```python
+# don't forget to specify the script path
+quakeflow_nc = datasets.load_dataset("path_to_script/quakeflow_nc.py", split="train")
+quakeflow_nc
 ```
 
-#### 
+#### Usage for `NCEDC`
+Then you can change the dataset into a PyTorch-format iterable dataset, and view the first sample:
 ```python
-quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="
+quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="NCEDC", split="train")
+quakeflow_nc = quakeflow_nc.to_iterable_dataset()
+# because example formatting to get tensors when using the "torch" format
+# has not been implemented yet, we need to manually add the formatting
+quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
+try:
+    isinstance(quakeflow_nc, torch.utils.data.IterableDataset)
+except:
+    raise Exception("quakeflow_nc is not an IterableDataset")
 
 # print the first sample of the iterable dataset
 for example in quakeflow_nc:
     print("\nIterable test\n")
     print(example.keys())
     for key in example.keys():
-        if key == "waveform":
-            print(key, np.array(example[key]).shape)
-        else:
-            print(key, example[key])
+        print(key, example[key].shape, example[key].dtype)
     break
 
-quakeflow_nc = quakeflow_nc.with_format("torch")
-dataloader = DataLoader(quakeflow_nc, batch_size=8, num_workers=0, collate_fn=lambda x: x)
+dataloader = DataLoader(quakeflow_nc, batch_size=4)
 
 for batch in dataloader:
     print("\nDataloader test\n")
-    print(
+    print(batch.keys())
+    for key in batch.keys():
+        print(key, batch[key].shape, batch[key].dtype)
+    break
+```
+
+#### Usage for `NCEDC_full_size`
+
+Then you can change the dataset into a PyTorch-format dataset, and view the first sample (don't forget to reorder the keys):
+```python
+quakeflow_nc = datasets.load_dataset("AI4EPS/quakeflow_nc", split="train", name="NCEDC_full_size")
+
+# for the PyTorch DataLoader, we need to divide the dataset into several shards
+num_workers = 4
+quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
+# because example formatting to get tensors when using the "torch" format
+# has not been implemented yet, we need to manually add the formatting
+quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
+def reorder_keys(example):
+    example["waveform"] = example["waveform"].permute(1, 2, 0).contiguous()
+    example["phase_pick"] = example["phase_pick"].permute(1, 2, 0).contiguous()
+    return example
+
+quakeflow_nc = quakeflow_nc.map(reorder_keys)
+
+try:
+    isinstance(quakeflow_nc, torch.utils.data.IterableDataset)
+except:
+    raise Exception("quakeflow_nc is not an IterableDataset")
+
+data_loader = DataLoader(
+    quakeflow_nc,
+    batch_size=1,
+    num_workers=num_workers,
+)
+
+for batch in quakeflow_nc:
+    print("\nIterable test\n")
+    print(batch.keys())
+    for key in batch.keys():
+        print(key, batch[key].shape, batch[key].dtype)
+    break
+
+for batch in data_loader:
+    print("\nDataloader test\n")
+    print(batch.keys())
+    for key in batch.keys():
+        batch[key] = batch[key].squeeze(0)
+        print(key, batch[key].shape, batch[key].dtype)
     break
 ```
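A note on the new `NCEDC_full_size` usage above: `batch_size=1` plus `squeeze(0)` sidesteps the ragged station dimension, but larger batches need a custom `collate_fn`. Below is a minimal sketch (not part of this PR) that pads the station axis and returns a mask; it assumes the `(n_sta, 3, nt)` tensor layout described in the README diff.

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def pad_collate(batch):
    # batch: list of sample dicts with "waveform"/"phase_pick" shaped (n_sta, 3, nt)
    n_sta = torch.tensor([b["waveform"].shape[0] for b in batch])
    out = {
        # pad_sequence pads the first (station) dimension up to the batch maximum
        "waveform": pad_sequence([b["waveform"] for b in batch], batch_first=True),
        "phase_pick": pad_sequence([b["phase_pick"] for b in batch], batch_first=True),
        "station_location": pad_sequence([b["station_location"] for b in batch], batch_first=True),
        "event_location": torch.stack([b["event_location"] for b in batch]),
    }
    # True for real stations, False for padding
    out["station_mask"] = torch.arange(int(n_sta.max()))[None, :] < n_sta[:, None]
    return out

# hypothetical usage, after the .map(...) formatting shown in the README:
# dataloader = DataLoader(quakeflow_nc, batch_size=4, collate_fn=pad_collate)
```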
events.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:84166f6a0be6a02caeb8d11ed3495e5256db698c795dbb3db4d45d8b863313d8
-size 46863258

events_test.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:74b5bf132e23763f851035717a1baa92ab8fb73253138b640103390dce33e154
-size 1602217

events_train.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ef579400d9354ecaf142bdc7023291c952dbfc20d6bafab4715dff1774b3f7a5
-size 45261178
example.py DELETED
@@ -1,54 +0,0 @@
-# %%
-import datasets
-import numpy as np
-from torch.utils.data import DataLoader
-
-quakeflow_nc = datasets.load_dataset(
-    "AI4EPS/quakeflow_nc",
-    name="station",
-    split="train",
-    # name="station_test",
-    # split="test",
-    # download_mode="force_redownload",
-    trust_remote_code=True,
-    num_proc=36,
-)
-# quakeflow_nc = datasets.load_dataset(
-#     "./quakeflow_nc.py",
-#     name="station",
-#     split="train",
-#     # name="statoin_test",
-#     # split="test",
-#     num_proc=36,
-# )
-
-print(quakeflow_nc)
-
-# print the first sample of the iterable dataset
-for example in quakeflow_nc:
-    print("\nIterable dataset\n")
-    print(example)
-    print(example.keys())
-    for key in example.keys():
-        if key == "waveform":
-            print(key, np.array(example[key]).shape)
-        else:
-            print(key, example[key])
-    break
-
-# %%
-quakeflow_nc = quakeflow_nc.with_format("torch")
-dataloader = DataLoader(quakeflow_nc, batch_size=8, num_workers=0, collate_fn=lambda x: x)
-
-for batch in dataloader:
-    print("\nDataloader dataset\n")
-    print(f"Batch size: {len(batch)}")
-    print(batch[0].keys())
-    for key in batch[0].keys():
-        if key == "waveform":
-            print(key, np.array(batch[0][key]).shape)
-        else:
-            print(key, batch[0][key])
-    break
-
-# %%
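Worth noting about the deleted example: `collate_fn=lambda x: x` makes each batch a plain Python list of sample dicts, which is why the loop indexes `batch[0]`. A hedged alternative that stacks the fixed-shape waveforms into one tensor (a sketch only, assuming every sample carries a `(3, nt)` `waveform` as in the old `station` config):

```python
import torch
from torch.utils.data import DataLoader

def stack_collate(batch):
    # batch: list of sample dicts from the old "station"-style config
    waveform = torch.stack([torch.as_tensor(b["waveform"], dtype=torch.float32) for b in batch])
    # keep the remaining (string/scalar) fields as parallel lists
    rest = {k: [b[k] for b in batch] for k in batch[0] if k != "waveform"}
    return {"waveform": waveform, **rest}

# hypothetical usage:
# dataloader = DataLoader(quakeflow_nc, batch_size=8, collate_fn=stack_collate)
```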
merge_hdf5.py DELETED
@@ -1,65 +0,0 @@
-# %%
-import os
-
-import h5py
-import matplotlib.pyplot as plt
-from tqdm import tqdm
-
-# %%
-h5_dir = "waveform_h5"
-h5_out = "waveform.h5"
-h5_train = "waveform_train.h5"
-h5_test = "waveform_test.h5"
-
-# # %%
-# h5_dir = "waveform_h5"
-# h5_out = "waveform.h5"
-# h5_train = "waveform_train.h5"
-# h5_test = "waveform_test.h5"
-
-h5_files = sorted(os.listdir(h5_dir))
-train_files = h5_files[:-1]
-test_files = h5_files[-1:]
-# train_files = h5_files
-# train_files = [x for x in train_files if (x != "2014.h5") and (x not in [])]
-# test_files = []
-print(f"train files: {train_files}")
-print(f"test files: {test_files}")
-
-# %%
-with h5py.File(h5_out, "w") as fp:
-    # external linked file
-    for h5_file in h5_files:
-        with h5py.File(os.path.join(h5_dir, h5_file), "r") as f:
-            for event in tqdm(f.keys(), desc=h5_file, total=len(f.keys())):
-                if event not in fp:
-                    fp[event] = h5py.ExternalLink(os.path.join(h5_dir, h5_file), event)
-                else:
-                    print(f"{event} already exists")
-                    continue
-
-# %%
-with h5py.File(h5_train, "w") as fp:
-    # external linked file
-    for h5_file in train_files:
-        with h5py.File(os.path.join(h5_dir, h5_file), "r") as f:
-            for event in tqdm(f.keys(), desc=h5_file, total=len(f.keys())):
-                if event not in fp:
-                    fp[event] = h5py.ExternalLink(os.path.join(h5_dir, h5_file), event)
-                else:
-                    print(f"{event} already exists")
-                    continue
-
-# %%
-with h5py.File(h5_test, "w") as fp:
-    # external linked file
-    for h5_file in test_files:
-        with h5py.File(os.path.join(h5_dir, h5_file), "r") as f:
-            for event in tqdm(f.keys(), desc=h5_file, total=len(f.keys())):
-                if event not in fp:
-                    fp[event] = h5py.ExternalLink(os.path.join(h5_dir, h5_file), event)
-                else:
-                    print(f"{event} already exists")
-                    continue
-
-# %%
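For context on what the deleted merge script produced: `h5py.ExternalLink` entries resolve transparently when the merged file is opened, as long as the linked shard files stay at the recorded relative paths. A minimal sketch of reading back through such links (hypothetical paths, assuming the `waveform.h5`/`waveform_h5/` layout above):

```python
import h5py

# open the merged index file; external links resolve on access,
# provided waveform_h5/*.h5 still exist relative to this file
with h5py.File("waveform.h5", "r") as fp:
    event_id = next(iter(fp.keys()))             # e.g. "nc100012"
    event = fp[event_id]                         # follows the ExternalLink into its shard
    print(event_id, dict(event.attrs))           # event-level metadata
    for sta_id, trace in event.items():
        print(sta_id, trace.shape, trace.dtype)  # per-station waveform datasets
        break
```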
models/phasenet_picks.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:b51df5987a2a05e44e0949b42d00a28692109da521911c55d2692ebfad0c54d7
-size 9355127

models/phasenet_plus_events.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f686ebf8da632b71a947e4ee884c76f30a313ae0e9d6e32d1f675828884a95f7
-size 7381331

models/phasenet_plus_picks.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:83d241a54477f722cd032efe8368a653bba170e1abebf3d9097d7756cfd54b23
-size 9987053

models/phasenet_pt_picks.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:bb7ea98484b5e6e1c4c79ea5eb1e38bce43e87b546fc6d29c72d187a6d8b1d00
-size 8715799
ncedc_event_dataset_000.h5.txt ADDED
The diff for this file is too large to render. See the raw diff.
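The structure dump in that file (and in the README diff above) implies each shard is a flat HDF5 file of event groups, each holding per-station waveform datasets with pick metadata in attributes. A minimal sketch of pulling picks straight out of one shard with h5py, assuming a local copy of `ncedc_event_dataset_000.h5` (attribute values taken from the dump; h5py may return string attributes as bytes):

```python
import h5py

# assumes ncedc_event_dataset_000.h5 has been downloaded locally
with h5py.File("ncedc_event_dataset_000.h5", "r") as fp:
    event = fp["nc100012"]                    # event group from the dump above
    print(event.attrs["event_time"], event.attrs["magnitude"])
    trace = event["NC.MRS..EH"]               # (3, 12000) float32 waveform
    phase_index = trace.attrs["phase_index"]  # sample indices of picks, e.g. [3274 3802]
    phase_type = trace.attrs["phase_type"]    # e.g. ['P' 'S']
    dt_s = trace.attrs["dt_s"]                # 0.01 s sampling interval
    for idx, ph in zip(phase_index, phase_type):
        print(ph, "at", idx * dt_s, "s after begin_time")
```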
picks.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:52f077ae9f94481d4b80f37c9f15038ee1e3636d5da2da3b1d4aaa2991879cc3
-size 422247029

picks_test.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:bb09f0ac169bf451cfcfb4547359756cb1a53828bf4074971d9160a3aa171f38
-size 21850235

picks_train.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d22c5d5eb1c27a723525c657c1308a3b643d6f3e716eb1c43e064b7a87bb0819
-size 400397230
quakeflow_nc.py CHANGED
@@ -17,21 +17,22 @@
 """QuakeFlow_NC: A dataset of earthquake waveforms organized by earthquake events and based on the HDF5 format."""
 
 
-from typing import Dict, List, Optional, Tuple, Union
-
-import datasets
-import fsspec
 import h5py
 import numpy as np
 import torch
+from typing import Dict, List, Optional, Tuple, Union
+
+import datasets
+
 
 # TODO: Add BibTeX citation
 # Find for instance the citation on arxiv or on the dataset repo/website
 _CITATION = """\
 @InProceedings{huggingface:dataset,
-title = {
-author={
-
+title = {A great new dataset},
+author={huggingface, Inc.
+},
+year={2020}
 }
 """
 
@@ -50,74 +51,38 @@ _LICENSE = ""
 # TODO: Add link to the official dataset URLs here
 # The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
 # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
-_REPO = "https://huggingface.co/datasets/AI4EPS/quakeflow_nc/resolve/main/
-_FILES = [
-    "1987.h5",
-    "1988.h5",
-    "1989.h5",
-    "1990.h5",
-    "1991.h5",
-    "1992.h5",
-    "1993.h5",
-    "1994.h5",
-    "1995.h5",
-    "1996.h5",
-    "1997.h5",
-    "1998.h5",
-    "1999.h5",
-    "2000.h5",
-    "2001.h5",
-    "2002.h5",
-    "2003.h5",
-    "2004.h5",
-    "2005.h5",
-    "2006.h5",
-    "2007.h5",
-    "2008.h5",
-    "2009.h5",
-    "2010.h5",
-    "2011.h5",
-    "2012.h5",
-    "2013.h5",
-    "2014.h5",
-    "2015.h5",
-    "2016.h5",
-    "2017.h5",
-    "2018.h5",
-    "2019.h5",
-    "2020.h5",
-    "2021.h5",
-    "2022.h5",
-    "2023.h5",
-]
+_REPO = "https://huggingface.co/datasets/AI4EPS/quakeflow_nc/resolve/main/data"
 _URLS = {
-    "
-    "
-    "station_train": [f"{_REPO}/{x}" for x in _FILES[:-1]],
-    "event_train": [f"{_REPO}/{x}" for x in _FILES[:-1]],
-    "station_test": [f"{_REPO}/{x}" for x in _FILES[-1:]],
-    "event_test": [f"{_REPO}/{x}" for x in _FILES[-1:]],
+    "NCEDC": [f"{_REPO}/ncedc_event_dataset_{i:03d}.h5" for i in range(37)],
+    "NCEDC_full_size": [f"{_REPO}/ncedc_event_dataset_{i:03d}.h5" for i in range(37)],
 }
 
-
 class BatchBuilderConfig(datasets.BuilderConfig):
     """
     yield a batch of event-based sample, so the number of sample stations can vary among batches
     Batch Config for QuakeFlow_NC
+    :param batch_size: number of samples in a batch
+    :param num_stations_list: possible number of stations in a batch
     """
-
-    def __init__(self, **kwargs):
+    def __init__(self, batch_size: int, num_stations_list: List, **kwargs):
         super().__init__(**kwargs)
+        self.batch_size = batch_size
+        self.num_stations_list = num_stations_list
 
 
 # TODO: Name of the dataset usually matches the script name with CamelCase instead of snake_case
 class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
     """QuakeFlow_NC: A dataset of earthquake waveforms organized by earthquake events and based on the HDF5 format."""
 
     VERSION = datasets.Version("1.1.0")
 
+    degree2km = 111.32
     nt = 8192
+    feature_nt = 512
+    feature_scale = int(nt / feature_nt)
+    sampling_rate = 100.0
+    num_stations = 10
+
     # This is an example of a dataset with multiple configurations.
     # If you don't want/need to define several sub-sets in your dataset,
     # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
@@ -129,80 +94,36 @@ class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
     # You will be able to load one or the other configurations in the following list with
     # data = datasets.load_dataset('my_dataset', 'first_domain')
     # data = datasets.load_dataset('my_dataset', 'second_domain')
 
     # default config, you can change batch_size and num_stations_list when use `datasets.load_dataset`
     BUILDER_CONFIGS = [
-        datasets.BuilderConfig(
-        ),
-        datasets.BuilderConfig(
-            name="event", version=VERSION, description="yield event-based samples one by one of whole dataset"
-        ),
-        datasets.BuilderConfig(
-            name="station_train",
-            version=VERSION,
-            description="yield station-based samples one by one of training dataset",
-        ),
-        datasets.BuilderConfig(
-            name="event_train", version=VERSION, description="yield event-based samples one by one of training dataset"
-        ),
-        datasets.BuilderConfig(
-            name="station_test", version=VERSION, description="yield station-based samples one by one of test dataset"
-        ),
-        datasets.BuilderConfig(
-            name="event_test", version=VERSION, description="yield event-based samples one by one of test dataset"
-        ),
+        datasets.BuilderConfig(name="NCEDC", version=VERSION, description="yield event-based samples one by one, the number of sample stations is fixed (default: 10)"),
+        datasets.BuilderConfig(name="NCEDC_full_size", version=VERSION, description="yield event-based samples one by one, the number of sample stations is the same as the number of stations in the event"),
     ]
 
-    DEFAULT_CONFIG_NAME = (
-        "station_test"  # It's not mandatory to have a default configuration. Just use one if it make sense.
-    )
+    DEFAULT_CONFIG_NAME = "NCEDC"  # It's not mandatory to have a default configuration. Just use one if it make sense.
 
     def _info(self):
         # TODO: This method specifies the datasets.DatasetInfo object which contains informations and typings for the dataset
-        if (
-            (self.config.name == "station")
-            or (self.config.name == "station_train")
-            or (self.config.name == "station_test")
-        ):
-            features = datasets.Features(
+        if self.config.name == "NCEDC":
+            features = datasets.Features(
                 {
-                    "event_id": datasets.Value("string"),
-                    "station_id": datasets.Value("string"),
-                    "waveform": datasets.Array2D(shape=(3, self.nt), dtype="float32"),
-                    "phase_time": datasets.Sequence(datasets.Value("string")),
-                    "phase_index": datasets.Sequence(datasets.Value("int32")),
-                    "phase_type": datasets.Sequence(datasets.Value("string")),
-                    "phase_polarity": datasets.Sequence(datasets.Value("string")),
-                    "begin_time": datasets.Value("string"),
-                    "end_time": datasets.Value("string"),
-                    "event_time": datasets.Value("string"),
-                    "event_time_index": datasets.Value("int32"),
-                    "event_location": datasets.Sequence(datasets.Value("float32")),
-                    "station_location": datasets.
-                }
-
-        elif
-            features
+                    "waveform": datasets.Array3D(shape=(3, self.nt, self.num_stations), dtype="float32"),
+                    "phase_pick": datasets.Array3D(shape=(3, self.nt, self.num_stations), dtype="float32"),
+                    "event_location": datasets.Sequence(datasets.Value("float32")),
+                    "station_location": datasets.Array2D(shape=(self.num_stations, 3), dtype="float32"),
+                })
+
+        elif self.config.name == "NCEDC_full_size":
+            features = datasets.Features(
                 {
-                    "phase_time": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
-                    "phase_index": datasets.Sequence(datasets.Sequence(datasets.Value("int32"))),
-                    "phase_type": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
-                    "phase_polarity": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
-                    "begin_time": datasets.Value("string"),
-                    "end_time": datasets.Value("string"),
-                    "event_time": datasets.Value("string"),
-                    "event_time_index": datasets.Value("int32"),
-                    "event_location": datasets.Sequence(datasets.Value("float32")),
-                    "station_location": datasets.
+                    "waveform": datasets.Array3D(shape=(None, 3, self.nt), dtype="float32"),
+                    "phase_pick": datasets.Array3D(shape=(None, 3, self.nt), dtype="float32"),
+                    "event_location": datasets.Sequence(datasets.Value("float32")),
+                    "station_location": datasets.Array2D(shape=(None, 3), dtype="float32"),
                 }
             )
-
-            raise ValueError(f"config.name = {self.config.name} is not in BUILDER_CONFIGS")
-
+
         return datasets.DatasetInfo(
             # This is the description that will appear on the datasets page.
             description=_DESCRIPTION,
@@ -228,135 +149,104 @@ class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
         urls = _URLS[self.config.name]
         # files = dl_manager.download(urls)
-                gen_kwargs={"filepath": files, "split": "test"},
-            ),
-        ]
-        else:
-            raise ValueError("config.name is not in BUILDER_CONFIGS")
+        files = dl_manager.download_and_extract(urls)
+        # files = ["./data/ncedc_event_dataset_000.h5"]
+
+        return [
+            datasets.SplitGenerator(
+                name=datasets.Split.TRAIN,
+                # These kwargs will be passed to _generate_examples
+                gen_kwargs={
+                    "filepath": files,
+                    "split": "train",
+                },
+            ),
+            # datasets.SplitGenerator(
+            #     name=datasets.Split.VALIDATION,
+            #     # These kwargs will be passed to _generate_examples
+            #     gen_kwargs={
+            #         "filepath": os.path.join(data_dir, "dev.jsonl"),
+            #         "split": "dev",
+            #     },
+            # ),
+            # datasets.SplitGenerator(
+            #     name=datasets.Split.TEST,
+            #     # These kwargs will be passed to _generate_examples
+            #     gen_kwargs={
+            #         "filepath": os.path.join(data_dir, "test.jsonl"),
+            #         "split": "test"
+            #     },
+            # ),
+        ]
+
 
     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
     def _generate_examples(self, filepath, split):
        # TODO: This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
         # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
+        num_stations = self.num_stations
         for file in filepath:
-                    end_time = event_attrs["end_time"]
-                    event_location = [
-                        event_attrs["longitude"],
-                        event_attrs["latitude"],
-                        event_attrs["depth_km"],
-                    ]
-                    event_time = event_attrs["event_time"]
-                    event_time_index = event_attrs["event_time_index"]
-                    station_ids = list(event.keys())
-                    if len(station_ids) == 0:
+            with h5py.File(file, "r") as fp:
+                # for event_id in sorted(list(fp.keys())):
+                for event_id in fp.keys():
+                    event = fp[event_id]
+                    station_ids = list(event.keys())
+
+                    if self.config.name == "NCEDC":
+                        if len(station_ids) < num_stations:
                             continue
-                            [attrs["longitude"], attrs["latitude"], -attrs["elevation_m"] / 1e3]
-                        )
-                        yield event_id, {
-                            "event_id": event_id,
-                            "waveform": waveform,
-                            "phase_time": phase_time,
-                            "phase_index": phase_index,
-                            "phase_type": phase_type,
-                            "phase_polarity": phase_polarity,
-                            "begin_time": begin_time,
-                            "end_time": end_time,
-                            "event_time": event_time,
-                            "event_time_index": event_time_index,
-                            "event_location": event_location,
-                            "station_location": station_location,
-                        }
+                        else:
+                            station_ids = np.random.choice(station_ids, num_stations, replace=False)
+
+                    waveforms = np.zeros([3, self.nt, len(station_ids)])
+                    phase_pick = np.zeros_like(waveforms)
+                    attrs = event.attrs
+                    event_location = [attrs["longitude"], attrs["latitude"], attrs["depth_km"], attrs["event_time_index"]]
+                    station_location = []
+
+                    for i, sta_id in enumerate(station_ids):
+                        # trace_id = event_id + "/" + sta_id
+                        waveforms[:, :, i] = event[sta_id][:, :self.nt]
+                        attrs = event[sta_id].attrs
+                        p_picks = attrs["phase_index"][attrs["phase_type"] == "P"]
+                        s_picks = attrs["phase_index"][attrs["phase_type"] == "S"]
+                        phase_pick[:, :, i] = generate_label([p_picks, s_picks], nt=self.nt)
+                        station_location.append([attrs["longitude"], attrs["latitude"], -attrs["elevation_m"] / 1e3])
+
+                    std = np.std(waveforms, axis=1, keepdims=True)
+                    std[std == 0] = 1.0
+                    waveforms = (waveforms - np.mean(waveforms, axis=1, keepdims=True)) / std
+                    waveforms = waveforms.astype(np.float32)
+
+                    if self.config.name == "NCEDC":
+                        yield event_id, {
+                            "waveform": torch.from_numpy(waveforms).float(),
+                            "phase_pick": torch.from_numpy(phase_pick).float(),
+                            "event_location": torch.from_numpy(np.array(event_location)).float(),
+                            "station_location": torch.from_numpy(np.array(station_location)).float(),
+                        }
+                    elif self.config.name == "NCEDC_full_size":
+                        yield event_id, {
+                            "waveform": torch.from_numpy(waveforms).float().permute(2, 0, 1),
+                            "phase_pick": torch.from_numpy(phase_pick).float().permute(2, 0, 1),
+                            "event_location": torch.from_numpy(np.array(event_location)).float(),
+                            "station_location": torch.from_numpy(np.array(station_location)).float(),
+                        }
+
+
+def generate_label(phase_list, label_width=[150, 150], nt=8192):
+    target = np.zeros([len(phase_list) + 1, nt], dtype=np.float32)
+
+    for i, (picks, w) in enumerate(zip(phase_list, label_width)):
+        for phase_time in picks:
+            t = np.arange(nt) - phase_time
+            gaussian = np.exp(-(t**2) / (2 * (w / 6) ** 2))
+            gaussian[gaussian < 0.1] = 0.0
+            target[i + 1, :] += gaussian
+
+    target[0:1, :] = np.maximum(0, 1 - np.sum(target[1:, :], axis=0, keepdims=True))
+
+    return target
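The new `generate_label` builds the per-sample targets: one Gaussian bump per pick in the P and S channels, with the noise channel defined as whatever probability mass remains. A quick sanity check of that behavior — a sketch, assuming the updated script above is saved as `quakeflow_nc.py` on your path:

```python
import numpy as np

from quakeflow_nc import generate_label  # assumes the updated script above is importable

# one P pick and one S pick at the indices from the README example
target = generate_label([[3274], [3802]], label_width=[150, 150], nt=8192)

print(target.shape)                       # (3, 8192): noise, P, S channels
print(target[1, 3274], target[2, 3802])   # both peak at 1.0 exactly at the pick index
# away from any pick the noise channel carries all the probability
assert target[0, 0] == 1.0
# channels sum to 1 wherever the P and S bumps do not overlap
assert np.allclose(target.sum(axis=0), 1.0)
```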
upload.py DELETED
@@ -1,11 +0,0 @@
-from huggingface_hub import HfApi
-
-api = HfApi()
-
-# Upload all the content from the local folder to your remote Space.
-# By default, files are uploaded at the root of the repo
-api.upload_folder(
-    folder_path="./",
-    repo_id="AI4EPS/quakeflow_nc",
-    repo_type="space",
-)
waveform.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:77fb8b0bb040e1412a183a217dcbc1aa03ceb86b42db39ac62afe922a1673889
-size 20016390

waveform_h5/1987.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:8afb94aafbf79db2848ae9c2006385c782493a97e6c71c1b8abf97c5d53bfc9d
-size 7744528

waveform_h5/1988.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:c1398baca3f539e52744f83625b1dbb6f117a32b8d7e97f6af02a1f452f0dedd
-size 46126800

waveform_h5/1989.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:533cd50fe365de8c050f0ffd4a90b697dc6b90cb86c8199ec0172316eab2ddaa
-size 48255208

waveform_h5/1990.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f5a282a9a8c47cf65d144368085470940660faeb0e77cea59fff16af68020d26
-size 60092656

waveform_h5/1991.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:5ba897d96eb92e8684b52a206e94a500abfe0192930f971ce7b1319c0638d452
-size 62332336

waveform_h5/1992.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d00021f46956bf43192f8c59405e203f823f1f4202c720efa52c5029e8e880b8
-size 67360896

waveform_h5/1993.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:eec41dd0aa7b88c81fa9f9b5dbcaab80e1c7bc8f6c144bd81761941278c57b4f
-size 706087936

waveform_h5/1994.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:b1cd002f20573636eaf101a30c5bac477edda201aba3af68be358756543ed48a
-size 609524864

waveform_h5/1995.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:948f19d71520a0dd25574be300f70e62c383e319b07a7d7182fca1dcfa9d61ee
-size 1728452872

waveform_h5/1996.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:23654b6f9c3a4c5a0aa56ed13ba04e943a94b458a51ac80ec1d418e9aa132840
-size 1752242680

waveform_h5/1997.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d1c0f4c8146fc8ff27c8a47a942b967a97bd2835346203e6de74ca55dd522616
-size 2661543208

waveform_h5/1998.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:1afac9c1a33424b739d26261ac2e9a4520be9c86c57bae4c8fe1a7a422356e45
-size 2070489120

waveform_h5/1999.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:2f2595a1919a5435148cdcf2cfa1501ce5edb53878d471500b13936f0f6f558c
-size 2300297608

waveform_h5/2000.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:250fd52d9f8dd17a8bfb58a3ecfef25d62b0a1adf67f6fe6f2b446e9f72caf7a
-size 434865160

waveform_h5/2001.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d70dea6156b32057760f91742f7a05a336e4f63b1f793408b5e7aad6a15551e5
-size 919203704

waveform_h5/2002.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f88c4c5960741a8d354db4a7324d56ef8750ab93aa1d9b11fc80d0c497d8d6ae
-size 2445812792

waveform_h5/2003.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:943d649f1a8a0e3989d2458be68fbf041058a581c4c73f8de39f1d50d3e7b35c
-size 3618485352

waveform_h5/2004.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ed1ba66e10ba5c165568ac13950a1728927ba49b33903a0df42c3d9965a16807
-size 6158740712

waveform_h5/2005.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:c816d75b172148763b19e60c1469c106c1af1f906843c3d6d94e603e02c2b6cb
-size 2994468240

waveform_h5/2006.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:521e6b0ce262461f87b4b0a78ac6403cfbb597d6ace36e17f92354c456a30447
-size 2189511664

waveform_h5/2007.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ae6654c213fb4838d6a732b2c8d936bd799005b2a189d64f2d74e3767c0c503a
-size 4393926088

waveform_h5/2008.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d8163aee689448c260032df9b0ab9132a5b46f0fee88a4c1ca8f4492ec5534d6
-size 3964283536

waveform_h5/2009.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6702c2d3951ddf1034f1886a79e8c5a00dfa47c88c84048edc528f047a2337b5
-size 4162296168

waveform_h5/2010.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:2f2de7c07f088a32ea7ae71c2107dfd121780a47d3e3f23e5c98ddb482c6ce71
-size 4547184704

waveform_h5/2011.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:520d62f3a94f1b4889f583196676fe2eccb6452807461afc93432dca930d6052
-size 5633641952

waveform_h5/2012.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:98b90529df4cbff7f21cd233d482454eaeac77b81117720ca7fe6c2697819071
-size 9520058832

waveform_h5/2013.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e6f1030ff4ebe488ef9072ec984c91024a8be4ecdbe7e9af47c6e65de942c2fe
-size 8380878704

waveform_h5/2014.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a63f5e6d7d5bca552dcc99053753603dfa3109a6a080f8402f843ef688927d4c
-size 12088815344

waveform_h5/2015.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:42be6994ad27eb8aee241f5edfb4ed0ee69aa3460397325cc858224ba9dd9721
-size 8536767520

waveform_h5/2016.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6e706aefd38170da41196974fc92e457d0dc56948a63640a37cea4a86a297843
-size 9287201016

waveform_h5/2017.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e20f8e5a3f5ec8927e5d44e722987461ef08c9ceb33ab982038528e9000d5323
-size 8627205152

waveform_h5/2018.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ad6e83734ff1e24ad91b17cb6656766861ae9fb30413948579d762acc092e66a
-size 7158598240