---
language:
- en
license: cc-by-nc-4.0
size_categories:
- 10K<n<100K
---

## Usage

Reassemble the split training archive, then extract all three splits:

```bash
cat train.zip.part_aa train.zip.part_ab > train.zip
unzip train.zip && unzip valid.zip && unzip test.zip
rm train.zip.part_aa train.zip.part_ab train.zip valid.zip test.zip
```

Train and evaluate:

```python
from opensportslib import model

if __name__ == "__main__":
    myModel = model.classification(
        config="path/to/sngar-frames.yaml",
        data_dir="sngar-frames",
    )
    myModel.train(
        train_set="sngar-frames/annotations_train.json",
        valid_set="sngar-frames/annotations_valid.json",
        use_ddp=False,
    )
    myModel.infer(test_set="sngar-frames/annotations_test.json")
```

See the [pixels_vs_positions](https://github.com/drishyakarki/pixels_vs_positions) repository for the specific config files needed to reproduce each experiment in the paper.

## Dataset Structure

### Action Classes

The dataset contains 10 action classes reflecting common football events:

| Class | Count | Proportion |
|---|---|---|
| PASS | 57,521 | 65.4% |
| TACKLE | 10,943 | 12.4% |
| OUT | 5,873 | 6.7% |
| HEADER | 5,723 | 6.5% |
| THROW IN | 2,598 | 3.0% |
| CROSS | 2,175 | 2.5% |
| FREE KICK | 1,788 | 2.0% |
| SHOT | 1,041 | 1.2% |
| GOAL | 188 | 0.2% |
| HIGH PASS | 89 | 0.1% |

The dataset exhibits severe class imbalance (a 646:1 ratio between PASS and HIGH PASS), reflecting the natural distribution of football events.

### Splits

Data is split at the match level to prevent leakage:

| Split | Matches | Events | Proportion |
|---|---|---|---|
| Train | 45 | 62,159 | 70.7% |
| Validation | 9 | 12,091 | 13.7% |
| Test | 10 | 13,689 | 15.6% |

### Data Quality

- **Player tracking completeness**: 99.9% of 1,485,008 frames contain all 11 players per team.
- **Ball visibility**: 93.4% of frames contain ball tracking data.
- **Event-level ball coverage**: 85.9% of annotated events have complete ball tracking within their temporal window.

## Branches

This repository is organized into the following branches:

| Branch | Contents |
|---|---|
| `main` | Dataset card and documentation. |
| `paper-data` | The exact dataset needed to reproduce the results in the paper. Contains broadcast videos (1 npy clip per event) and tracking files (1 parquet file per full match). |
| `frames` | 1 npy clip per event for the video modality. Annotations are in SoccerNetPro format. |
| `tracking-parquet` | 1 parquet clip per event for the tracking modality. Annotations are in SoccerNetPro format. |
| `multimodal-data` | Combined video (npy) and tracking (parquet) data with 1 file per event per modality. Uses a unified annotation file for both modalities in SoccerNetPro format. |

## Benchmark Results

### Pixels vs. Positions

| Modality | Model | Params | Bal. Acc. | F1 | Training |
|---|---|---|---|---|---|
| **Tracking** | GIN + MaxPool + Positional Edges | **180K** | **77.8%** | **57.0%** | 4 GPU hours |
| Video | VideoMAEv2-B (finetuned) | 86.3M | 60.9% | 50.1% | 28 GPU hours |

The tracking model outperforms the video baseline by 16.9 percentage points in balanced accuracy and 6.9 percentage points in macro F1 while using 479x fewer parameters and training 7x faster.

### Per-Class Comparison (Test Set, Balanced Accuracy)

| Class | Samples | Tracking | Video |
|---|---|---|---|
| PASS | 9,009 | **81.1** | 77.6 |
| TACKLE | 1,690 | **54.0** | 32.2 |
| OUT | 884 | **94.2** | 75.8 |
| HEADER | 867 | 65.2 | **66.3** |
| THROW IN | 12 | **84.2** | 78.6 |
| CROSS | 392 | **86.7** | 77.2 |
| FREE KICK | 347 | **90.4** | 79.4 |
| SHOT | 272 | **76.3** | 63.4 |
| GOAL | 186 | **73.3** | 16.7 |
| HIGH PASS | 30 | **83.3** | 41.7 |

Tracking dominates on 9 of 10 classes, with its largest gains on less frequent classes like GOAL (+56.7 pp) and HIGH PASS (+41.7 pp). Video shows a slight advantage only on HEADER (+1.1 pp). Tracking models learn discriminative features even in severely data-scarce regimes (GOAL: 73.3%, HIGH PASS: 83.3%), whereas video models collapse on these classes (16.7% and 41.7%).

## Uses

### Direct Use

- Benchmarking video-based vs. tracking-based group activity recognition
- Training and evaluating GAR models on football broadcast data
- Studying multimodal fusion approaches combining visual and positional features
- Analyzing spatial interaction patterns in team sports

## Dataset Creation

### Curation Rationale

No standardized benchmark previously existed that aligns broadcast video and tracking data for the same group activities. This made fair, apples-to-apples comparison between video-based and tracking-based approaches impossible. SoccerNet-GAR was created to fill this gap by providing synchronized multimodal observations under a unified evaluation protocol.

### Source Data

The dataset was constructed from the PFF FC website (now Gradient Sports), which provides broadcast videos, player tracking data, and event annotations across all 64 FIFA World Cup 2022 tournament matches.

### Data Cleaning and Alignment

Event annotations are aligned with both input modalities by merging them with tracking streams using UTC timestamps. Three successive filters ensure data quality:

1. **Temporal alignment**: Events where no tracking frame falls within a 10 ms tolerance of the event timestamp are removed.
2. **Modality coverage**: Events lacking corresponding data in either modality are discarded.
3. **Duplicate resolution**: When a single timestamp is annotated with more than one action class (e.g., a goal also labeled as a shot), only the most semantically specific label is retained, based on a predefined priority ordering.

Together, these filters remove 6,346 events (6.8% of raw annotations), yielding the final dataset of 87,939 annotated group activities.

### Annotation Process

Event annotations with precise timestamps were created by trained annotators and verified through quality control procedures by PFF FC using both video and tracking views. Each event is labeled with one of 10 group activities and temporally marked at the moment of occurrence.
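The temporal-alignment filter described above can be sketched with pandas `merge_asof`, which matches each event to its nearest tracking frame and drops events with no frame within the 10 ms tolerance. This is an illustrative sketch, not the authors' actual pipeline; the column names and timestamps below are hypothetical.

```python
import pandas as pd

# Hypothetical event annotations and tracking-frame timestamps (UTC).
events = pd.DataFrame({
    "event_ts": pd.to_datetime(
        ["2022-11-20 16:00:00.000", "2022-11-20 16:00:05.123"], utc=True
    ),
    "label": ["PASS", "SHOT"],
})
frames = pd.DataFrame({
    "frame_ts": pd.to_datetime(
        ["2022-11-20 16:00:00.004", "2022-11-20 16:00:05.200"], utc=True
    ),
})

# Match each event to its nearest tracking frame, but only within 10 ms;
# unmatched events get NaT in frame_ts and are then discarded.
aligned = pd.merge_asof(
    events.sort_values("event_ts"),
    frames.sort_values("frame_ts"),
    left_on="event_ts",
    right_on="frame_ts",
    direction="nearest",
    tolerance=pd.Timedelta("10ms"),
)
kept = aligned.dropna(subset=["frame_ts"])
```

Here the PASS event is 4 ms from a tracking frame and survives, while the SHOT event's nearest frame is 77 ms away and is filtered out, mirroring how misaligned annotations are removed from the dataset.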
## Comparison with Existing Datasets

| Dataset | Year | Domain | Events | Classes | Modalities |
|---|---|---|---|---|---|
| CAD | 2009 | Pedestrian | 2,511 | 5 | V |
| Volleyball | 2016 | Volleyball | 4,830 | 8 | V |
| SoccerNet | 2018 | Football | 6,637 | 3 | V |
| NBA | 2020 | Basketball | 9,172 | 9 | V |
| SoccerNet-v2 | 2021 | Football | 110,458 | 17 | V |
| NETS | 2022 | Basketball | 61,053 | 3 | T |
| SoccerNet-BAS | 2024 | Football | 11,041 | 12 | V |
| Cafe | 2024 | Indoor | 10,297 | 6 | V |
| FIFAWC | 2024 | Football | 5,196 | 12 | V |
| **SoccerNet-GAR** | **2026** | **Football** | **87,939** | **10** | **V + T** |

SoccerNet-GAR is the second-largest GAR dataset (after SoccerNet-v2) and the only one providing synchronized video and tracking modalities for the same action instances.

## Citation

```bibtex
@article{karki2025pixels,
  title={Pixels or Positions? Benchmarking Modalities in Group Activity Recognition},
  author={Karki, Drishya and Ramazanova, Merey and Cioppa, Anthony and Giancola, Silvio and Ghanem, Bernard},
  journal={arXiv preprint arXiv:2511.12606},
  year={2025}
}
```

## Authors

- **Drishya Karki** (KAUST)
- **Merey Ramazanova** (KAUST)
- **Anthony Cioppa** (University of Liege)
- **Silvio Giancola** (KAUST)
- **Bernard Ghanem** (KAUST)

## Contact

- drishya.karki@kaust.edu.sa / karkidrishya1@gmail.com
- silvio.giancola@kaust.edu.sa