
Dataset-I-drinking-related-object-detection (in both YoloV8 and COCO format)

Dataset posted on 2025-02-27, 11:39, authored by Xin Chen, Xinqi Bao, Ernest Kamavuako
This dataset contains annotated images for object detection of containers and hands in a first-person (egocentric) view during drinking activities. Both YOLOv8 and COCO formats are provided.

Please refer to our paper for more details.

- Purpose: Training and testing the object detection model.
- Content: Videos from Session 1 of Subjects 1-20.
- Images: Extracted from the videos of Subjects 1-20, Session 1.
- Additional images:
  - ~500 hand/container images from Roboflow open-source data.
  - ~1,500 null (background) images from the VOC dataset and the MIT Indoor Scene Recognition dataset:
    - 1,000 indoor scenes from MIT Indoor Scene Recognition
    - 400 other unrelated objects from the VOC dataset
- Data augmentation:
  - Horizontal flipping
  - ±15% brightness change
  - ±10° rotation
- Formats provided (see the loading sketch after this list):
  - COCO format
  - PyTorch YOLOv8 format
- Image size: 416x416 pixels
- Total images: 16,834
  - Training: 13,862
  - Validation: 1,975
  - Testing: 997
- Instance numbers:
  - Containers: over 10,000
  - Hands: over 8,000
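As a quick-start illustration, the sketch below shows one way to inspect the COCO-format annotations with pycocotools and to train and validate a detector on the YOLOv8-format split with the Ultralytics package. The file names (`instances_train.json`, `data.yaml`) and the `yolov8n.pt` starting weights are assumptions for illustration only; substitute the paths and names actually present in the downloaded archive.

```python
# Minimal sketch, assuming the archive contains a COCO-style annotation file
# and a YOLOv8-style data.yaml; adjust names/paths to the actual download.
from pycocotools.coco import COCO      # pip install pycocotools
from ultralytics import YOLO           # pip install ultralytics

# --- Inspect the COCO-format annotations (file name is an assumption) ---
coco = COCO("coco/annotations/instances_train.json")
cats = coco.loadCats(coco.getCatIds())
print("categories:", [c["name"] for c in cats])   # expect container/hand classes
print("images:", len(coco.getImgIds()))
print("annotations:", len(coco.getAnnIds()))

# --- Train and validate on the YOLOv8-format split (paths are assumptions) ---
model = YOLO("yolov8n.pt")             # any YOLOv8 checkpoint works as a starting point
model.train(
    data="yolov8/data.yaml",           # should point at the train/val/test folders of this dataset
    imgsz=416,                         # images are provided at 416x416 pixels
    epochs=100,
)
metrics = model.val()                  # evaluates on the validation split (1,975 images)
print(metrics.box.map50)
```

Both format copies describe the same images and splits, so only the one matching your training framework is needed.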


Temporal coverage

2 months

Geospatial coverage

BioSignals and Sensors laboratory, Strand, King’s College London

Data collection from date

1 October 2022

Data collection to date

30 November 2022