What is EPIC-Kitchens?

The largest dataset in first-person (egocentric) vision: multi-faceted, non-scripted recordings in native environments, i.e. the wearers' homes, capturing all daily activities in the kitchen over multiple days. Annotations are collected using a novel 'live' audio commentary approach.

Characteristics

  • 32 kitchens - 4 cities
  • Head-mounted camera
  • 55 hours of recording - Full HD, 60fps
  • 11.5M frames
  • Multi-language narrations
  • 39,594 action segments
  • 454,158 object bounding boxes
  • 125 verb classes, 331 noun classes

Updates

Stay tuned for updates on epic-kitchens2018, as well as the EPIC workshop series, by joining the epic-community mailing list: send an email to sympa@sympa.bristol.ac.uk with the subject 'subscribe epic-community' and a blank message body.
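If you prefer to subscribe from a script, here is a minimal sketch using Python's standard library; the sender address and SMTP host are placeholders to replace with your own (most readers will simply send the email from their mail client).

import smtplib
from email.message import EmailMessage

# Placeholders: substitute your own address and your provider's SMTP server.
SENDER = "you@example.com"
SMTP_HOST = "smtp.example.com"

msg = EmailMessage()
msg["From"] = SENDER
msg["To"] = "sympa@sympa.bristol.ac.uk"
msg["Subject"] = "subscribe epic-community"
msg.set_content("")  # the message body must be blank

with smtplib.SMTP(SMTP_HOST) as smtp:
    smtp.send_message(msg)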

EPIC-Kitchens Stats

Some graphical representations of our dataset and annotations

Charts: Time of Day · Activities · Wordle of annotations

Download

Dataset publicly available for research purposes

Publication(s)

Cite the following paper (available now on arXiv):

@INPROCEEDINGS{Damen2018EPICKITCHENS,
   title={Scaling Egocentric Vision: The EPIC-KITCHENS Dataset},
   author={Damen, Dima and Doughty, Hazel and Farinella, Giovanni Maria and Fidler, Sanja and
           Furnari, Antonino and Kazakos, Evangelos and Moltisanti, Davide and Munro, Jonathan
           and Perrett, Toby and Price, Will and Wray, Michael},
   booktitle={European Conference on Computer Vision (ECCV)},
   year={2018}
}

Sequences and Annotations

Sequences: Available on the Data.Bris servers (1 TB zipped)
     To download parts of the dataset, we provide three scripts for downloading the videos, frames, or object annotation images separately.
     Note: these scripts work on Linux and Mac; Windows users can run them under a bash installation.
Annotations: Available at the GitHub repo epic-kitchens/annotations
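As a quick start, the sketch below loads the action annotations directly from the GitHub repo with pandas. The file name and column names used here (EPIC_train_action_labels.csv, verb, noun, start_timestamp, stop_timestamp) are assumptions; check the repository's README for the current layout.

import pandas as pd

# Assumed file name and raw URL; verify against the epic-kitchens/annotations README.
URL = ("https://raw.githubusercontent.com/epic-kitchens/"
       "annotations/master/EPIC_train_action_labels.csv")

labels = pd.read_csv(URL)

# Each row is one action segment: a verb/noun pair with start/stop timestamps.
print(len(labels), "action segments")
print(labels[["verb", "noun", "start_timestamp", "stop_timestamp"]].head())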

Copyright

All datasets and benchmarks on this page are copyrighted by us and published under the Creative Commons Attribution-NonCommercial 4.0 International License. This means you must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. You may not use the material for commercial purposes.

Challenges

Details of the challenges and leaderboards will be available soon.

Object-Detection Challenge

Detect and localise the 300 object classes

Action-Recognition Challenge

Classify action segments

Action-Anticipation Challenge

Predict the next action in the sequence
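For intuition, here is a minimal sketch of how top-1 accuracy might be scored for the recognition challenge, treating an action as a (verb class, noun class) pair. This is an illustration only, not the official evaluation code.

from typing import List, Tuple

def top1_action_accuracy(predictions: List[Tuple[int, int]],
                         ground_truth: List[Tuple[int, int]]) -> float:
    """An action counts as correct only if both its verb class and
    its noun class match the ground-truth pair."""
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Toy example: the second prediction gets the verb right but the noun wrong.
print(top1_action_accuracy([(0, 3), (2, 5)], [(0, 3), (2, 7)]))  # -> 0.5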

The Team

We are a group of computer vision researchers from the University of Bristol, the University of Toronto, and the University of Catania.

Dima Damen

Principal Investigator
University of Bristol
United Kingdom

Sanja Fidler

Co-Investigator
University of Toronto
Canada

Giovanni Maria Farinella

Co-Investigator
University of Catania
Italy

Davide Moltisanti

(Apr 2017 - )
University of Bristol

Michael Wray

(Apr 2017 - )
University of Bristol

Hazel Doughty

(Apr 2017 - )
University of Bristol

Toby Perrett

(Apr 2017 - )
University of Bristol

Antonino Furnari

(Jul 2017 - )
University of Catania

Jonathan Munro

(Sep 2017 - )
University of Bristol

Evangelos Kazakos

(Sep 2017 - )
University of Bristol

Will Price

(Oct 2017 - )
University of Bristol

Sponsors

The dataset is sponsored by: