TEgO: Teachable Egocentric Objects Dataset
Principal Investigator(s): Hernisa Kacorri, University of Maryland, College Park
Version: V1
Version Title: TEgO initial deposit
| Name | File Type | Size | Last Modified |
|---|---|---|---|
| | application/x-gtar | 2 GB | 05/31/2019 09:06 AM |
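The deposit is distributed as a single tar archive (`application/x-gtar`). Below is a minimal Python sketch for unpacking it and counting the extracted images; the archive name `TEgO.tar.gz` and the `.jpg` extension are assumptions, since the file name and image format are not listed above.

```python
# Minimal sketch: unpack the downloaded deposit and count its images.
# "TEgO.tar.gz" is a hypothetical file name; tarfile.open auto-detects
# whether the archive is compressed.
import tarfile
from pathlib import Path

archive = Path("TEgO.tar.gz")  # hypothetical name for the ~2 GB deposit
with tarfile.open(archive) as tar:
    tar.extractall(path="TEgO")

# Assumes JPEG files; adjust the pattern if the images use another format.
image_count = sum(1 for _ in Path("TEgO").rglob("*.jpg"))
print(f"extracted {image_count} JPEG images")
```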
Project Citation:
Kacorri, Hernisa. TEgO: Teachable Egocentric Objects Dataset. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2019-05-31. https://doi.org/10.3886/E109967V1
Project Description
Summary:
The TEgO dataset includes egocentric images of 19 distinct objects taken by two people for training and testing a teachable object recognizer. Specifically, a blind individual and a sighted individual each took photos of the objects with a smartphone camera to train and test their own teachable object recognizer.
A detailed description of the dataset (people, objects, environment, and lighting) can be found in our CHI 2019 paper titled “Hands holding clues for object recognition in teachable machines” (see citation below).
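As a rough illustration of how a per-user teachable object recognizer can be trained and tested on such images, the Python sketch below extracts features with a pretrained CNN and classifies them with a nearest-neighbor model. This is not the pipeline from the CHI 2019 paper, and the directory layout (`TEgO/blind_user/training/<object>/*.jpg`) is a hypothetical example, not the dataset's documented structure.

```python
# Sketch of a "teachable" object recognizer: a pretrained CNN is used as a
# fixed feature extractor, and a nearest-neighbor classifier is fit on one
# user's training photos, then evaluated on that user's testing photos.
from pathlib import Path

import torch
from PIL import Image
from sklearn.neighbors import KNeighborsClassifier
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone with its classification head removed.
backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()
backbone.eval()

def embed(image_path: Path) -> torch.Tensor:
    """Return a feature vector for one photo."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        return backbone(preprocess(image).unsqueeze(0)).squeeze(0)

def load_split(root: Path):
    """Collect (feature, label) pairs from root/<object_name>/*.jpg."""
    features, labels = [], []
    for object_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for image_path in object_dir.glob("*.jpg"):
            features.append(embed(image_path).numpy())
            labels.append(object_dir.name)
    return features, labels

# Hypothetical paths: one user's training and testing photos.
train_x, train_y = load_split(Path("TEgO/blind_user/training"))
test_x, test_y = load_split(Path("TEgO/blind_user/testing"))

classifier = KNeighborsClassifier(n_neighbors=1).fit(train_x, train_y)
print("accuracy:", classifier.score(test_x, test_y))
```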
Funding Sources:
NIDILRR (90REGE0008)
Scope of Project
Data Type(s):
images: photographs, drawings, graphical representations
Related Publications
Published Versions
This material is distributed exactly as it arrived from the data depositor. ICPSR has not checked or processed this material. Users should consult the investigator(s) if further information is desired.