We provide subtitles for this video. Watch it at https://youtu.be/JEUb-Q7TbJQ
We report on a demonstration using the Imperial College Domestic Environment Dataset. This dataset is well suited to the test since all scenes were captured under varying clutter and contain several objects. In this demonstration, the system initially had no prior knowledge, and all objects were recognized as unknown. A user then interacted with the system in an online manner and taught it all object categories, including amita, colgate, lipton, elite, oreo, and softkings, using the objects extracted from the scenes captured by the blue cameras shown at the beginning of the video. The system conceptualized those categories using the extracted object views. Afterward, the system was tested on the remaining ten scenes, captured from different viewpoints (shown by the red cameras). The system recognized most objects correctly using the knowledge learned from the first three scenes. A few misclassifications occurred; the underlying reason was that, at some points, the object tracking could not track the object accurately, so the distinctive parts of the object were not included in the object’s point cloud.
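The teach-then-recognize loop described above can be sketched as follows. This is a minimal illustration, not the system's actual implementation: it assumes each object view is reduced to a plain feature vector and uses nearest-neighbor matching with a distance threshold to decide when an object is "unknown"; the class name, threshold, and toy feature vectors are all hypothetical.

```python
import math

class OpenEndedLearner:
    """Minimal sketch of open-ended category learning: a category is
    conceptualized as the set of object views taught for it, and
    recognition is nearest-neighbor matching with an 'unknown' threshold."""

    def __init__(self, unknown_threshold=0.5):
        self.categories = {}  # category name -> list of stored feature vectors
        self.unknown_threshold = unknown_threshold

    def teach(self, category, view):
        # A user teaches a category online by providing an extracted object view.
        self.categories.setdefault(category, []).append(view)

    def recognize(self, view):
        # With no prior knowledge, every object is reported as 'unknown'.
        best_label, best_dist = "unknown", float("inf")
        for label, views in self.categories.items():
            for stored in views:
                d = math.dist(view, stored)
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label if best_dist <= self.unknown_threshold else "unknown"

learner = OpenEndedLearner()
print(learner.recognize([0.2, 0.9]))    # no knowledge yet, so "unknown"
learner.teach("oreo", [0.2, 0.9])
learner.teach("lipton", [0.8, 0.1])
print(learner.recognize([0.25, 0.85]))  # nearest stored view is oreo's
```

If object tracking degrades and a distinctive part of the object is missing from the extracted view, its feature vector drifts away from the stored views, which is exactly when a scheme like this misclassifies or falls back to "unknown".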
Later, we moved the system to two new contexts. The first context contained six instances of three categories: oreo, amita, and lipton. The robot recognized all objects correctly using knowledge from the previous environment. The second context comprised four instances of two object categories with very similar shapes (lipton vs. softkings). The system recognized these objects properly using the learned knowledge, although some misclassifications occurred throughout the demonstration. This evaluation illustrates the process of learning object categories in an open-ended fashion.