Dataset Declaration

This document lists the datasets used in the thesis. Each entry includes:
- Dataset name
- A link to the dataset
- A brief description of the data
- The thesis chapters where the data is used

1. NYU Depth Dataset V2 (abbrev. NYU V2)
Data link: https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html
Description: The NYU-Depth V2 dataset comprises video sequences from a variety of indoor scenes, recorded by both the RGB and depth cameras of the Microsoft Kinect. It offers 1,449 indoor RGB-D images with densely segmented object instances and 2D support relationships.
Usage: The NYU V2 dataset is used in Chapters 3 and 4 for instance segmentation and support inference.

2. SceneNN
Data link: http://103.24.77.34/scenenn/home/
Description: The SceneNN dataset contains 50 sophisticated 3D scenes segmented at the instance level. The scenes were captured in various settings, e.g., offices, dormitories, classrooms, and pantries. All scenes are reconstructed into triangle meshes with per-vertex and per-pixel annotations.
Usage: In Chapter 3, we use the SceneNN dataset to extract support priors between 3D objects for support inference.

3. SUN RGB-D
Data link: https://rgbd.cs.princeton.edu/
Description: The SUN RGB-D dataset is a 3D indoor scene understanding benchmark. It contains 10,335 RGB-D images labelled with oriented 3D object bounding boxes, room layout bounding boxes, and camera poses.
Usage: In Chapters 4 and 5, we use the RGB images from SUN RGB-D as the input for single-view scene modelling and reconstruction, with the corresponding camera poses, object bounding boxes, and room bounding boxes used for supervision.

4. Pix3D
Data link: http://pix3d.csail.mit.edu/
Description: The Pix3D dataset contains 395 furniture CAD models across 9 categories, aligned with 10,069 images.
Usage: We use the Pix3D dataset for single-view object reconstruction in Chapter 5.

5. ShapeNetCore
Data link: https://shapenet.org/
Description: ShapeNetCore is a subset of the full ShapeNet dataset with single clean 3D models and manually verified category and alignment annotations. It covers 55 common object categories with about 51,300 unique 3D models, including all 12 object categories of PASCAL 3D+, a popular 3D computer vision benchmark.
Usage: We use the ShapeNetCore dataset for model retrieval in Chapter 4 and for shape completion in Chapter 6.

6. ShapeNet-Skeleton
Data link: https://github.com/tangjiapeng/SkeletonBridgeRecon
Description: ShapeNet-Skeleton contains the meso-skeleton points extracted from the ShapeNetCore models.
Usage: In Chapter 6, we use the ShapeNet-Skeleton dataset for our shape completion task.

7. SUNCG
Data link: https://sscnet.cs.princeton.edu/
Description: SUNCG is a synthetic indoor scene dataset; it contains 2,644 unique object meshes covering 84 categories.
Usage: We use the CAD models in the SUNCG dataset for shape retrieval in Chapter 4.

8. ScanNet
Data link: http://www.scan-net.org/
Description: ScanNet contains 1,513 real-scanned 3D scenes with point-wise annotated object instances.
Usage: We use the ScanNet dataset to obtain object height priors in Chapter 4.