The code used to conduct the registration experiments in this thesis can be found on GitHub: https://github.com/nancyzhao0828/Gneralised-discriminative-optimisation.

The datasets for instrument registration can be found on GitHub: https://github.com/nancyzhao0828/Datasets-for-instruments-registration.

The public datasets used in the experiments include the following datasets/models:

1. Stanford Bunny (Chapters 3 & 5): http://www-graphics.stanford.edu
2. UWA dataset (Chapters 3 & 4): http://staffhome.ecm.uwa.edu.au/~00053650/databases.html
3. Dancing Children (Chapters 3, 4 & 5): http://visionair.ge.imati.cnr.it/
4. ModelNet40 (Chapters 3, 4 & 5): http://modelnet.cs.princeton.edu/
5. Indoor Scene (Chapter 3): http://rgbd-dataset.cs.washington.edu/
6. Happy Buddha (Chapter 4): http://www-graphics.stanford.edu/data/3dscanrep
7. Skeleton Hand (Chapters 4 & 5): https://www.cc.gatech.edu/projects/large_models/
8. Bimba Model (Chapters 4 & 5): http://visionair.ge.imati.cnr.it/
9. ptCloudRef and ptCloudCurrent (Chapter 4): https://ww2.mathworks.cn/help/vision/ug/3-d-point-cloud-registration-and-stitching.html?requestedDomain=cn

[Registration between scene and model]
The dataset for the registration between scenes and models (bunny and chicken) includes the following information:
1) The RGBfile folder contains the .png files captured with a Microsoft Kinect V2.
2) The PLYfile folder contains the reconstruction results after segmentation.
3) The Templates folder contains the chicken model and the parasaurolophus model, which are matched to the segmented scenes (a minimal registration sketch is given at the end of this appendix).

[Object Tracking on the Stanford Bunny Model]
The dataset for the object tracking experiments on the bunny model includes the following information:
1) The DEPTHfile folder and the RGBfile folder contain the frames captured with a Microsoft Kinect V2. The former holds the depth maps and the latter the RGB images of the scenes.
2) The PCDfile folder contains the scenes reconstructed using the camera parameters provided in the cameraParameters folder (see the reconstruction sketch at the end of this appendix). We convert each .pcd file in PCDfile to a .ply file and place it in the Reconstruction Results folder (see the conversion sketch at the end of this appendix).
3) The Segmentation Results folder contains the bunny model and the person holding it, both segmented from the reconstructed scenes.
4) The Templates folder contains the bunny model, which is matched to the reconstructed scenes or the segmented results.

[Object Tracking on the Chicken Model]
The dataset for the object tracking experiments on the chicken model includes the following information:
1) The DEPTHfile folder and the RGBfile folder contain the frames captured with a Microsoft Kinect V2. The former holds the depth maps and the latter the RGB images of the scenes.
2) The PCDfile folder contains the scenes reconstructed using the camera parameters provided in the cameraParameters folder. As for the bunny dataset, we convert each .pcd file in PCDfile to a .ply file and place it in the Reconstruction Results folder.
3) The Segmentation Results folder contains the chicken model and the person holding it, both segmented from the reconstructed scenes.
4) The Templates folder contains the chicken model, which is matched to the reconstructed scenes or the segmented results.
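[Sketch: matching a template to a segmented scene]
The following is a minimal sketch of the template-to-scene matching step, written with Open3D. Point-to-point ICP is used here only as a generic stand-in; the actual experiments use the generalised discriminative optimisation method in the repository linked above. The file names, voxel size, and correspondence threshold are placeholder assumptions, not values from the thesis.

```python
import numpy as np
import open3d as o3d

# Hypothetical file names; substitute files from the Templates and
# PLYfile (or Segmentation Results) folders.
template = o3d.io.read_point_cloud("Templates/chicken.ply")
scene = o3d.io.read_point_cloud("PLYfile/segmented_scene.ply")

# Downsample both clouds to speed up correspondence search.
template_ds = template.voxel_down_sample(voxel_size=0.005)
scene_ds = scene.voxel_down_sample(voxel_size=0.005)

# Point-to-point ICP; the 0.02 m threshold and identity initialisation
# are placeholder choices for illustration only.
result = o3d.pipelines.registration.registration_icp(
    template_ds, scene_ds, 0.02, np.identity(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("Fitness:", result.fitness)
print("Estimated transform:\n", result.transformation)
```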
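[Sketch: reconstructing a scene from a depth frame]
The reconstruction step in the tracking datasets back-projects each depth frame through the camera intrinsics. A minimal Open3D sketch is shown below; the intrinsic values and file names are placeholders, and the real calibration should be taken from the cameraParameters folder.

```python
import open3d as o3d

# Kinect V2 depth frames are 512x424; the intrinsics below are
# placeholders -- substitute the calibration from cameraParameters.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    width=512, height=424, fx=365.0, fy=365.0, cx=256.0, cy=212.0)

# Hypothetical file name; any 16-bit depth .png in DEPTHfile works the same way.
depth = o3d.io.read_image("DEPTHfile/frame_0001.png")

# Back-project each depth pixel to a 3-D point
# (depth_scale=1000 assumes millimetre depth values, the Kinect V2 default).
pcd = o3d.geometry.PointCloud.create_from_depth_image(
    depth, intrinsic, depth_scale=1000.0)

o3d.io.write_point_cloud("PCDfile/frame_0001.pcd", pcd)
```

The RGB images in RGBfile can additionally be fused with the depth maps once they are registered to the depth frame, but the geometry needed for registration comes from the depth channel alone.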
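[Sketch: converting .pcd files to .ply]
Both tracking datasets state that the .pcd files in PCDfile are converted to .ply files in the Reconstruction Results folder. A minimal batch-conversion sketch using Open3D follows; the folder names mirror the dataset layout described above.

```python
import glob
import os
import open3d as o3d

# Convert every .pcd file in PCDfile to a .ply file in Reconstruction Results.
os.makedirs("Reconstruction Results", exist_ok=True)
for pcd_path in glob.glob("PCDfile/*.pcd"):
    cloud = o3d.io.read_point_cloud(pcd_path)
    stem = os.path.splitext(os.path.basename(pcd_path))[0]
    o3d.io.write_point_cloud(
        os.path.join("Reconstruction Results", stem + ".ply"), cloud)
```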