Abstract
With the rise of deep neural networks, a number of approaches for learning over 3D data have gained popularity. In this paper, we take advantage of one of these approaches, bilateral convolutional layers, to propose a novel end-to-end deep auto-encoder architecture that efficiently encodes and reconstructs 3D point clouds. Bilateral convolutional layers project the input point cloud onto an even tessellation of a hyperplane in (d+1)-dimensional space known as the permutohedral lattice and perform convolutions over this representation. In contrast to existing point-cloud-based learning approaches, this allows us to learn over the underlying geometry of the object and create a robust global descriptor. We demonstrate its accuracy by evaluating across the ShapeNet and ModelNet datasets, covering two main scenarios: known and unknown object reconstruction. These experiments show that our network generalises well from seen classes to unseen classes.
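The paper itself does not include code; the following is a minimal, hypothetical sketch in PyTorch of the pipeline the abstract describes: a bilateral-convolution-style encoder producing a global descriptor, followed by an MLP decoder that regresses the reconstructed point cloud. The class names `BilateralConvLayer` and `PointCloudAutoEncoder`, the layer widths, and the decoder design are assumptions, and the actual splat-convolve-slice operation over the permutohedral lattice is approximated by a pointwise MLP so the sketch stays self-contained and runnable.

```python
import torch
import torch.nn as nn


class BilateralConvLayer(nn.Module):
    """Hypothetical stand-in for a bilateral convolutional layer (BCL).

    A real BCL splats per-point features onto a permutohedral lattice (an even
    tessellation of a hyperplane in (d+1)-dimensional space), convolves over the
    occupied lattice cells, and slices the result back onto the points. To keep
    this sketch runnable, that pipeline is approximated by a pointwise MLP
    conditioned on the point coordinates.
    """

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # +3 because the xyz coordinates are concatenated to the features.
        self.mlp = nn.Sequential(
            nn.Conv1d(in_channels + 3, out_channels, kernel_size=1),
            nn.BatchNorm1d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, features: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
        # features: (B, C, N) per-point features, xyz: (B, 3, N) point coordinates
        return self.mlp(torch.cat([features, xyz], dim=1))


class PointCloudAutoEncoder(nn.Module):
    """Minimal encoder-decoder: BCL-style encoder -> global descriptor -> MLP decoder."""

    def __init__(self, num_points: int = 2048, latent_dim: int = 512):
        super().__init__()
        self.num_points = num_points
        self.bcl1 = BilateralConvLayer(3, 64)
        self.bcl2 = BilateralConvLayer(64, 128)
        self.bcl3 = BilateralConvLayer(128, latent_dim)
        # Decoder regresses the full point set from the global descriptor.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, num_points * 3),
        )

    def encode(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, 3, N); the coordinates themselves serve as the initial features.
        h = self.bcl1(xyz, xyz)
        h = self.bcl2(h, xyz)
        h = self.bcl3(h, xyz)
        # Max-pool over points to obtain a robust global shape descriptor.
        return torch.max(h, dim=2).values  # (B, latent_dim)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        code = self.encode(xyz)
        return self.decoder(code).view(-1, 3, self.num_points)


if __name__ == "__main__":
    model = PointCloudAutoEncoder()
    cloud = torch.rand(2, 3, 2048)       # two random input point clouds
    reconstruction = model(cloud)        # (2, 3, 2048)
    print(reconstruction.shape)
```

In practice the encoder would be trained end-to-end with a set-to-set reconstruction loss such as Chamfer distance, so that the pooled latent code serves as the global descriptor evaluated on ShapeNet and ModelNet.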
| Original language | English |
|---|---|
| DOIs | |
| Publication status | Published - 1 Jan 2019 |
| Externally published | Yes |
| Event | IMVIP 2019: Irish Machine Vision & Image Processing, Technological University Dublin, Dublin, Ireland; Duration: 28 Aug 2019 → 30 Aug 2019 |
Conference
| Conference | IMVIP 2019: Irish Machine Vision & Image Processing |
|---|---|
| Country/Territory | Ireland |
| City | Dublin |
| Period | 28/08/19 → 30/08/19 |
Keywords
- deep neural networks
- 3D data
- bilateral convolutional layers
- deep auto-encoder
- 3D point clouds
- permutohedral lattice
- global descriptor
- ShapeNet
- ModelNet
- object reconstruction