My general research interest lies in solving memory- and computation-intensive problems through innovative algorithm-architecture mapping. My current focus is on improving the scalability of graph representation learning. I have developed sampling methods to train Graph Convolutional Networks (GCNs) efficiently and accurately, especially for deep models and large graphs. I have also developed various GCN/CNN accelerators through parallelization on heterogeneous platforms (GPU, CPU, and FPGA).