Boosting the Efficiency of Graph Convolutional Networks via Algorithm and Accelerator Co-Design

dc.contributor.advisor: Lin, Yingyan
dc.creator: You, Haoran
dc.date.accessioned: 2022-09-23T16:20:58Z
dc.date.available: 2022-09-23T16:20:58Z
dc.date.created: 2022-08
dc.date.issued: 2022-08-11
dc.date.submitted: August 2022
dc.date.updated: 2022-09-23T16:20:58Z
dc.description.abstract: Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art graph learning model. However, performing GCN inference over large graph datasets can be notoriously challenging, limiting their application to large real-world graphs and hindering the exploration of deeper and more sophisticated GCN models. This is because real-world graphs can be extremely large and sparse. Furthermore, their node degrees tend to follow a power-law distribution, producing highly irregular adjacency matrices that cause prohibitive inefficiencies in both data processing and data movement, substantially limiting the achievable GCN acceleration efficiency. To this end, this thesis proposes a GCN algorithm and accelerator co-design framework dubbed GCoD, which can largely alleviate the aforementioned GCN irregularity and boost GCNs' inference efficiency. Specifically, on the algorithm level, GCoD integrates a split-and-conquer GCN training strategy that polarizes the graphs to be either denser or sparser in local neighborhoods without compromising the model accuracy, resulting in graph adjacency matrices that (mostly) have merely two levels of workload and enjoy largely enhanced regularity and thus ease of acceleration. On the hardware level, we further develop a dedicated two-pronged accelerator with a separate engine to process each of the aforementioned denser and sparser workloads, further boosting the overall utilization and acceleration efficiency. Extensive experiments and ablation studies validate that GCoD consistently reduces the number of off-chip accesses, leading to speedups of 15286x, 294x, 7.8x, and 2.5x over CPUs, GPUs, and the prior-art GCN accelerators HyGCN and AWB-GCN, respectively, while maintaining or even improving the task accuracy. Additionally, we visualize GCoD-trained graph adjacency matrices for a better understanding of its advantages.
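The abstract describes polarizing a power-law graph into a denser and a sparser workload. A minimal sketch of that idea (not the actual GCoD training algorithm, which learns the polarization; the degree threshold and function names here are hypothetical) is to reorder nodes by degree so the adjacency matrix separates into a dense block of hub rows and a sparse remainder, each of which a dedicated engine could then process:

```python
import numpy as np

def polarize_by_degree(adj, degree_threshold=2):
    """Reorder nodes so high-degree (denser) rows come first.

    Returns the reordered adjacency matrix and the number of nodes
    assigned to the denser workload. Illustrative only: GCoD learns
    this split during training rather than using a fixed threshold.
    """
    degrees = adj.sum(axis=1)
    # Stable descending sort by degree: hubs first, leaves last.
    order = np.argsort(-degrees, kind="stable")
    reordered = adj[np.ix_(order, order)]
    n_dense = int((degrees > degree_threshold).sum())
    return reordered, n_dense

# Toy 5-node star graph: node 0 is a hub, mimicking a power-law
# degree distribution in miniature.
adj = np.array([
    [0, 1, 1, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
])
reordered, n_dense = polarize_by_degree(adj)
```

After reordering, the top-left `n_dense` rows form the regular, dense workload and the remaining rows the sparse one, which is the two-level structure the abstract attributes to GCoD-trained adjacency matrices.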
dc.format.mimetype: application/pdf
dc.identifier.citation: You, Haoran. "Boosting the Efficiency of Graph Convolutional Networks via Algorithm and Accelerator Co-Design." (2022) Master's Thesis, Rice University. https://hdl.handle.net/1911/113242.
dc.identifier.uri: https://hdl.handle.net/1911/113242
dc.language.iso: eng
dc.rights: Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder.
dc.subject: GCNs
dc.subject: algorithm and accelerator co-design
dc.title: Boosting the Efficiency of Graph Convolutional Networks via Algorithm and Accelerator Co-Design
dc.type: Thesis
dc.type.material: Text
thesis.degree.department: Electrical and Computer Engineering
thesis.degree.discipline: Engineering
thesis.degree.grantor: Rice University
thesis.degree.level: Masters
thesis.degree.name: Master of Science
Files

Original bundle
Name: YOU-DOCUMENT-2022.pdf
Size: 2.6 MB
Format: Adobe Portable Document Format

License bundle
Name: PROQUEST_LICENSE.txt
Size: 5.84 KB
Format: Plain Text

Name: LICENSE.txt
Size: 2.6 KB
Format: Plain Text