TACoS: Transformer and Accelerator Co-Search Towards Ubiquitous Vision Transformer Acceleration

dc.contributor.advisor: Lin, Yingyan
dc.creator: Puckett, Daniel
dc.date.accessioned: 2024-01-22T22:13:30Z
dc.date.available: 2024-01-22T22:13:30Z
dc.date.created: 2023-08
dc.date.issued: 2023-12-06
dc.date.submitted: August 2023
dc.date.updated: 2024-01-22T22:13:30Z
dc.description.abstract: Recent works have combined pruned Vision Transformer (ViT) models with specialized accelerators to achieve strong accuracy/latency tradeoffs on many computer vision tasks. However, adapting these systems to real-world scenarios with specific accuracy, latency, power, and/or area constraints demands significant expert labor. Automating the design and exploration of these systems is a promising solution but is hampered by two unsolved problems: 1) existing methods of pruning the attention maps of a ViT model fully train the model, prune its attention maps, and then fine-tune it, which is infeasible when exploring a design space containing millions of model architectures; and 2) the design space is complicated, and the system's area efficiency, scalability, and data movement suffer, because no unified accelerator template exists that efficiently computes every operation in sparse ViT models. To solve these problems, I propose TACoS: Transformer and Accelerator Co-Search, the first automated method to co-design pruned ViT model and accelerator pairs. TACoS answers the above challenges with 1) a novel ViT search algorithm that simultaneously prunes and fine-tunes many models at many different sparsity ratios, and 2) the first unified ViT accelerator template, which efficiently accelerates every operation in sparse ViT models using adaptable PEs and reconfigurable PE lanes. With these innovations, the TACoS framework quickly and automatically designs state-of-the-art systems for real-world applications and achieves accuracy/latency tradeoffs superior to those of hand-crafted ViT models and accelerators.
dc.format.mimetype: application/pdf
dc.identifier.citation: Puckett, Daniel. "TACoS: Transformer and Accelerator Co-Search Towards Ubiquitous Vision Transformer Acceleration." (2023) Master's thesis, Rice University. https://hdl.handle.net/1911/115349
dc.identifier.uri: https://hdl.handle.net/1911/115349
dc.language.iso: eng
dc.rights: Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder.
dc.subject: Vision Transformer
dc.subject: hardware accelerator
dc.subject: machine learning
dc.subject: pruning
dc.subject: sparsity
dc.title: TACoS: Transformer and Accelerator Co-Search Towards Ubiquitous Vision Transformer Acceleration
dc.type: Thesis
dc.type.material: Text
thesis.degree.department: Electrical and Computer Engineering
thesis.degree.discipline: Engineering
thesis.degree.grantor: Rice University
thesis.degree.level: Masters
thesis.degree.name: Master of Science
Files

Original bundle
Name: PUCKETT-DOCUMENT-2023.pdf
Size: 1.62 MB
Format: Adobe Portable Document Format

License bundle
Name: PROQUEST_LICENSE.txt
Size: 5.84 KB
Format: Plain Text

Name: LICENSE.txt
Size: 2.98 KB
Format: Plain Text