Prefetching and buffer management for parallel I/O systems
dc.contributor.advisor | Varman, Peter J. | en_US |
dc.creator | Kallahalla, Mahesh | en_US |
dc.date.accessioned | 2009-06-04T08:02:09Z | en_US |
dc.date.available | 2009-06-04T08:02:09Z | en_US |
dc.date.issued | 2000 | en_US |
dc.description.abstract | In parallel I/O systems the I/O buffer can be used to improve I/O performance in two ways: by caching blocks to avoid repeated disk accesses to the same block, thereby reducing I/O latency, and by buffering prefetched blocks so that the load on the disks becomes more uniform. To make the best use of the available parallelism and locality in I/O accesses, it is necessary to design prefetching and caching algorithms that schedule reads intelligently, so that the most useful blocks are prefetched into the buffer and the most valuable blocks are retained when evictions become necessary. This dissertation focuses on algorithms for buffer management in parallel I/O systems. Our aim is to exploit the high parallelism provided by multiple disks to reduce the average read latency seen by an application. The thesis is that traditional greedy strategies fail to exploit I/O parallelism, necessitating new algorithms that make full use of the available I/O resources. We show that buffer management in parallel I/O systems is fundamentally different from that in single-disk systems, and we develop new algorithms that carefully decide which blocks to prefetch and when, together with which blocks to retain in the buffer. Our emphasis is on designing computationally simple algorithms that optimize the number of I/Os performed. We consider two classes of I/O access patterns, read-once and read-often, based on the frequency of accesses to the same data. For buffer management in both classes of accesses, we identify fundamental bounds on the performance of online algorithms, study the performance of intuitive strategies, and present randomized and deterministic algorithms that provide stronger performance guarantees. | en_US |
dc.format.extent | 110 p. | en_US |
dc.format.mimetype | application/pdf | en_US |
dc.identifier.callno | THESIS E.E. 2001 KALLAHALLA | en_US |
dc.identifier.citation | Kallahalla, Mahesh. "Prefetching and buffer management for parallel I/O systems." (2000) Diss., Rice University. <a href="https://hdl.handle.net/1911/17987">https://hdl.handle.net/1911/17987</a>. | en_US |
dc.identifier.uri | https://hdl.handle.net/1911/17987 | en_US |
dc.language.iso | eng | en_US |
dc.rights | Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder. | en_US |
dc.subject | Electronics | en_US |
dc.subject | Electrical engineering | en_US |
dc.subject | Computer science | en_US |
dc.title | Prefetching and buffer management for parallel I/O systems | en_US |
dc.type | Thesis | en_US |
dc.type.material | Text | en_US |
thesis.degree.department | Electrical Engineering | en_US |
thesis.degree.discipline | Engineering | en_US |
thesis.degree.grantor | Rice University | en_US |
thesis.degree.level | Doctoral | en_US |
thesis.degree.name | Doctor of Philosophy | en_US |
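The dissertation text itself is not part of this record. As a rough illustration of the idea described in the abstract, the Python sketch below compares a naive demand-fetching baseline (one block per parallel I/O step, no prefetching) against a simple lookahead prefetching policy on a read-once reference string striped across several disks. The function names, the random block placement, and the lookahead policy are hypothetical choices made for this sketch; they are not the algorithms developed in the thesis.

import random


def demand_fetch_steps(refs):
    # Naive demand fetching: each block is fetched only when it is
    # referenced, one block per parallel I/O step. For a read-once
    # reference string every access misses, so the cost is one step
    # per reference and the extra disks sit idle.
    return len(refs)


def lookahead_prefetch_steps(refs, disk_of, num_disks, buffer_size):
    # Lookahead prefetching: in every parallel I/O step fetch at most
    # one block from each disk, choosing the not-yet-buffered blocks
    # that are referenced earliest, subject to free buffer space.
    # Read-once blocks are evicted as soon as they are consumed.
    buffered = set()
    pos = 0          # index of the next reference to be consumed
    steps = 0
    n = len(refs)
    while pos < n:
        free = buffer_size - len(buffered)
        busy_disks = set()
        for block in refs[pos:]:
            if free == 0 or len(busy_disks) == num_disks:
                break
            disk = disk_of[block]
            if block not in buffered and disk not in busy_disks:
                busy_disks.add(disk)
                buffered.add(block)
                free -= 1
        steps += 1
        # Consume the prefix of references now resident in the buffer.
        while pos < n and refs[pos] in buffered:
            buffered.discard(refs[pos])
            pos += 1
    return steps


if __name__ == "__main__":
    random.seed(0)
    num_disks, buffer_size, n = 4, 16, 200
    refs = list(range(n))                    # read-once: each block appears once
    random.shuffle(refs)
    disk_of = {b: random.randrange(num_disks) for b in refs}  # random placement
    print("demand-fetch parallel I/O steps:     ", demand_fetch_steps(refs))
    print("lookahead-prefetch parallel I/O steps:",
          lookahead_prefetch_steps(refs, disk_of, num_disks, buffer_size))

With random placement and a modest buffer, the lookahead policy keeps several disks busy each step and finishes in roughly n / num_disks steps, whereas the demand-fetching baseline needs one step per reference, which is the kind of gap between greedy and parallelism-aware strategies that the abstract refers to.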