Stochastic Scheduling: Strategies for Abandonment Management
dc.contributor.advisor | Perez-Salazar, Sebastian | en_US |
dc.creator | Xu, Yihua | en_US |
dc.date.accessioned | 2024-08-30T15:47:42Z | en_US |
dc.date.created | 2024-08 | en_US |
dc.date.issued | 2024-06-23 | en_US |
dc.date.submitted | August 2024 | en_US |
dc.date.updated | 2024-08-30T15:47:42Z | en_US |
dc.description.abstract | Motivated by applications where impatience is pervasive and resources are limited, we study a job scheduling model where jobs may depart at an unknown point in time. Initially, we have access to a single server and n jobs with known non-negative values. These jobs also have unknown stochastic service and departure times with known distributional information, which we assume to be independent. When the server is free, we can run a job that has neither been run nor departed and collect its value. This occupies the server for an unknown amount of time, and we aim to design a policy that maximizes the expected total value obtained from jobs run on the server. Natural formulations of this problem suffer from the curse of dimensionality. Furthermore, we show that even when the service and departure times are deterministic, our problem is NP-hard to solve. Hence, we focus on policies that can provide high expected reward compared to the optimal value. We demonstrate a polynomial-time linear program (LP)-based approximation algorithm with guaranteed performance under mild assumptions on service times. Our methodology is flexible, allowing additional constraints to be incorporated. We develop efficient approximation algorithms with provable guarantees for extensions such as job release times, deadlines, and knapsack constraints. We further extend our analysis to the setting where all jobs have independent and identically distributed (i.i.d.) service times. In this case, we show that the greedy policy that always runs the highest-valued job whenever the server is free (sketched after the record below) achieves at least a 1/2 fraction of the optimal expected value. We evaluate our LP-based policies and the greedy policy empirically on synthetic and real datasets. | en_US |
dc.embargo.lift | 2025-02-01 | en_US |
dc.embargo.terms | 2025-02-01 | en_US |
dc.format.mimetype | application/pdf | en_US |
dc.identifier.citation | Xu, Yihua. Stochastic Scheduling: Strategies for Abandonment Management. (2024). Masters thesis, Rice University. https://hdl.handle.net/1911/117758 | en_US |
dc.identifier.uri | https://hdl.handle.net/1911/117758 | en_US |
dc.language.iso | eng | en_US |
dc.rights | Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder. | en_US |
dc.subject | Approximation Algorithms | en_US |
dc.subject | Discrete Optimization | en_US |
dc.subject | Job Scheduling | en_US |
dc.subject | Online Algorithm | en_US |
dc.subject | Optimization Under Uncertainty | en_US |
dc.title | Stochastic Scheduling: Strategies for Abandonment Management | en_US |
dc.type | Thesis | en_US |
dc.type.material | Text | en_US |
thesis.degree.department | Computational and Applied Mathematics | en_US |
thesis.degree.discipline | Engineering | en_US |
thesis.degree.grantor | Rice University | en_US |
thesis.degree.level | Masters | en_US |
thesis.degree.name | Master of Arts | en_US |
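The abstract describes a greedy policy for the i.i.d. service time setting: whenever the server is free, run the highest-valued job that has neither been run nor departed. Below is a minimal Monte Carlo sketch of that policy. The exponential service and departure distributions, the parameter values, and the function name greedy_reward are illustrative assumptions for this sketch, not the thesis's actual model parameters or experimental setup.

```python
# Minimal simulation sketch of the greedy policy from the abstract:
# whenever the server frees up, run the highest-valued job that is still present.
# Exponential service/departure times and all parameter values are assumptions.
import random

def greedy_reward(values, service_rate=1.0, departure_rate=0.5, seed=None):
    """Simulate one run of the greedy policy and return the total value collected."""
    rng = random.Random(seed)
    # Departure times are drawn up front but are unknown to the policy, which
    # only observes whether a job is still present when the server is free.
    departures = [rng.expovariate(departure_rate) for _ in values]
    remaining = list(range(len(values)))  # jobs not yet run
    clock, total = 0.0, 0.0
    while remaining:
        # Jobs whose departure time has passed have abandoned the system.
        remaining = [j for j in remaining if departures[j] > clock]
        if not remaining:
            break
        # Greedy choice: highest-valued job still available.
        j = max(remaining, key=lambda k: values[k])
        remaining.remove(j)
        total += values[j]
        clock += rng.expovariate(service_rate)  # i.i.d. service time occupies the server
    return total

# Example: estimate the expected reward of the greedy policy by averaging many runs.
values = [5.0, 3.0, 2.0, 1.0]
est = sum(greedy_reward(values, seed=s) for s in range(10_000)) / 10_000
print(f"estimated expected greedy reward: {est:.3f}")
```

Under the i.i.d. service time assumption, the abstract's guarantee says the expectation estimated this way is at least half the optimal expected value achievable by any policy.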