Author: Deng, Xiaodi
Date issued: May 2017
Date deposited: 2018-06-19
Citation: Deng, Xiaodi. "A Parallel-In-Time Gradient-Type Method For Optimal Control Problems." (2017). https://hdl.handle.net/1911/102253
URI: https://hdl.handle.net/1911/102253
Note: This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/96102

Abstract: This thesis proposes and analyzes a new parallel-in-time gradient-type method for time-dependent optimal control problems. When the classical gradient method is applied to such problems, each iteration requires the forward solution of the state equations followed by the backward solution of the adjoint equations before the gradient can be computed and the control can be updated. The solution of the state/adjoint equations is computationally expensive and consumes most of the computation time. The proposed parallel-in-time gradient-type method introduces parallelism by splitting the time domain into N subdomains and executing the forward and backward computations in each time subdomain in parallel, using state and adjoint variables at the time subdomain boundaries from the previous optimization iteration as initial values. The method is generalized to allow different time domain partitions for the forward and backward computations and overlapping time subdomains. Convergence analyses for the parallel-in-time gradient-type method applied to discrete-time optimal control problems are established. For linear-quadratic problems, the method is interpreted as a multiple-part splitting method and convergence is proven by analyzing the corresponding iteration matrix. In addition, the connection of the new method to the multiple shooting reformulation of the problem is revealed and an alternative convergence proof based on this connection is established. For general non-linear problems, the new method is combined with metric projection to handle bound constraints on the controls, and convergence of the method is proven. Numerically, the parallel-in-time gradient-type method is applied to linear-quadratic optimal control problems and to a well-rate optimization problem governed by a system of highly non-linear partial differential equations. For linear-quadratic problems, the method exhibits strong scaling on up to 50 cores. The parallelization in time is added on top of the existing parallelization in space used to solve the state/adjoint equations; this is exploited in the well-rate optimization problem. If the existing parallelism in space scales well up to K processors, the addition of time domain decomposition by the proposed method scales well up to K × N processors for a small to moderate number N of time subdomains.

Pages: 297 pp.
Language: English
Title: A Parallel-In-Time Gradient-Type Method For Optimal Control Problems
Type: Technical report
Report number: TR17-07
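
To illustrate the idea described in the abstract, the following is a minimal, serial NumPy sketch of a parallel-in-time gradient-type iteration for a discrete-time linear-quadratic model problem. It is not code from the report: the problem data (A, B, y0, y_d, alpha), the step size, and all variable names are assumptions chosen for illustration, and the inner Python loop over subdomains stands in for the N processors that would execute the subdomain sweeps concurrently. The only coupling between subdomains within an optimization iteration is through the stored boundary vectors, which are one iteration old; the report's convergence analysis addresses exactly this lag.

```python
import numpy as np

# Model problem:  minimize  J(u) = 1/2 sum_{k=1..K} ||y_k - y_d||^2
#                                  + alpha/2 sum_{k=0..K-1} ||u_k||^2
#                 subject to  y_{k+1} = A y_k + B u_k,  y_0 given.
n, m, K, N = 4, 2, 60, 3              # state dim, control dim, time steps, time subdomains
A = 0.9 * np.eye(n)                    # stable state transition (assumed data)
B = 0.1 * np.ones((n, m))
y0 = np.ones(n)
y_d = np.zeros(n)                      # desired state
alpha, step, iters = 1e-2, 0.02, 300   # control penalty, gradient step size, iterations

blocks = np.array_split(np.arange(K), N)   # contiguous time subdomains
u = np.zeros((K, m))                       # control iterate
y_bnd = np.tile(y0, (N, 1))                # lagged state at each subdomain's left boundary
p_bnd = np.zeros((N, n))                   # lagged adjoint at each subdomain's right boundary

for _ in range(iters):
    new_y_bnd, new_p_bnd = y_bnd.copy(), p_bnd.copy()
    g = np.zeros_like(u)

    # The N subdomain sweeps below are mutually independent and would run in
    # parallel; their boundary values come from the previous iteration.
    for j, blk in enumerate(blocks):
        # Forward sweep: start from the lagged boundary state (exact y0 on subdomain 0).
        y = y0.copy() if j == 0 else y_bnd[j].copy()
        traj = [y]
        for k in blk:
            y = A @ y + B @ u[k]
            traj.append(y)
        if j + 1 < N:
            new_y_bnd[j + 1] = traj[-1]    # feeds subdomain j+1 in the next iteration

        # Backward sweep: start from the lagged boundary adjoint
        # (the last subdomain uses the true terminal condition p_K = y_K - y_d).
        p = traj[-1] - y_d if j == N - 1 else p_bnd[j].copy()
        for idx in range(len(blk) - 1, -1, -1):
            k = blk[idx]
            g[k] = alpha * u[k] + B.T @ p      # gradient on time step k
            p = (traj[idx] - y_d) + A.T @ p    # adjoint recursion p_k = (y_k - y_d) + A^T p_{k+1}
        if j > 0:
            new_p_bnd[j - 1] = p               # feeds subdomain j-1 in the next iteration

    u -= step * g                              # gradient-type control update
    y_bnd, p_bnd = new_y_bnd, new_p_bnd

# Serial objective evaluation for reference (not part of the parallel iteration).
y, J = y0.copy(), 0.0
for k in range(K):
    J += 0.5 * alpha * float(u[k] @ u[k])
    y = A @ y + B @ u[k]
    J += 0.5 * float((y - y_d) @ (y - y_d))
print("objective after parallel-in-time gradient iterations:", J)
```

In this sketch the step size is taken small enough for the plain gradient method on the chosen data; the report establishes conditions under which the iteration with lagged boundary values converges, including generalizations to different forward/backward partitions, overlapping subdomains, and projected gradient steps for bound-constrained controls.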