A Parallel-In-Time Gradient-Type Method For Optimal Control Problems

Date
2017-05
Abstract

This thesis proposes and analyzes a new parallel-in-time gradient-type method for time-dependent optimal control problems. When the classical gradient method is applied to such problems, each iteration requires the forward solution of the state equations followed by the backward solution of the adjoint equations before the gradient can be computed and the control can be updated. The solution of the state/adjoint equations is computationally expensive and consumes most of the computation time. The proposed parallel-in-time gradient-type method introduces parallelism by splitting the time domain into N subdomains and executing the forward and backward computations in each time subdomain in parallel, using the state and adjoint variables at the time subdomain boundaries from the previous optimization iteration as initial values. The proposed method is generalized to allow different time domain partitions for the forward and backward computations and overlapping time subdomains. Convergence analyses for the parallel-in-time gradient-type method applied to discrete-time optimal control problems are established. For linear-quadratic problems, the method is interpreted as a multiple-part splitting method and convergence is proven by analyzing the corresponding iteration matrix. In addition, the connection of the new method to the multiple shooting reformulation of the problem is revealed and an alternative convergence proof based on this connection is established. For general non-linear problems, the new method is combined with metric projection to handle bound constraints on the controls, and convergence of the method is proven. Numerically, the parallel-in-time gradient-type method is applied to linear-quadratic optimal control problems and to a well-rate optimization problem governed by a system of highly non-linear partial differential equations. For linear-quadratic problems, the method exhibits strong scaling on up to 50 cores. The parallelization in time is applied on top of the existing parallelization in space used to solve the state/adjoint equations. This is exploited in the well-rate optimization problem. If the existing parallelism in space scales well up to K processors, the addition of time domain decomposition by the proposed method scales well up to K × N processors for a small to moderate number N of time subdomains.
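
To illustrate the iteration structure described in the abstract, the following is a minimal Python sketch of a parallel-in-time gradient-type update for a small discrete-time linear-quadratic model problem. It is not the author's implementation: the problem data (A, B, x0, target), the sizes, and the step size are illustrative choices, and the N subdomain sweeps are written as a sequential loop whose passes are mutually independent (each reads only boundary values from the previous iteration) and would run on separate processors in an actual parallel code.

```python
# Minimal sketch (not the thesis code) of the parallel-in-time gradient-type
# iteration for the discrete-time LQ model problem
#   x_{k+1} = A x_k + B u_k,  k = 0,...,T-1,  x_0 given,
#   J(u) = 1/2 sum_{k=1}^{T} ||x_k||^2 + alpha/2 sum_{k=0}^{T-1} ||u_k||^2.
# All problem data and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, T, N = 4, 2, 64, 8              # state dim, control dim, time steps, subdomains
alpha, step, iters = 1e-2, 0.05, 400  # control penalty, gradient step, iterations
A = 0.9 * np.eye(n) + 0.01 * rng.standard_normal((n, n))
B = 0.1 * rng.standard_normal((n, m))
x0 = rng.standard_normal(n)

bnds = np.linspace(0, T, N + 1, dtype=int)   # time-subdomain boundaries

u = np.zeros((T, m))
x = np.zeros((T + 1, n))   # stored state trajectory from the previous iteration
p = np.zeros((T + 1, n))   # stored adjoint trajectory from the previous iteration

for it in range(iters):
    x_new, p_new = x.copy(), p.copy()
    # The N subdomain sweeps below are mutually independent: each reads only
    # the state/adjoint boundary values of the PREVIOUS iteration, so each
    # pass of this loop can be executed on its own processor.
    for j in range(N):
        lo, hi = bnds[j], bnds[j + 1]
        # Forward state sweep over steps lo..hi-1, seeded with the old state
        # at the left boundary (the exact initial condition for j = 0).
        xk = x0 if j == 0 else x[lo]
        for k in range(lo, hi):
            xk = A @ xk + B @ u[k]
            x_new[k + 1] = xk
        # Backward adjoint sweep over indices hi..lo+1, seeded with the exact
        # terminal condition (j = N-1) or the old adjoint across the boundary.
        for k in range(hi, lo, -1):
            if k == T:
                pk = x_new[T]                   # terminal condition p_T = x_T
            elif k == hi:
                pk = A.T @ p[k + 1] + x_new[k]  # seed from previous iteration
            else:
                pk = A.T @ pk + x_new[k]
            p_new[k] = pk
    # Gradient and control update; g_k = alpha u_k + B^T p_{k+1} is local in k,
    # so this step also parallelizes trivially over the time subdomains.
    g = alpha * u + p_new[1:] @ B
    u -= step * g
    x, p = x_new, p_new
    if it % 100 == 0 or it == iters - 1:
        J = 0.5 * np.sum(x_new[1:] ** 2) + 0.5 * alpha * np.sum(u ** 2)
        print(f"iter {it:4d}   J = {J:.6e}   ||grad|| = {np.linalg.norm(g):.3e}")
```

Replacing the boundary seeds `x[lo]` and `p[k + 1]` with the values just computed in the neighboring subdomains would recover the classical (serial) gradient iteration; keeping the previous iteration's values is what decouples the subdomains and makes the sweeps parallel.
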

Description
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/96102
Type
Technical report
Citation

Deng, Xiaodi. "A Parallel-In-Time Gradient-Type Method For Optimal Control Problems." (2017) https://hdl.handle.net/1911/102253.
