Pennsylvania State University
Control of Large-Scale Parallel Server Networks
Parallel server networks arise in a variety of applications in data centers, telecommunications, manufacturing, and service systems. Optimal scheduling and routing control of such networks is very challenging because of complex network structures, as well as the heterogeneity of demand and server capabilities. Since exact analysis is prohibitive, diffusion models are developed to provide approximate solutions that support scheduling and routing decisions. Our objective is to solve the optimal scheduling problem for Markovian parallel server networks under the long-run average (ergodic) cost criterion, asymptotically in the Halfin-Whitt regime, in which efficiency and quality of the offered service are balanced.
We consider two formulations of the control problem: (i) both queueing and idleness costs are minimized, and (ii) the queueing cost is minimized while a constraint is imposed on the idleness of all server pools. The second formulation concerns fairness of capacity allocation. The optimal solution of the scheduling problem is approximated in the limit by that of an ergodic diffusion control problem, via the associated HJB equations. Since the existing ergodic control theory does not apply to the class of diffusions arising from such network models, we introduce a new class of ergodic control problems for diffusions. A significant component in solving these ergodic control problems is understanding the stability properties of the limiting controlled diffusions, for which we have developed a leaf elimination algorithm.
In order to prove asymptotic optimality, it is necessary to understand the stochastic stability properties of the network models. We identify a class of stationary Markov scheduling policies under which the queueing processes are geometrically stable. The asymptotic convergence of the value functions relies on an approximation method via spatial truncations for the ergodic control of diffusions. The class of geometrically stable Markov policies plays a key role in this approximation method, since these policies serve as the fixed controls outside a compact set. (This is joint work with Ari Arapostathis at UT Austin.)
Bio: Dr. Guodong Pang is currently an associate professor in the Harold and Inge Marcus Department of Industrial and Manufacturing Engineering at Pennsylvania State University and also an associate professor by courtesy appointment in the Department of
Mathematics. Dr. Pang received his Ph.D. in Operations Research from Columbia University in 2010. He joined Penn State in 2010 and has held the Marcus Career Professorship. His research interests are in applied probability, stochastic networks, and queueing systems, with applications in service systems (customer contact centers, healthcare), energy (smart grids), data centers, cloud computing, and telecommunications. His work has been published in journals such as Annals of Applied Probability, Stochastic Processes and their Applications, Mathematics of Operations Research, Advances in Applied Probability, Queueing Systems, Management Science, and Manufacturing & Service Operations Management. His work has been funded by the US National Science Foundation, the Army Research Office Probability and Statistics Program, the Marcus Endowment Grants, and a College of Engineering Multidisciplinary Seed Grant at Penn State. Dr. Pang received the Outstanding Faculty Award in recognition of excellence in teaching at Penn State in 2016. He has also received several grants to improve engineering education, including Service and Enterprise Engineering Initiative Grants and a Leonhard Center Grant at Penn State.
*Join us for light refreshments and meet our guest from 3:45 to 4:00 in the lobby of Duncan Hall. The colloquium begins at 4:00 and ends at 5:00. Open to the Rice University-affiliated general public.
Monday, September 12, 2016
4:00 PM to 5:00 PM
Duncan Hall, Room 1070