Optimal Resource Allocation and Scheduling in Networks and Systems

Speaker: 

Rahul Singh

Affiliation: 

The Ohio State University
Columbus, Ohio, United States.

Time: 

Monday, 24 February 2020, 11:00 to 12:00

Venue: 

  • A-201 (STCS Seminar Room)

Abstract: I will provide an account of my research across several areas. As exemplars, I will discuss the first two topics in greater detail.

1) Decentralized Control of Stochastic Dynamical Systems: We begin by developing new methods to design decentralized control laws for stochastic dynamical systems that perform as well as an optimal centralized policy. We illustrate these methods on real-time multi-hop communication networks. The problem is challenging because coordination must be induced among the controllers even though no controller knows the states of all agents.

2) Reinforcement Learning: We consider the problem of designing learning rules for Markov decision processes under constraints on the cost expenditures by the controller/agent.
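
One common way to handle such cost constraints is a primal-dual scheme: learn values of the Lagrangian reward r − λ·c while ascending the multiplier λ whenever the cost budget is violated. The following minimal sketch illustrates that idea on a hypothetical one-state, two-action problem; the rewards, costs, budget, and step sizes are illustrative assumptions, not details from the talk.

```python
import random

# Hedged sketch of a primal-dual learning rule for a constrained MDP:
# maximize reward while keeping the average cost under a budget.
# Toy setup (an assumption for illustration): one state, two actions,
#   action 0 -> reward 1.0, cost 1.0; action 1 -> reward 0.5, cost 0.0.

def constrained_q_learning(steps=5000, budget=0.2, seed=0):
    rng = random.Random(seed)
    reward = [1.0, 0.5]
    cost = [1.0, 0.0]
    Q = [0.0, 0.0]   # Q-values of the Lagrangian reward r - lam * c
    lam = 0.0        # Lagrange multiplier on the cost constraint
    for _ in range(steps):
        # epsilon-greedy action selection
        a = rng.randrange(2) if rng.random() < 0.1 else Q.index(max(Q))
        lagrangian = reward[a] - lam * cost[a]
        Q[a] += 0.1 * (lagrangian - Q[a])                 # primal update
        lam = max(0.0, lam + 0.01 * (cost[a] - budget))   # dual ascent
    return Q, lam

Q, lam = constrained_q_learning()
# The multiplier rises toward the level (around 0.5 in this toy problem)
# at which the costly action stops dominating the cheap one.
```

The dual variable λ prices cost expenditure: whenever the sampled cost exceeds the budget, λ grows and the costly action becomes less attractive in the Lagrangian reward.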

3) Asymptotic Smoothness of a Service Discipline: We introduce a new performance metric useful for characterizing schedulers in networks serving real-time traffic, and show that the popular MaxWeight scheduler performs well with respect to it.
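
For readers unfamiliar with it, the MaxWeight rule itself is simple: each slot, serve the queue with the largest product of queue length and current service rate. A minimal sketch (the toy numbers are illustrative, not from the talk):

```python
# MaxWeight scheduling: serve the queue maximizing q_i * r_i,
# where q_i is the queue length and r_i the current service rate.

def maxweight_schedule(queue_lengths, service_rates):
    """Return the index of the queue with the largest weight q_i * r_i."""
    weights = [q * r for q, r in zip(queue_lengths, service_rates)]
    return max(range(len(weights)), key=weights.__getitem__)

# Queues of length 3, 5, 2 with rates 1, 1, 2 give weights 3, 5, 4,
# so queue 1 is served this slot.
print(maxweight_schedule([3, 5, 2], [1, 1, 2]))  # -> 1
```

Because the weight couples backlog with channel quality, MaxWeight is throughput-optimal in a wide class of networks, which is why it is a natural baseline for new performance metrics.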

4) Networked Control Systems: We address the problem of optimally scheduling data packets over an unreliable channel so as to minimize the estimation error of a remote linear estimator tracking the state of a Gauss-Markov process. We show that a simple index rule, which calculates the value of information (VoI) of each packet and then schedules the packet with the largest current VoI, is optimal.
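
A hedged sketch of how such an index rule can look for scalar Gauss-Markov sources x_{t+1} = a·x_t + w_t with w_t ~ N(0, q): an unsampled source's error variance grows as P ← a²P + q, a delivered packet resets it, and the VoI of a packet is taken here to be the error variance it removes. The update rule and parameters below are illustrative assumptions, not the talk's exact model.

```python
# Illustrative VoI index rule for remote estimation of scalar
# Gauss-Markov processes. P[i] is source i's current estimation-error
# variance; a[i] and q[i] are its dynamics and noise parameters.

def voi_index_schedule(P, a, q):
    """Schedule the source with the largest VoI (here, the largest error
    variance), then propagate every error variance one step forward."""
    chosen = max(range(len(P)), key=lambda i: P[i])        # largest VoI
    P_next = [a[i] ** 2 * P[i] + q[i] for i in range(len(P))]
    P_next[chosen] = q[chosen]   # delivery resets the chosen source's error
    return chosen, P_next

# Two sources with errors 4.0 and 1.0: the first has the larger VoI.
chosen, P_next = voi_index_schedule([4.0, 1.0], [1.0, 1.0], [0.5, 0.5])
print(chosen)  # -> 0
```

The appeal of such index rules is that each source's priority depends only on its own state, so the scheduler scales to many sources without solving a joint optimization each slot.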

I will conclude the talk by discussing my immediate and long-term research directions.

Bio: Rahul Singh is a postdoctoral researcher at The Ohio State University. He received the B.Tech. degree in Electrical Engineering from the Indian Institute of Technology Kanpur in 2009, the M.S. degree in Electrical Engineering from the University of Notre Dame in 2011, and the Ph.D. degree in Computer Engineering from the Department of Electrical and Computer Engineering, Texas A&M University, College Station, in 2015. He was previously a postdoctoral researcher at the Laboratory for Information and Decision Systems (LIDS), Massachusetts Institute of Technology. He has also worked as a Data Scientist at Encored Inc. and was part of the Machine Learning Group at Intel. His research interests include decentralized control of large-scale complex cyber-physical systems, operation of electricity markets with renewable energy, optimal scheduling and control of networks serving real-time traffic, machine learning, game theory, stochastic control, multi-armed bandits, and reinforcement learning.