Distributed learning protocols are designed to train models on data that remains distributed across many machines, without gathering it all on a single centralized machine; this improves both the efficiency of the system and the privacy of the data.
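As a concrete, purely illustrative example of such a protocol, here is a minimal sketch of federated averaging, in which each machine trains locally and only model parameters, never raw data, are exchanged. The linear model, step sizes, and function names below are assumptions for the sketch, not details from the abstract.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=10):
    """Run a few local gradient steps for least squares on one client's data."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of (1/2n)||Xw - y||^2
        w = w - lr * grad
    return w

def federated_averaging(clients, dim, rounds=50):
    """Each round: every client trains locally from the current global model,
    then the server averages the returned parameters. Raw data never leaves
    a client; only model weights are exchanged."""
    w = np.zeros(dim)
    for _ in range(rounds):
        local_models = [local_update(w.copy(), X, y) for X, y in clients]
        w = np.mean(local_models, axis=0)  # server-side aggregation
    return w

# Toy usage: three clients holding data from the same underlying linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + 0.01 * rng.normal(size=40)
    clients.append((X, y))
print(federated_averaging(clients, dim=2))  # approx. [2, -1]
```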
This talk will start with an overview of the relatively young field of QBF proof complexity, explain the QBF proof system QU-Res, and assess existing lower bound techniques.
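For readers new to the area, the two derivation rules of QU-Res can be sketched as follows; this is the standard presentation from the literature, offered here for orientation rather than quoted from the talk.

```latex
% Resolution over a pivot x (in QU-Res the pivot x may be existential
% or universal; the weaker system Q-Res restricts x to be existential):
\[
  \frac{C_1 \lor x \qquad C_2 \lor \lnot x}{C_1 \lor C_2}
\]
% Universal reduction: a universal literal u may be dropped from a
% clause C provided no existential variable of C is quantified after
% u in the prefix:
\[
  \frac{C \lor u}{C}
\]
```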
The K-armed bandit problem is a sequential decision-making problem in which one repeatedly samples from a given set of K probability distributions (belonging to a known family), informally called the 'arms' of the bandit.
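As a minimal illustration of this sequential sampling, the sketch below plays Gaussian arms with the classical UCB1 index policy; both the Gaussian family and the choice of UCB1 are assumptions for the example, not part of the abstract.

```python
import math
import random

def ucb1(arm_means, horizon=10000, seed=0):
    """Play a K-armed Gaussian bandit with the UCB1 policy: pull each arm
    once, then always pull the arm maximizing empirical mean plus an
    exploration bonus that shrinks as the arm is sampled more often."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialization: sample every arm once
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = rng.gauss(arm_means[arm], 1.0)  # sample the chosen arm
        counts[arm] += 1
        sums[arm] += reward
    return counts

print(ucb1([0.1, 0.5, 0.9]))  # most pulls should go to the last (best) arm
```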
A set function f on the subsets of a set E is called submodular if it satisfies a natural diminishing-returns property: for any two subsets S \subseteq T \subseteq E and an element x \in E \setminus T, we have f(T + x) - f(T) \leq f(S + x) - f(S), where S + x denotes S \cup \{x\}.
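To make the definition concrete, a standard example of a submodular function (not taken from the abstract) is the coverage function f(S) = |union of the sets indexed by S|; the sketch below exhaustively checks the diminishing-returns inequality for it on a small instance.

```python
from itertools import combinations

# Ground set E indexes a family of sets; the coverage function
# f(S) = |union of sets[i] for i in S| is a classic submodular function.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}
E = set(sets)

def f(S):
    """Coverage: number of elements covered by the sets indexed by S."""
    return len(set().union(*(sets[i] for i in S)))

# Verify f(T + x) - f(T) <= f(S + x) - f(S) for all S subset of T, x outside T.
for r in range(len(E) + 1):
    for T in map(set, combinations(E, r)):
        for s in range(len(T) + 1):
            for S in map(set, combinations(T, s)):
                for x in E - T:
                    assert f(T | {x}) - f(T) <= f(S | {x}) - f(S)
print("diminishing returns holds on this instance")
```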
We consider a node-monitor pair, where updates that the node wishes to send to the monitor are generated stochastically at the node (according to a known distribution).
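A minimal simulation sketch of this setup follows; the exponential inter-generation times, the fixed transmission delay, and all names are illustrative assumptions, since the abstract only says the generation distribution is known.

```python
import random

def simulate(horizon=10.0, rate=1.0, delay=0.2, seed=1):
    """Generate updates at the node with exponential inter-generation
    times (an assumed 'known distribution') and deliver each update to
    the monitor after a fixed transmission delay."""
    rng = random.Random(seed)
    t = 0.0
    deliveries = []  # (delivery time at monitor, generation time at node)
    while True:
        t += rng.expovariate(rate)  # next update generated at the node
        if t > horizon:
            break
        deliveries.append((t + delay, t))
    return deliveries

for arrival, generated in simulate()[:5]:
    print(f"update generated at {generated:.2f}, received at {arrival:.2f}")
```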
This will be a tweaked version of my Qualifier talk. I will mostly focus on:
1. The notion of interventional distributions (as defined by Judea Pearl) and how they can be used to identify causal linkages; see the formula sketched after this list.
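For concreteness, Pearl's truncated-factorization formula (standard in his framework, included here only as background rather than quoted from the talk) expresses an interventional distribution for a causal DAG over variables V_1, ..., V_n:

```latex
\[
  P\bigl(v \mid \mathrm{do}(X = x)\bigr)
  \;=\; \prod_{i \,:\, V_i \notin X} P\bigl(v_i \mid \mathrm{pa}_i\bigr)
\]
% valid for values v consistent with X = x: the factors of the
% intervened-on variables are deleted from the observational
% factorization, and the remaining conditionals are evaluated
% with X held fixed at x.
```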
The paper, "Improved Bounds for Perfect Sampling of k-Colorings in Graphs," jointly authored by Siddharth Bhandari and Sayantan Chakraborty, received the Best Student