Algorithms working on large data (say of size n) should be both space and time efficient. Any algorithm has to read the whole data at least once, so the time required is at least n (see sublinear-time algorithms for exceptions).
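To illustrate the exception, here is a minimal sketch in Python of a sublinear-time estimator; the function name and parameters are illustrative, not taken from any particular algorithm in the talk. It inspects only a small random sample of the input instead of all n entries, and by a Chernoff bound its answer is close to the truth with high probability.

import random

def estimate_fraction_of_zeros(data, samples=1000, seed=None):
    """Estimate the fraction of zero entries in `data` by random sampling.

    Inspects only `samples` positions instead of all n, so the running time
    is independent of n; by a Chernoff bound the estimate is within a small
    additive error of the true fraction with high probability.
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples) if data[rng.randrange(len(data))] == 0)
    return hits / samples

if __name__ == "__main__":
    n = 10**6
    data = [0] * (n // 4) + [1] * (3 * n // 4)  # true fraction of zeros is 0.25
    print(estimate_fraction_of_zeros(data, samples=2000, seed=42))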
Petri net theory has nice theorems which relate subclasses of Petri nets defined by structural conditions -- for example T-nets (also known as marked graphs) and free choice nets -- to their behavioural properties, such as liveness and boundedness.
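As a concrete illustration of what a structural condition looks like, here is a small Python sketch (the data structures and names are my own, not a standard library) of a Petri net with a marking and a firing rule, together with a check of the T-net condition that every place has exactly one input and one output transition.

# A minimal Petri net sketch: a net is given by transitions mapping to
# (input places, output places); a marking assigns a token count to each place.

def enabled(marking, inputs):
    """A transition is enabled if every input place holds at least one token."""
    return all(marking[p] >= 1 for p in inputs)

def fire(marking, inputs, outputs):
    """Fire a transition: consume one token per input place, produce one per output place."""
    new = dict(marking)
    for p in inputs:
        new[p] -= 1
    for p in outputs:
        new[p] += 1
    return new

def is_t_net(transitions, places):
    """Structural condition for T-nets (marked graphs):
    every place has exactly one input and one output transition."""
    producers = {p: 0 for p in places}
    consumers = {p: 0 for p in places}
    for inputs, outputs in transitions.values():
        for p in inputs:
            consumers[p] += 1
        for p in outputs:
            producers[p] += 1
    return all(producers[p] == 1 and consumers[p] == 1 for p in places)

# A two-place cycle: t1 moves the token from p1 to p2, t2 moves it back.
places = ["p1", "p2"]
transitions = {"t1": (["p1"], ["p2"]), "t2": (["p2"], ["p1"])}
marking = {"p1": 1, "p2": 0}
print(is_t_net(transitions, places))      # True
print(fire(marking, *transitions["t1"]))  # {'p1': 0, 'p2': 1}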
Computer scientists devise randomized algorithms when they cannot find good deterministic ones. They then try to reduce the amount of randomness used while still proving that the algorithm answers correctly with high probability.
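A classical illustration of such a guarantee (not necessarily an algorithm discussed in the talk) is Freivalds' randomized check of a matrix product: each round uses only n random bits, runs in O(n^2) time, and the error probability drops exponentially in the number of rounds. A minimal Python sketch:

import random

def freivalds_check(A, B, C, rounds=20, seed=None):
    """Freivalds' randomized check of whether A @ B == C for n x n matrices.

    Each round picks a random 0/1 vector r and compares A(Br) with Cr in O(n^2)
    time; if A @ B != C, a single round detects it with probability at least 1/2,
    so the error probability after `rounds` rounds is at most 2**(-rounds).
    """
    rng = random.Random(seed)
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(rounds):
        r = [rng.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # definitely A @ B != C
    return True  # A @ B == C with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds_check(A, B, [[19, 22], [43, 50]], seed=1))  # True
print(freivalds_check(A, B, [[19, 22], [43, 51]], seed=1))  # almost certainly False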
In a landmark paper, Papadimitriou introduced several syntactic subclasses of the search class TFNP (Total Function Nondeterministic Polynomial), each defined by the style of the argument proving totality; unlike TFNP itself, these subclasses admit complete problems.
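For reference, the underlying search class can be stated as follows (the notation here is mine): TFNP consists of the polynomial-time decidable, polynomially balanced relations that are total, i.e. every instance has at least one solution.

\[
\mathrm{TFNP} \;=\; \Bigl\{\, R \subseteq \{0,1\}^{*} \times \{0,1\}^{*} \;\Bigm|\;
R \in \mathrm{P},\;\;
(x,y) \in R \Rightarrow |y| \le \mathrm{poly}(|x|),\;\;
\forall x\, \exists y\, (x,y) \in R \,\Bigr\}
\]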
We will discuss a lower bound (due to Noga Alon) on the rank of any real matrix in which all the diagonal entries are significantly larger (in absolute value) than all the other entries.
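As a warm-up in the same spirit (this is the standard trace trick, not Alon's sharper theorem), suppose A is a real symmetric n x n matrix with a_{ii} = 1 and |a_{ij}| \le \varepsilon for i \ne j. Then

\[
\operatorname{rank}(A) \;\ge\; \frac{(\operatorname{tr} A)^{2}}{\operatorname{tr}(A^{2})}
\;\ge\; \frac{n^{2}}{\,n + n(n-1)\varepsilon^{2}\,}
\;=\; \frac{n}{1+(n-1)\varepsilon^{2}},
\]

since \operatorname{tr} A = \sum_i \lambda_i, \operatorname{tr}(A^{2}) = \sum_i \lambda_i^{2}, and Cauchy-Schwarz over the nonzero eigenvalues gives (\sum_i \lambda_i)^{2} \le \operatorname{rank}(A) \sum_i \lambda_i^{2}. The bound discussed in the talk is stronger; the inequality above just shows how the trace already controls the rank.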
We will see the definition of Probably Approximately Correct (PAC) learning. Then we will prove that it is easy to learn rectangles and conjunctions but hard to learn 3-term disjunctions.
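Here is a minimal Python sketch (names are illustrative) of the standard PAC learner for axis-aligned rectangles: output the tightest rectangle enclosing all positively labelled examples.

def learn_rectangle(samples):
    """samples: list of ((x, y), label) pairs with label True for positives.

    Returns (x_min, x_max, y_min, y_max), the smallest axis-aligned rectangle
    containing every positive example; with enough i.i.d. samples this
    hypothesis has small error with high probability over the sample.
    """
    positives = [p for p, label in samples if label]
    if not positives:
        return None  # empty hypothesis: classify everything as negative
    xs = [x for x, _ in positives]
    ys = [y for _, y in positives]
    return (min(xs), max(xs), min(ys), max(ys))

def predict(rect, point):
    if rect is None:
        return False
    x_min, x_max, y_min, y_max = rect
    x, y = point
    return x_min <= x <= x_max and y_min <= y <= y_max

# Toy usage: the target concept is the rectangle [0, 1] x [0, 1].
samples = [((0.2, 0.3), True), ((0.9, 0.8), True), ((1.5, 0.5), False), ((-0.3, 0.2), False)]
rect = learn_rectangle(samples)
print(rect)                       # (0.2, 0.9, 0.3, 0.8)
print(predict(rect, (0.5, 0.5)))  # True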