For a perceptron algorithm with a fixed learning rate r, prove
or disprove the following: if the classes are not linearly separable,
the iterations may not converge. Can you modify the learning rate r
(for example, by making it depend on the iteration number) so that
the iterations converge in expectation? Explain.
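A minimal sketch to experiment with before writing the proof (the XOR labelling and the cycle-detection bookkeeping are my own choices, not part of the exercise): with a fixed rate, the mistake-driven perceptron update on a non-separable data set can revisit an earlier weight state, after which it cycles forever and hence never converges.

```python
def train_fixed_rate(points, labels, r=1.0, max_epochs=100):
    """Perceptron with a fixed learning rate r.

    Records the weight state at the end of every full pass over the data.
    Returns (epoch, state) the first time an end-of-epoch state repeats,
    which shows the iterates cycle forever, or (None, state) otherwise.
    """
    w1 = w2 = b = 0.0
    seen = {}
    for epoch in range(1, max_epochs + 1):
        for (x1, x2), y in zip(points, labels):
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else -1
            if pred != y:            # classic mistake-driven update
                w1 += r * y * x1
                w2 += r * y * x2
                b += r * y
        state = (w1, w2, b)
        if state in seen:
            return epoch, state      # cycle: state already visited
        seen[state] = epoch
    return None, (w1, w2, b)

# XOR labelling of the unit square: provably not linearly separable.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
ys = [-1, 1, 1, -1]
cycle_epoch, state = train_fixed_rate(pts, ys)
```

Running this, a cycle is detected within a few epochs. For the second part of the question, a natural modification to try is a decaying rate r_t (in the spirit of Robbins–Monro step-size conditions), which shrinks each update as t grows.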
Maximum-weight independent set.
(a) Prove that the two greedy algorithms (i.e., Greedy
and Greedy 2) are equivalent. That is, they return the
same output for a given input.
(b) Prove that Greedy 2 returns an optimal solution to the
maximum-weight independent set problem by applying the
lemmata.
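Greedy and Greedy 2 are defined elsewhere in the assignment, so no code is given for them here. Assuming the path-graph formulation of maximum-weight independent set (no two adjacent vertices may both be chosen, weights nonnegative), a dynamic-programming reference solution such as the sketch below (the name `mwis_path` is mine) can be used to sanity-check either greedy's output on small inputs:

```python
def mwis_path(weights):
    """Reference DP for maximum-weight independent set on a path graph.

    weights[i] is the (nonnegative) weight of vertex i; adjacent vertices
    i and i+1 may not both be chosen. Returns (best_weight, chosen_vertices).
    """
    n = len(weights)
    best = [0] * (n + 1)       # best[i] = optimum over the first i vertices
    if n:
        best[1] = weights[0]
    for i in range(2, n + 1):
        best[i] = max(best[i - 1],                   # skip vertex i-1
                      best[i - 2] + weights[i - 1])  # take vertex i-1
    chosen, i = [], n          # walk the table backwards to reconstruct
    while i >= 1:
        if best[i] == best[i - 1]:
            i -= 1             # vertex i-1 was skipped
        else:
            chosen.append(i - 1)
            i -= 2             # vertex i-1 was taken; i-2 is forced out
    return best[n], sorted(chosen)
```

For example, on weights [1, 4, 5, 4] the optimum picks vertices 1 and 3 for total weight 8; a greedy that grabs the heaviest vertex (weight 5) first would be blocked from both of its neighbors and do worse.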
Prove or disprove that the union of two subspaces is a subspace.
If it is not a subspace in general, what is the smallest subspace
containing the union of the two subspaces?
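One standard counterexample to check a proof attempt against, written as a LaTeX sketch (the choice of the two coordinate axes in $\mathbb{R}^2$ is mine):

```latex
Let $U=\operatorname{span}\{e_1\}$ and $W=\operatorname{span}\{e_2\}$
in $\mathbb{R}^2$. Then $e_1,e_2\in U\cup W$, yet
$e_1+e_2\notin U\cup W$, so the union is not closed under addition
and hence is not a subspace. The smallest subspace containing
$U\cup W$ is the sum
\[
  U+W=\{\,u+w : u\in U,\ w\in W\,\}=\operatorname{span}(U\cup W),
\]
which equals $\mathbb{R}^2$ in this example.
```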