Information-Theoretic Lower Bounds for Compressive Sensing With Generative Models
It has recently been shown that for compressive sensing, significantly fewer measurements may be required if the sparsity assumption is replaced by the assumption that the unknown vector lies near the range of a suitably chosen generative model. In particular, in (Bora et al., 2017) it was shown that roughly O(k log L) random Gaussian measurements suffice for accurate recovery when the generative model is an L-Lipschitz function with bounded k-dimensional inputs, and O(kd log w) measurements suffice when the generative model is a k-input ReLU network with depth d and width w.
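As a concrete illustration of the setting described above, the following is a minimal sketch (not taken from the paper) of recovery with a generative prior: the unknown vector lies in the range of a k-input ReLU network G, measurements are random Gaussian projections, and decoding minimizes the measurement misfit over the latent input, in the spirit of the optimization-based decoder of Bora et al. (2017). All network sizes, step sizes, and iteration counts are illustrative assumptions.

```python
import torch

torch.manual_seed(0)

# Illustrative dimensions: latent dim k, width w, depth d,
# ambient dim n, number of measurements m.
k, w, d, n, m = 5, 20, 3, 100, 25

# G: a random (untrained) k-input ReLU network with depth d and width w,
# standing in for a suitably chosen generative model.
dims = [k] + [w] * (d - 1) + [n]
layers = []
for i in range(d):
    layers.append(torch.nn.Linear(dims[i], dims[i + 1]))
    if i < d - 1:
        layers.append(torch.nn.ReLU())
G = torch.nn.Sequential(*layers)
for p in G.parameters():
    p.requires_grad_(False)  # the generative model is fixed

# Ground-truth signal in the range of G, and m random Gaussian measurements.
z_true = torch.randn(k)
x_true = G(z_true)
A = torch.randn(m, n) / m ** 0.5
y = A @ x_true

# Decode by gradient descent on the latent input:
# minimize ||y - A G(z)||^2 over z.
z = torch.randn(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.sum((y - A @ G(z)) ** 2)
    loss.backward()
    opt.step()

print("relative reconstruction error:",
      (torch.norm(G(z) - x_true) / torch.norm(x_true)).item())
```

Note that m = 25 measurements here is far smaller than the ambient dimension n = 100, reflecting the regime in which the generative prior, rather than sparsity, makes recovery possible.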