In this thesis I consider three unconventional estimators and augment them with statistical guarantees. First, I consider a cone projected power iteration algorithm designed to recover the first principal eigenvector from a noisy positive semidefinite matrix. When the true principal eigenvector is assumed to belong to a convex cone, the proposed algorithm is fast and has a tractable error. Specifically, the method achieves polynomial time complexity for certain convex cones equipped with a fast projection, such as the monotone cone, and it attains a small error when the noisy matrix has a small cone-restricted operator norm. These results are supplemented with a minimax lower bound on the error under the spiked covariance model. Numerical experiments on simulated and real data show that the proposed method achieves shorter run time and smaller error than ordinary power iteration and some sparse principal component analysis algorithms when the principal eigenvector lies in a convex cone.
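To make this concrete, the following is a minimal Python sketch of a cone projected power iteration, illustrated with the monotone cone, whose Euclidean projection reduces to isotonic regression. The initialization, stopping rule, tuning constants, and function names are illustrative choices under these simplifying assumptions, not necessarily the exact algorithm analyzed in the thesis.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression


def monotone_cone_projection(v):
    """Euclidean projection onto the monotone (non-decreasing) cone, i.e. isotonic regression."""
    return IsotonicRegression(increasing=True).fit_transform(np.arange(len(v)), v)


def cone_projected_power_iteration(A, project, n_iter=200, tol=1e-8, seed=0):
    """Power iteration in which every matrix-vector product is projected back onto a convex cone."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    v = project(rng.standard_normal(n))
    nrm = np.linalg.norm(v)
    v = v / nrm if nrm > 0 else np.full(n, 1 / np.sqrt(n))  # fall back to a constant vector
    for _ in range(n_iter):
        w = project(A @ v)
        nrm = np.linalg.norm(w)
        if nrm == 0:                      # projection collapsed; return the current iterate
            return v
        w /= nrm
        if np.linalg.norm(w - v) < tol:   # simple stopping rule
            return w
        v = w
    return v


# Example: a noisy spiked covariance observation with a monotone leading eigenvector.
rng = np.random.default_rng(1)
n = 50
v_star = np.sort(rng.standard_normal(n))
v_star /= np.linalg.norm(v_star)
noise = rng.standard_normal((n, n))
A_noisy = 5.0 * np.outer(v_star, v_star) + 0.5 * (noise + noise.T) / 2
v_hat = cone_projected_power_iteration(A_noisy, monotone_cone_projection)
```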
Second, I develop an abstract procedure for debiasing constrained or regularized, potentially high-dimensional, linear models. The proposed procedure can produce $1/\sqrt{n}$-confidence intervals for individual coordinates (or even bounded contrasts) in models with unknown covariance, provided that the covariance has a bounded spectrum.
Examples where the proposed algorithm can be implemented in practice are given. One fairly general class of instances amenable to the procedure is convex constrained least squares. I translate the procedure into an abstract algorithm over this class of models and give concrete examples where efficient polynomial time methods for debiasing exist; these include the constrained version of the LASSO, regression under monotone constraints, regression under positive monotone constraints, and non-negative least squares. In addition, the proposed abstract procedure can be applied to efficiently debias SLOPE and square-root SLOPE, among other popular regularized procedures, under certain assumptions. Thorough simulation results are provided in support of the theoretical findings.
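As a point of reference for the kind of guarantee targeted here, the snippet below sketches the classical one-step debiasing correction for the LASSO under a strong simplifying assumption of an isotropic (identity-covariance) design, which yields a confidence interval of width of order $1/\sqrt{n}$ for a single coordinate. This is the standard debiased LASSO recipe, not the abstract procedure developed in the thesis; the tuning parameter and the crude noise-level estimate are illustrative.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import Lasso


def debiased_lasso_ci(X, y, j, alpha=0.05, lam=None):
    """Classical one-step debiased LASSO confidence interval for coordinate j,
    assuming an isotropic design (population covariance equal to the identity)."""
    n, p = X.shape
    if lam is None:
        lam = np.sqrt(2 * np.log(p) / n)          # textbook order of the tuning parameter
    beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    resid = y - X @ beta_hat
    beta_deb = beta_hat + X.T @ resid / n         # debiasing step with M = identity
    sigma_hat = np.std(resid, ddof=1)             # crude plug-in estimate of the noise level
    half_width = stats.norm.ppf(1 - alpha / 2) * sigma_hat / np.sqrt(n)
    return beta_deb[j] - half_width, beta_deb[j] + half_width
```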
Finally, I derive finite sample guarantees for the Chebyshev, a.k.a. $\ell_\infty$, estimator in linear regression when the noise is known to be uniformly distributed. Under a mild assumption on the design matrix, the error rate is of order $Q(p)/n$, where $Q(p)$ is a polynomial in the dimension $p$. In addition, I extend the Chebyshev estimator to high-dimensional settings by incorporating an $\ell_1$ penalty, and provide convergence guarantees for the resulting estimator. The high-dimensional Chebyshev estimator achieves a faster rate than the usual $\ell_1$-penalized least squares estimator under uniformly distributed noise and sparsity of the signal vector.
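For concreteness, the sketch below shows the plain (low-dimensional) Chebyshev estimator, which minimizes the largest absolute residual and can be written as a small linear program; the $\ell_1$-penalized high-dimensional variant adds a penalty on the coefficients and is not shown. The solver choice and function name are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog


def chebyshev_estimator(X, y):
    """Chebyshev (l_inf) regression: minimize the largest absolute residual via a linear program."""
    n, p = X.shape
    c = np.r_[np.zeros(p), 1.0]                    # objective: minimize the slack t
    A_ub = np.block([[X, -np.ones((n, 1))],        #  X beta - t <=  y
                     [-X, -np.ones((n, 1))]])      # -X beta - t <= -y, i.e. |y - X beta| <= t
    b_ub = np.r_[y, -y]
    bounds = [(None, None)] * p + [(0, None)]      # beta free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]
```

Casting the problem as a linear program keeps the estimator computable with off-the-shelf solvers even for moderately large $n$ and $p$.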