Conjugate priors allow for fast inference in large-dimensional vector autoregressive (VAR) models but, at the same time, introduce the restriction that each equation features the same set of explanatory variables. This paper proposes a straightforward means of postprocessing posterior estimates of a conjugate Bayesian VAR to effectively perform equation-specific covariate selection. Compared with existing techniques that use shrinkage alone, our approach combines shrinkage and sparsity in both the VAR coefficients and the error variance-covariance matrices, greatly reducing estimation uncertainty in large dimensions while maintaining computational tractability. We illustrate the approach with two applications: the first uses synthetic data to investigate the properties of the model across different data-generating processes, and the second analyzes the predictive gains from sparsification in a forecasting exercise for U.S. data.
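As a rough illustration of the postprocessing idea (a minimal sketch, not the paper's exact algorithm), one can sparsify a single posterior draw of the VAR coefficient matrix equation by equation with an L1 penalty, so that each equation may retain a different subset of covariates. The arrays `X`, `A_draw`, and the penalty `alpha` below are hypothetical placeholders.

```python
# Sketch: sparsify one posterior draw of VAR coefficients equation by equation.
# X is a (T x K) matrix of lagged regressors, A_draw a (K x M) posterior draw
# of the coefficient matrix; both are illustrative placeholders.
import numpy as np
from sklearn.linear_model import Lasso

def sparsify_draw(X, A_draw, alpha=0.1):
    """Return a sparsified copy of one posterior coefficient draw."""
    fitted = X @ A_draw                      # model-implied conditional means
    A_sparse = np.zeros_like(A_draw)
    for m in range(A_draw.shape[1]):         # loop over the M equations
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        lasso.fit(X, fitted[:, m])           # approximate the fit with few covariates
        A_sparse[:, m] = lasso.coef_
    return A_sparse

# Example with synthetic dimensions: T=200 observations, K=20 regressors, M=5 equations.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
A_draw = rng.standard_normal((20, 5)) * 0.1
A_sparse = sparsify_draw(X, A_draw, alpha=0.05)
print((A_sparse != 0).sum(axis=0))           # retained covariates per equation
```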