In a wide range of applications it is required to replace an empirically obtained unit diagonal indefinite symmetric matrix with a valid correlation matrix (a unit diagonal positive semidefinite matrix). A popular replacement is the nearest correlation matrix in the Frobenius norm. The first method for computing the nearest correlation matrix with guaranteed convergence was the alternating projections method proposed by Higham in 2002. The rate of convergence of this method is at best linear, and it can require a large number of iterations to converge to within a given tolerance. Although a faster globally convergent Newton algorithm was subsequently developed by Qi and Sun in 2006, the alternating projections method remains very widely used. We show that Anderson acceleration, a technique for accelerating the convergence of fixed-point iterations, can be applied to the alternating projections method and that in practice it brings a significant reduction in both the number of iterations and the computation time. We also show that Anderson acceleration remains effective, and indeed can provide even greater improvements, when it is applied to the variants of the nearest correlation matrix problem in which specified elements are fixed or a lower bound is imposed on the smallest eigenvalue. This is particularly significant for the variant with fixed elements, for which no Newton method with guaranteed convergence is available.

Both methods for computing the nearest correlation matrix are based on repeated eigenvalue decompositions, so they can be infeasible in time-critical situations. We have recently proposed an alternative method, called shrinking, for restoring definiteness to an indefinite matrix. The method is based on computing the optimal parameter in a convex linear combination of the indefinite starting matrix and a chosen positive definite target matrix.
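As an illustrative sketch (not the authors' implementation), the alternating projections method for the nearest correlation matrix can be written in NumPy as follows. It alternates a projection onto the positive semidefinite cone (via an eigendecomposition) with a projection onto the unit diagonal matrices, using Dykstra's correction as in Higham's 2002 algorithm; the function name, tolerance, and iteration cap are arbitrary choices for the example:

```python
import numpy as np

def nearcorr_ap(A, tol=1e-8, max_iter=1000):
    """Sketch of alternating projections (with Dykstra's correction)
    for the nearest correlation matrix to a symmetric matrix A."""
    Y = A.copy()
    dS = np.zeros_like(A)           # Dykstra's correction term
    for _ in range(max_iter):
        R = Y - dS
        # Project R onto the positive semidefinite cone
        w, V = np.linalg.eigh(R)
        X = (V * np.maximum(w, 0)) @ V.T
        dS = X - R
        # Project X onto the symmetric matrices with unit diagonal
        Y_new = X.copy()
        np.fill_diagonal(Y_new, 1.0)
        if np.linalg.norm(Y_new - Y, 'fro') <= tol * np.linalg.norm(Y_new, 'fro'):
            return Y_new
        Y = Y_new
    return Y
```

Each iteration costs one eigendecomposition, which is what makes the linear convergence rate expensive in practice and motivates both Anderson acceleration and the Newton method.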
We show how the optimal shrinking parameter can be computed either by the bisection method or by posing the problem as a generalized eigenvalue problem, and we demonstrate how exploiting positive definiteness in these two methods leads to impressive computational savings. The work on these two topics is joint with Nicholas J. Higham and, for shrinking, with Vedran Šego.
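A minimal sketch of the bisection approach, assuming the identity as the positive definite target matrix (any suitable target could be substituted): the optimal parameter is the smallest alpha in [0, 1] for which S(alpha) = alpha*T + (1 - alpha)*M0 is positive semidefinite, and each bisection step tests definiteness with an attempted Cholesky factorization rather than a full eigendecomposition:

```python
import numpy as np

def shrink_bisection(M0, T=None, tol=1e-8):
    """Sketch of bisection for the optimal shrinking parameter: the
    smallest alpha in [0, 1] with alpha*T + (1 - alpha)*M0 positive
    semidefinite.  T defaults to the identity (an assumption here)."""
    n = M0.shape[0]
    if T is None:
        T = np.eye(n)

    def is_psd(X):
        # A Cholesky attempt on a tiny diagonal shift of X tests
        # (near) positive semidefiniteness cheaply.
        try:
            np.linalg.cholesky(X + 10 * np.finfo(float).eps * np.eye(n))
            return True
        except np.linalg.LinAlgError:
            return False

    lo, hi = 0.0, 1.0   # S(0) = M0 may be indefinite; S(1) = T is definite
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_psd(mid * T + (1 - mid) * M0):
            hi = mid
        else:
            lo = mid
    return hi
```

The Cholesky test succeeds or fails early on an indefinite matrix, which is one way positive definiteness can be exploited for computational savings; the generalized eigenvalue formulation mentioned above is a separate route not shown here.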