This was a study? Really? Isn't this common sense? I suppose I should be glad that someone set out to prove a point that critics have been making since value-added measures were first discussed.
The study's abstract says:
We find that failing to account for tracks leads to large biases in teacher value-added estimates. A teacher of all lower track courses whose measured value-added is at the 50th percentile could increase her measured value-added to the 99th percentile simply by switching to all upper-track courses. We estimate that 75-95 percent of the bias is due to student sorting and the remainder due to test misalignment. We also decompose the remaining bias into two parts, metric and multidimensionality misalignment, which work in opposite directions. Even after accounting for explicit tracking, the standard method for estimating teacher value-added may yield biased estimates.