Thursday, October 4, 2012

More ways to sabotage selection

Yesterday we saw how weighting the different measures you combine to rate applicants for jobs or promotions or school placements or grants can end up undermining your ratings. The measures to which you assign the highest weight end up having almost all the influence on selection, while the other measures end up with none.

There are times, though, when people don't intend to weight their measures but end up weighting them inadvertently anyway. For example, if you measure one characteristic on a scale of 10 and another on a scale of 5, the measure with a maximum score of 10 will end up having more influence (barring extraordinary and very rare circumstances).
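
A quick illustration, with made-up scores, shows how this happens. Here's a sketch in Python, though you can verify it just as easily by hand:

    # Hypothetical scores: one measure is out of 10, the other out of 5.
    # Each applicant is at the top of one scale and the bottom of the other.
    a_total = 10 + 1   # A: top of the 10-point measure, bottom of the 5-point
    b_total = 1 + 5    # B: bottom of the 10-point measure, top of the 5-point

    print(a_total, b_total)   # 11 6 -- the 10-point measure decides the ranking

The two applicants' performances are mirror images of each other, but A wins easily, simply because A's strength was measured on the bigger scale.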

That problem's easy to deal with: just make sure that all your measures have scales with the same maximum score. A second source of inadvertent weighting is more difficult to deal with: differences in variability can also weight the measures accidentally.

Some of your measures will almost always vary over a wider range than others. The statistic most widely used to assess variability is the standard deviation. The bigger the standard deviation, the more variable the scores. An example will demonstrate the problem differences in variability create.
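
For instance, here's a short Python check using two made-up sets of scores, both with a mean of 65:

    from statistics import pstdev

    test1 = [57, 61, 65, 69, 73]   # clustered tightly around 65
    test2 = [33, 49, 65, 81, 97]   # spread much more widely around 65

    print(pstdev(test1))   # about 5.7 -- small standard deviation
    print(pstdev(test2))   # about 22.6 -- large standard deviation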

Let's suppose that a professor gives two tests in a course, each of which is to count for 50% of the final mark. The first test has a mean of 65 and a standard deviation of 8, while the second has a mean of 65 and a standard deviation of 16. The problem with these statistics is that two students can do equally well but end up with different final marks. We'll look at two students' possible results.

The first student finishes one standard deviation above the mean on the first test and right at the mean on the second. That is, her marks are 73 and 65, and her final mark is (73 + 65)/2, or 69. The second student finishes at the mean on the first test and one standard deviation above the mean on the second. That is, her marks are 65 and 81, and her final mark is (65 + 81)/2, or 73. So, even though each student finished at the mean on one test and one standard deviation above the mean on the other, one ended up with a higher mark than the other.
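
If you want to check the arithmetic, a few lines of Python reproduce it:

    # Both tests have a mean of 65; the standard deviations are 8 and 16.
    mean1, sd1 = 65, 8
    mean2, sd2 = 65, 16

    student1 = ((mean1 + sd1) + mean2) / 2   # (73 + 65) / 2 = 69.0
    student2 = (mean1 + (mean2 + sd2)) / 2   # (65 + 81) / 2 = 73.0

    print(student1, student2)   # 69.0 73.0 -- equal performances, unequal marks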

To eliminate this bias you can calculate standard scores. You simply subtract the mean from each applicant's score and divide the result by the standard deviation. That gives you a standard score with a mean of zero; applicants with scores above the mean will have positive standard scores and applicants with scores below the mean will have negative ones. If that sounds complicated, it's not. Spreadsheets will do it for you; in Excel you use the AVERAGE function to get the mean and the STDEV function to get the standard deviation (there is a STANDARDIZE function, but since it requires you to enter the mean and standard deviation it's no faster than writing the formula yourself).
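
In code the fix is just as short. Here's a sketch in Python of the same calculation a spreadsheet would do, applied to the two students from the example:

    def z(score, mean, sd):
        """Standard score: distance from the mean in standard deviations."""
        return (score - mean) / sd

    # Test 1: mean 65, SD 8. Test 2: mean 65, SD 16.
    student1 = (z(73, 65, 8) + z(65, 65, 16)) / 2   # (1.0 + 0.0) / 2 = 0.5
    student2 = (z(65, 65, 8) + z(81, 65, 16)) / 2   # (0.0 + 1.0) / 2 = 0.5

    print(student1 == student2)   # True -- the two students now tie, as they should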

Even if that still seems like a lot of work to you, the choice is clear: either you do the work or you sabotage your ratings. If you sabotage your ratings you sabotage your selection, and if you sabotage your selection you sabotage your organization (and maybe others, if you're doing something like selecting outside applicants for grants).

For more information about standardization, click here for the first of a series of brief articles. Alternatively, the next time you're compiling ratings you can involve staff with statistical training, or a consultant.

More Ways to Sabotage Selection © 2012, John FitzGerald
