
This assumes that everyone within such a large group would end up ranked accurately.

But the truth is that when you assemble such a large group, you end up comparing apples to oranges. Jane works on a doomed software project and valiantly manages to salvage some of it for other efforts, but management ultimately looks at her project as a failure and has trouble disassociating her work from the result. Meanwhile Bob works on a fairly straightforward hardware project that gets completed on time and within budget, but not necessarily as a result of his actions, and looks like a "team player" who gets rewarded as such.

Assigning a single scalar value ("top $x%") to each of these people necessitates ignoring a huge number of important differences, both personal and contextual. Small groups with similar functions allow you to make more meaningful comparisons, but only at the cost of ruining the statistical basis for making those comparisons in the first place.
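The statistical side of this is easy to see with a toy simulation (all parameters here are hypothetical, chosen only for illustration): give each person a "true" ability, add contextual noise of comparable size to produce an observed rating, and check how often the top-rated person is actually the strongest. As the pool grows, that probability drops sharply.

```python
import random

random.seed(42)

def top_pick_accuracy(group_size, noise_sd, trials=20000):
    """Fraction of trials in which the noisy rating actually
    identifies the genuinely strongest member of the group."""
    hits = 0
    for _ in range(trials):
        # Hypothetical true ability, normally distributed (sd = 1).
        skill = [random.gauss(0, 1) for _ in range(group_size)]
        # Observed rating = ability + contextual noise (project luck,
        # management perception, etc.), same order of magnitude.
        rating = [s + random.gauss(0, noise_sd) for s in skill]
        if rating.index(max(rating)) == skill.index(max(skill)):
            hits += 1
    return hits / trials

for n in (5, 20, 100):
    print(n, top_pick_accuracy(n, noise_sd=1.0))
```

The point is not the exact numbers but the trend: the ranking always beats blind chance (1/n), yet the larger and more heterogeneous the pool, the less often the forced ranking crowns the right person.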

The whole problem with forcing a predetermined model on your organization is that performance appraisal should be an empirical process: the observations peers and management make should determine the model that is ultimately adopted (and that will guide future hiring/firing decisions). Imposing the model up front gives you no data, and leads to a self-fulfilling prophecy at best and to corruption and politicking at worst.


