Rank Favoritism?

Posted on Feb 1, 2012 | 3 Comments

Unless we’re absolutely sure we’re at the top, most of us don’t like being ranked against our peers. But many companies like to rank their employees from best to worst, promoting the top and even going so far as to fire the bottom.

“The method, sometimes called ‘rank and yank,’ was pioneered by Jack Welch when he ran General Electric Co. from 1981 to 2001, and was rapidly adopted by other firms,” The Wall Street Journal explained this week. “Today, an estimated 60% of Fortune 500 firms still use some form of the ranking, though they might use gentler-sounding names like ‘talent assessment system’ or ‘performance procedure.’”

Of course, the system has its detractors. “Critics of forced ranking say that it demoralizes workers and fosters backstabbing and favoritism,” the Journal noted.

So, is forced ranking something we should stop? Peter Drucker (who, as we’ve pointed out, had very strong ideas about performance reviews) would probably say no. In The Frontiers of Management, Drucker observed that most companies (this was in 1985) didn’t like to measure productivity among white-collar workers, claiming that such an assessment was nearly impossible. “But this is simply not true,” Drucker countered. “The yardsticks we have may be crude, but they are perfectly adequate.” And they had to be deployed, for on the global stage “the competitive battle will be won or lost by white-collar productivity.”

He was also a fan of Welch’s, crediting him with having “the courage of a lion.” In Management Challenges for the 21st Century, Drucker wrote approvingly that General Electric, during Welch’s tenure, had “created more wealth than any other company in the world” and that much of the credit was due to its success in organizing information about performance. A “focus on the innovative performance of the business” became a “major factor in determining compensation and bonuses of the general manager and of senior management people of a business unit,” and, similarly, an analytical look at performance data was helpful in “deciding on the promotion of an executive, and especially of the general manager of a business unit.”

Still, Drucker did sound a cautionary note about performance measurements: They only work if they quantify the right things. “Businesses usually define performance too narrowly—as the financial bottom line,” Drucker wrote. “If that’s all you have as a performance measurement and performance goal in the business, you are not likely to do well or survive very long.”

What do you think: Should we keep ranking and yanking, or is it time to yank ranking?


  1. Sergio
    February 2, 2012

    If we remove ‘ranking,’ then how can we really understand the health and performance of an organization?

    In the true sense of the word, ‘management’ is inward-looking, and Drucker reminds us that measuring management performance requires looking across the following:

    *Market standing
    *Development of people
    *Financial results

    I see the ‘ranking’ strategy as a way to assess individual and team performance across these categories. It serves to provide the feedback the individual, team and organization need in order to improve. The ‘yanking’ strategy should then incorporate this feedback to define the most effective way to guarantee that improvement. Eliminating the worst performers from the organization is one way to do it, but there are others, and which is best will vary with the context.

  2. Left Sikalidis
    February 12, 2012

    True story: a director in a multinational company operating in Europe, well respected among his industry colleagues, was ranked below baseline for three years in a row by general management for not reaching the company’s desired financial objectives. This was quite irritating for him, his colleagues and his team, and it created several cooperation issues with his coworkers as well. His ranking kept slipping lower, year by year. The situation seemed irreversible and his future within the organization unstable. In the fourth year, the director landed one of the accounts he had been pursuing for the previous three years, bringing his company 33% of its total annual revenue and huge potential for the years to come.

    Beyond “congratulations” and “thank you” messages, and bonuses paid as part of his contract, nothing changed. Coworkers kept their perception of him, and when he asked the GM to make him a partner, as had been promised, the answer was a “sorry, but our priorities have changed” attitude. Finally, when it became more than obvious to him that the GM was using the ranking procedure to balance scores among department directors, he quit his job after six years.

    This incident may not establish a general rule about how things work, but it shows how they can turn out. People are affected by external and internal circumstances, and ranking cannot accurately capture this. We cannot rely on closed, repetitive measurement techniques, because people are not machines, and if they are treated like machines, organizations lose their most needed tool: the ability to think and create. Furthermore, the perceptions and biases these systems create are unpredictable and rarely come to light.

    In the best-case scenario, a successful ranking system achieves 95% accuracy (most are far less accurate). Does anyone take into account the impact of the remaining 5% in a 10,000-employee organization? By a conservative estimate, it may affect up to 20% of the people in the organization, in unpredictable ways, depending on structure and other parameters.

    The greatest risk these systems face, in their creation, implementation, and analysis (beyond systemic risks), is the human factor. The question, therefore, is how we might redesign ranking systems to take people’s thoughts and concerns into account and work in their favor, rather than merely mirroring the organization’s objectives.

    I am not trying to reinvent the wheel here. All I am saying is that we should focus on alternative ways to boost employees’ performance, rather than simply evaluating their success.

  3. Edaw
    March 30, 2014

    True story: Because of stack ranking, one manager who had a close relationship with a director ended up at the bottom of the stack and was about to be fired. However, he and his director figured out that all they had to do was add somebody to the team, work with HR (which functions as a tool for senior management), declare the new hire to be at the bottom of the stack, and fire the new hire instead. That is how stack ranking works in the real business world.
