David Letterman isn’t the only one crazy for top ten lists; corporate sustainability rankings have become one way companies burnish their green credentials for consumers. For example, the Corporate Knights, a research group that also publishes a magazine about sustainability, released a list of top sustainable companies that looks very different from 3p’s own Sustainable CEOs list. In fact, none of the companies on 3p’s list made it onto the Corporate Knights list. This article looks at why sustainability metrics differ and whose metrics to trust.
Methodology is key
The greatest, and most obvious, reason the metrics diverge is methodology. With so many NGOs and consulting firms publishing ranked lists, their purposes, funding, reputation and methodology all matter.
For example, 3p’s Top Ten Sustainable CEOs list was the result of a straw poll among 3p readers. It makes sense, then, that the poll favors companies 3p covers frequently, companies based in the US (where many readers live), and companies that reflect other 3p reader demographics. It is not surprising, for instance, that 3p readers voted small-cap CEOs into the top spots, since that is the world many 3p readers come from.
The Corporate Knights’ results also mirrored their methodology. Their list integrated a variety of proprietary metrics on an international scale and focused almost exclusively on midcap firms and larger. Overlap between the 3p list and the Corporate Knights list would have suggested a clear winner, but given the differences in methodology, there is no logical reason to expect one.
Disclosure is key
As with any statistics, gauging a ranking’s credibility and meaning depends on understanding the data behind the results. Where 3p’s methodology was obvious (votes from readers), the Corporate Knights’ methodology is more complex. Responding to accusations of ambiguity, the Corporate Knights strove to make the 2010 rankings very transparent. The Corporate Knights are respected across sectors of the corporate world, so it is important that their rankings be as accurate as possible. In their “detailed methodology” PDF, the Corporate Knights are explicit about where their data comes from: multiple streams of reporting data, which account for variation, are combined with financial analysis and key performance indicators endorsed by the likes of the UN and McKinsey. In terms of disclosure, there is no winner between the 3p and Corporate Knights rankings; both meet disclosure standards suitable for their purposes.
Motive is key
Another helpful form of disclosure is who pays for the data and what the list is for. Unlike the U.S. Supreme Court, most people recognize the influence money can have on results. Rankings are suspect when the ranked companies financially support the organization ranking them, especially if that relationship goes undisclosed. The Corporate Knights earn revenue from their rankings, but do not appear to be sponsored by the companies they rank. 3p supplies news and content and makes money from advertising.
Unfortunately, there is no shortcut to analyzing metrics. There is no J.D. Power or Moody’s for sustainability rankings, though the Corporate Knights are certainly trying to become the gold standard. Yet even a leader does not deserve blind trust. Since Enron’s collapse, for example, many prestigious credit agencies, agencies that investors once trusted completely, have faced near-constant scrutiny over what appear to be “pay to play” credit ratings. So while it is great to find brands you respect, never substitute trust for comprehension. The most reliable strategy is to appreciate data and rankings for what they are, check the data, and never treat any single list as if it were the only one that matters.