
Partner Evaluations: “It’s got a good beat and you can dance to it”

For decades, “It’s got a good beat and you can dance to it,” or some variation of that sentence, was heard over and over on American Bandstand’s Rate-a-Record segment, often with complete disregard for whether you actually could dance to the tune in question, good beat or not. Arbitrary and subjective, these ratings had no common criteria; they were based on how the teens providing the score happened to be feeling that day and on their personal music preferences. And while there is no quantitative data on the impact of these “scores” on the show’s large North American audience of record-buying teens, there was likely at least a small reflection of their views in related record sales.

Perhaps only the egos of the rated artists were affected by that scoring, but this kind of random rating can have a much more serious impact when applied to a business decision, such as selecting a new technology platform or agency partner. These decisions have deep, long-term effects and should not be made without structure and context.

Rating vs. Ranking

Whether you have already gone through the written RFP process when you first meet potential partners, or follow Digital Clarity Group’s methodology of meeting with potential partners before issuing the RFP document, ranking rather than rating will give you a more definitive view of which potential partners can best meet your business needs, and why.

Ranking, as in assigning 1st, 2nd, and 3rd place, rather than having individual evaluators rate or score each option on an arbitrary 1-to-5 scale, simplifies identifying the top choice by removing the ambiguity and complexity of trying to assign a score. Provide the selection team with a summary sheet covering the agenda and purpose of the on-site information exchange and product demonstration, as well as the agreed-upon focal needs, related business goals, and the types of functionality and competencies being sought; this helps evaluators focus and gives them common ground for their appraisals. Each stakeholder should rank each platform and/or partner team on the components of the day that are relevant to them. There is not much point in asking the copywriter to rank a potential partner’s technical integration capabilities if they don’t have a technical background, but by all means let team members provide input outside their typical scope if they have the related knowledge and are keen to contribute.

Once the rankings are in, you may want to take it a step further and weight individual or team rankings for the different areas of evaluation, based on how pertinent those areas are to what the group or individual does. For instance, content contributors’ and editors’ rankings might be given more consideration on a platform’s usability and a partner’s content strategy approach, whereas your system architects’ opinions would carry more weight on the platform’s technical capabilities and the potential partners’ technical chops.
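The weighting described above amounts to simple arithmetic: multiply each rank position by a role-and-area weight, then sum per partner, with the lowest total ranking first. The sketch below illustrates one way to do that; all partner names, roles, areas, and weight values are hypothetical, not part of any prescribed methodology.

```python
# Illustrative sketch: weight stakeholder rankings by evaluation area.
# Names, areas, and weights are made up for the example.

# Each stakeholder ranks every partner (1 = best) within each evaluation area.
rankings = {
    "usability": {
        "editor":    {"PartnerA": 1, "PartnerB": 2, "PartnerC": 3},
        "architect": {"PartnerA": 2, "PartnerB": 1, "PartnerC": 3},
    },
    "technical": {
        "editor":    {"PartnerA": 2, "PartnerB": 1, "PartnerC": 3},
        "architect": {"PartnerA": 3, "PartnerB": 1, "PartnerC": 2},
    },
}

# Role weights per area: editors count more on usability, architects on technical.
weights = {
    "usability": {"editor": 2.0, "architect": 1.0},
    "technical": {"editor": 1.0, "architect": 2.0},
}

def weighted_totals(rankings, weights):
    """Sum weighted rank positions per partner; the lowest total ranks first."""
    totals = {}
    for area, by_role in rankings.items():
        for role, ranks in by_role.items():
            w = weights[area][role]
            for partner, rank in ranks.items():
                totals[partner] = totals.get(partner, 0.0) + w * rank
    return totals

totals = weighted_totals(rankings, weights)
# Order partners from best (lowest weighted total) to worst.
final_order = sorted(totals, key=totals.get)
```

The exact weight values matter less than agreeing on them before the rankings come in, so the math cannot be tuned after the fact to favor a preferred outcome.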

3 steps to successful ranking

Stakeholder involvement and consistency are key to getting the most thorough and valuable assessment.

Step 1 – Observe and take notes.
Go through the information exchanges and have all stakeholders capture their own comments and subsequent rankings for the scenarios, particularly those pertinent to their role. If possible, have the same team members evaluate the same components, requirements, and scenarios in each vendor/partner session.

Step 2 – Compare the different sessions.
Which potential partner did better in which area, especially the areas that mean most to you (the focal needs)? How do the systems stack up? Do you feel the partners have a deep understanding of the platform(s) being considered and the capabilities to fulfill the role you want and need them to play?

Step 3 – Rank the sessions.
First as individual evaluators, then as a team, rank each of the on-site sessions in relation to the others, assigning 1st, 2nd, and 3rd place. There can be no tie! This should help identify which vendor and service provider will be most successful in meeting your implementation needs.
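Moving from individual rankings to a team ranking in Step 3 can be as simple as summing each vendor's rank positions across evaluators and ordering by the totals, then flagging any ties for the team to break by discussion. The sketch below assumes that approach; the vendor names and rankings are illustrative only.

```python
# Illustrative sketch: combine individual evaluators' rankings into a
# team ranking by summing rank positions (lowest total wins).

individual_rankings = [
    {"VendorA": 1, "VendorB": 2, "VendorC": 3},   # evaluator 1
    {"VendorB": 1, "VendorA": 2, "VendorC": 3},   # evaluator 2
    {"VendorA": 1, "VendorC": 2, "VendorB": 3},   # evaluator 3
]

def team_ranking(individual_rankings):
    """Return vendors ordered best-to-worst, plus a flag for tied totals."""
    totals = {}
    for ranks in individual_rankings:
        for vendor, position in ranks.items():
            totals[vendor] = totals.get(vendor, 0) + position
    ordered = sorted(totals, key=lambda v: totals[v])
    # Enforce the "no tie" rule: tied totals must be broken by team discussion.
    tied = len(set(totals.values())) < len(totals)
    return ordered, tied

order, has_tie = team_ranking(individual_rankings)
```

Note that the computed order is a starting point for the team conversation, not a substitute for it, which matches the role the ranking plays in the process described here.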

Ranking rules apply to evaluating the written RFP responses as well, and really to any part of seeking new partners or technology where an evaluation against criteria is warranted, such as a CMS selection process. As with the on-site evaluations, create an RFP evaluation matrix as a guide, and have all the evaluators fill it in.

Gather more data

If you have held the information exchange sessions ahead of sending out the RFP, you now have the opportunity to tailor the RFP based on what you learned, or didn’t, during the on-site sessions. If you already have the RFP responses, make sure to follow up with the potential partners to get answers to outstanding questions before finalizing the rankings and making a selection decision.

Discussion is key

With the ranking results and commentary from the on-site information exchanges, the written responses to the RFP, and the data collected throughout the selection process, you are well armed for the discussions needed to decide on your preferred partners. And discussion is the keystone of the ranking process: the ranking serves not as the decision maker, but as fodder for debate about the rankings and the different perspectives behind them. Open dialogue among stakeholders about why they ranked the potential partners as they did, and where they see value or potential pitfalls, is vital to ensuring a consistent understanding of the offered capabilities, and buy-in from (the majority of) the team on the partner(s) selected.

So you have decided which vendor and/or partner has the best beat that you think you can groove to, and you are ready for the solo dance, or, in selection terms, the proof-of-concept (PoC) phase.

