February 10th, 2015

Measuring Sales Training Effectiveness: When Quick and Approximate is Enough

When organizations invest in sales training, they are eager to know how their investments are paying off.

Learning the answer doesn’t take complex research design or studies of the sort published in scholarly journals. Quick and approximate measures are often enough.

The changes in behavior and sales results after training should be major, not minor. What's needed is visible evidence that builds a reasonably high level of confidence that the sales training intervention led to a material change in results.

The standard research design for measuring such changes is to compare control group vs. experimental group results. This works in scientific discovery but doesn’t really suit sales performance system interventions. Who wants to be in the control group and be “left behind”?

An easier method for assessing impact is to measure performance during the rollout period. New sales training is typically rolled out in stages, especially in large organizations. This provides an ideal opportunity to compare the results of those who have been through the training vs. those who have yet to be trained. This can be done weekly, monthly, quarterly — whatever measurement period makes the most sense.

Ideally, you would capture results for each month following the sales training. If, when examining the data, you see similar results across groups — for example, each group's first post-training month shows a similar increase in revenue — you can note a consistent trend.

As each new group goes through training, you may need to adjust the results for systematic changes between periods, such as seasonality or other regular patterns in the business. These adjustments factor out changes unrelated to the training, helping you maintain an apples-to-apples comparison.
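For readers who want to see the arithmetic, here is a minimal sketch of the trained-vs-untrained comparison described above. All figures, month labels, and seasonal index values are hypothetical illustrations, not data from any real rollout.

```python
def seasonal_adjust(revenue_by_month, seasonal_index):
    """Divide each month's revenue by its seasonal index to strip out regular patterns."""
    return {m: rev / seasonal_index[m] for m, rev in revenue_by_month.items()}


def average_monthly_revenue(reps):
    """Average seasonally adjusted monthly revenue across a group of reps."""
    per_rep = [sum(r.values()) / len(r) for r in reps]
    return sum(per_rep) / len(per_rep)


# Hypothetical seasonal index: December runs hot, January runs cold.
seasonal_index = {"Nov": 1.0, "Dec": 1.2, "Jan": 0.8}

# Monthly revenue (in $K) for reps who have completed the training...
trained = [
    {"Nov": 100, "Dec": 132, "Jan": 96},
    {"Nov": 110, "Dec": 144, "Jan": 88},
]
# ...and for reps who have not yet been trained.
untrained = [
    {"Nov": 90, "Dec": 114, "Jan": 76},
    {"Nov": 95, "Dec": 120, "Jan": 80},
]

trained_avg = average_monthly_revenue(
    [seasonal_adjust(r, seasonal_index) for r in trained]
)
untrained_avg = average_monthly_revenue(
    [seasonal_adjust(r, seasonal_index) for r in untrained]
)

# The "quick and approximate" headline number: average lift of the trained group.
lift_pct = (trained_avg / untrained_avg - 1) * 100
print(f"Trained group outperformed the untrained group by {lift_pct:.1f}% per month")
```

The point of the sketch is that the whole analysis is a few averages and one division — no statistical modeling required to get a directionally useful answer.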

It is possible to design a system for measuring sales training effectiveness that adjusts for many different variables and detects small changes in performance, but that would be complicated and largely beside the point. I would argue that quick and approximate measurement is enough: the material changes are the ones that matter most, providing visible evidence of the value delivered by the training.

The results you are looking for should enable you to say: “In the last period, revenue for those who went through the sales training went up by X%, on average, compared with those who have yet to be trained.”

It’s hard to argue with evidence like that. It’s even harder to argue against training that achieves such measurable results.



About The Author: Carter W. Brown

Carter W. Brown, a professional director and CEO, has focused his career on helping professional service firms successfully navigate complex business and cultural changes. Most recently, Carter has concentrated on the corporate training and development, wine, executive search, and legal industries.


One Response to “Measuring Sales Training Effectiveness”

  1. February 11, 2015 at 7:34 pm, Mike Kunkle (@Mike_Kunkle) said:

    That works exceptionally well with shorter sales cycles, Carter. I've often benchmarked 3 months prior, left the month of training blank, and compared three months' post results. If the content was the right stuff and was reinforced and coached, I've seen some good results that way, without the analysis becoming rocket science. It's "evidence" vs. "proof," but often enough for the exec team.

    A few companies I’ve worked for wanted long-term deep ROI studies, which are fun (and require a lot more cross-functional cooperation), but very few pushed to that level.

    The pre/post method is a lot less effective (or more difficult and lengthy) with a long sales cycle. In those cases, you do need either a deeper analysis (often attributing some results to other factors) or you need to measure lead indicators like increase in the number of opportunities added over a time period, YOY comparisons (when seasonality comes into play), pipeline velocity, and others. Still possible, just a different approach.

