Feedback is one of those things that should be simple but ends up being ridiculously hard, especially when it comes to meetings. For all too many conferences, it's like pulling teeth to get people to fill out the evaluation forms. And when they do, they all too often just run a line down the 3s or 4s on the 5-point Likert scale. The only ones who take the time to comment, for the most part, are those who either adored or abhorred the session. And their reasons for adoration or otherwise may have a lot more to do with the call they had just fielded from the office before walking in the session room door, the temperature of the coffee at the break, the fact that the speaker once worked with a colleague who said he was a real pain, the fact that the attendee's mother read the speaker's book and loved it, or a host of other reasons that have nothing to do with you, your decision-making process, or even the actual quality of the session.
But no matter how flawed our evaluations are (and I would argue that everything, from the questions we ask to the forms we use, is often fatally flawed), they're all we have to go on, so we carefully collate the answers, rank the speakers, and use the outcomes to decide who and what type of person to invite next time around.
Then along came Twitter, and all of a sudden we have a huge data dump of opinions about speakers, along with everything else that pops up on your conference's hashtag. Oh joy, some real, unsolicited feedback! But before you get too excited, remember that those who speak up on Twitter may not represent everyone your meeting serves. As Seth Godin points out in The Bacon/Yelp Correlation:
If you try to reverse engineer preferences from Yelp reviews, you're likely to make a common error. It turns out that bacon-as-a-topping comes up often in Yelp, which might lead you to believe that adding bacon to the menu is a surefire crowdpleaser.
In fact, what it tells you is that bacon lovers are more likely to post Yelp reviews.
There are now two crowds. There is the crowd of mass, of everyone, of what the average folks want. And there is the crowd of the loud, the interested and the connected.
And while your meeting likely isn't getting reviewed on broad-based sites like Yelp, those online critics hovering around your hashtag likely are a similarly self-selecting crowd, and one that may or may not reflect what the majority of people are thinking.
I know meeting managers, like the rest of us, tend to jump on those data points of one, especially if that one is particularly harsh, or particularly sweet. But, just as with the formal evaluation forms, keep the source in mind. And if you make changes accordingly, at least do so with the knowledge that, while bacon lovers may be the squeakiest online wheels, there are lots of quieter wheels turning who may in fact be vegan and not appreciate a sharp turn into cured-meat territory. Which, as Seth says, is fine:
By all means, then, get weird and amplify what the outliers want if your goal is to attract raving fans online. But at the same time, it's way too early to confuse acceptance by the critics with delight of the masses.
I'm not sure what the answer is to the age-old problem of getting good, usable, meaningful meeting feedback, but until we get it figured out, just tuck it into the back of your mind that it always, always pays to consider the source.
Update: And if you want some ideas on how to improve your evaluations, Donna Kastner has some good ones.