Who cares what you think? Feedback, Dialogue & Improvement
For sound engineers, feedback is something to be avoided. This may hold true for others of us, too.
Evaluative Feedback:
Anyone who has worked in a training or education context has received critical feedback scores. Such “scores” are often the only quantifiable measures in the domain of Training and Development, and like other scoring, they provide a basis for judging performance. The participant’s overall impression can be influenced by a good/bad morning, resenting/embracing having to return to the office four days a week, or any number of external factors. As Morrissey asks us to “Hang the DJ,” the calls to “Scrap the Facilitator” can be tough to resist when participant scores fall below expectations. In one instance that I remember very clearly, rather than side with the client in focussing on the delivery, the project lead shared that the success of a training intervention rests on the interplay of three things:
Content Relevance (Was this what the group needed?)
Participant Mindset (Were they ready for it?)
Delivery (Was the session run well?)
When asked for post-session feedback, participants can rightfully feel placed in a position of expertise (i.e. knowing what good is and how to make things better) when both “good” and “better” are difficult to pin down. Content Relevance can hit a snag when “reinforcing our common knowledge” (good?) meets “telling us what we already know” (bad?). The training Delivery invites a fairly straightforward discussion about actual behaviours (e.g. they shared tangible examples, they kept to schedule, etc.), but quickly veers into what Rob Briner refers to as pseudo behaviours (e.g. their level of energy, their approachability, etc.) that leave much to interpretation. Without discounting such scoring completely, we can agree that there is a good deal going on here beyond the spreadsheet that collates feedback form data. Like many things, sincere efforts to improve impact require collaboration among providers, clients, facilitators and end participants.
Expert Feedback:
For skill development in general, a tried-and-true means of improvement is to work with an expert instructor, coach or mentor. Such experts have deep knowledge of what high-level performance entails, even if they themselves are not at the highest level of performance (e.g. Adele works with a voice coach). Often those to whom skills come naturally struggle to instruct others because they do it intuitively and lack insight into how they do it. Hence the pressure we can feel, when asked for feedback, to provide insightful commentary, especially if that expectation is built into the query, e.g. “What could make this better?” Understandably, responding, “I actually don’t know what to tell you,” is more difficult than offering helpful (but maybe uninformed) suggestions. The movie Amadeus has a scene where Mozart receives feedback that one of his pieces has “too many notes.” Challenging the expert nature of the feedback, he cheekily replies, “Which ones would you have me remove?”
It is equally important, especially in environments with a lot going on, that we create conditions conducive to sharing real perspectives. This can be termed “creating psychological safety.” My grade 7 English teacher, Mr. Lorne Williams, taught me something about eliciting delicate information by making it harder to hide or couch. On a reading comprehension test, we answered a series of short-answer and multiple-choice questions about short stories that had been assigned as readings. Before handing the page in, we were instructed to turn the page over and record the actual number of stories that we had read, from zero (none of them) to six (all of them). I can remember feeling conflicted: Do I say “four” and maybe reveal that I am a good guesser? If I say “six,” will he question my ability to retain details? I can’t remember what I actually wrote, but the question stuck with me because it was so hard to lie.
Anonymous Feedback:
A well-intended way to make the most of hearing others’ reactions is to make feedback anonymous. The anonymity removes the threat of retribution so people can, purportedly, share what they “really think.” In this set-up, “feedback” tends to be shorthand for “developmental feedback” or “constructive criticism,” and the shroud of anonymity de-emphasizes anything positive that you would want to say. Speaking truth to power can be difficult, and the threat of retributive consequences is not helpful. That said, social media may have proven that anonymity and a lack of consequences can embolden harmful comments that lack thoughtful consideration.
One popular model for delivering verbal feedback in a work setting is to first describe the situation to set the context, then describe an observable behaviour and its perceived result. “We were in a client meeting and you challenged my statement. I think this created an impression that you and I are not aligned.” One can imagine the follow-on to this being a productive discussion about the role of optics in client interactions, as well as the role of preparedness. One can also imagine such ensuing hot takes as, “Well, if you did your research, I wouldn’t have to correct you!” or, “What are you talking about? This sort of discussion in front of the client shows our authenticity and passion.” The joys of different perspectives!
There is always a risk of discounting the message because of the messenger, especially if the feedback appears hypocritical or evinces a do-as-I-say-not-as-I-do vibe. That said, removing the messenger entirely makes the message even more discountable because we can’t assess their level of insight or estimate their desire to help. In situations where there is no “one way” or “right answer,” we need the fuller context of knowing whose perspective we are hearing.
Board Self-Assessment Evaluations (which are feedback, right?):
Such tools are listed among “best practices” for Boards seeking to improve. As the name suggests, “we” elicit feedback from our fellow Board Members as to how “we” are doing as a Board. Given the role of the Board in organizational success, such evaluations could well speak to overall performance. Similar to the training environment described above, no matter what mechanism we use to gather our feedback, multiple factors will contribute to anyone’s specific assessment of current performance. If you have ever been part of one of these, you will know that assessments can vary greatly in direction (think GOOD vs. BAD) and in amplitude (think scaling from small issue to BIG ISSUE). The task is not to determine which view is “correct,” but to use the differences as a basis for discussion.
Aligning on intent is vital to avoid the sense that the whole exercise is performative (i.e. ticking a box). Framing the exercise around these two specific questions can be effective:
What are possible avenues we could explore to improve? (No one is going to say our status quo is perfect.)
How prepared are we to undertake such efforts? (There could be completely valid reasons to NOT pursue recommendations or, at least, not now.)
Boards should be prepared to see reinforcement of things already underway, and should be open to suggestions to change, whether that is starting something new or ceasing an existing practice. Here are three broad categories, all of which include a large degree of ambiguity:
From a strategic perspective: Is the organization focussed on the right things? (Direction, as well as narrow vs. wide)
From an impact perspective: Are we evaluating effectively? (Current measures, as well as developing new ones)
From a decision-making perspective: Are we as a Board collaborating effectively amongst ourselves, as well as with Leadership? (Process, as well as actual behaviours)
With any such exercise, the danger is that a “great discussion” never turns into actions for improvement (even if those actions simply reinforce current approaches). We should note that even getting a “great discussion” to happen is no mean achievement!
New Board Members as Dialogue Sparks:
Each week of our Board Development program takes on one of these areas and gives participants frameworks and criteria with which to evaluate it. This thinking challenges new Board members to assess the current environment, not as experts who know what to do, but as observers who know what is important to discuss. No organization meets an ideal, so there will be criticism, which can start a discussion:
Criticism of the organization’s vision, mission and values can invite a discussion on several fronts: Is the focus well understood? Should we revisit this to clarify? How could it be better phrased (with the understanding that no words will capture this fully)?
Criticism of how we are evaluating impact can invite useful discussions about who determines our current performance measures and how well these attach to the actual impact that we want to be having. Maybe we have a rich conversation ABOUT the specific impact we seek…
Observations of the behaviours we see at Board meetings can spur discussions about how we are engaging with each other and to what end. If we observe unevenness in how Board members contribute in meetings, do those contributing more dominate in ways that are not helpful? Do those contributing less have unspoken contributions? Do we need clarity on roles? Should our Board members be better prepared? Should we try to make the pre-reads more user friendly?
To answer the titular question, especially for new Board members:
We should all CARE what you THINK, and make use of your perspective to further develop your effectiveness on our Board AND our effectiveness as a Governance body.