Getting Used to Better Meeting Technology
What's already possible with remote meeting software, and how might it evolve and improve?
This article originally appeared on No Jitter.
I was on a call recently in which a colleague on our all-remote team warned us at the outset that construction on her building would probably be an issue for her. During the call, when she was talking, she interrupted herself a couple of times with, “Are you hearing this?” and “I can’t believe you’re not hearing this construction noise; it’s so loud.” Nobody was bothered, so all I can assume is that the background noise suppression feature on the collaboration app really kicked in—which suggests it will soon be possible, much of the time, to have very good conferencing experiences even in less-than-optimal settings.
Very good conferencing experiences, but maybe a little jarring for participants nonetheless, as more of the conference seems to “happen” inside whatever device you have in front of you, instead of in a hybrid physical and virtual space.
After all, people on video calls still exist in their physical environment, and react to it. My colleague in the noisy apartment was bothered by the construction sounds, and it affected her experience of the meeting. The rest of us would have had no clue what was going on—which certainly hasn't been typical when someone joins your call from the airport or wherever. If she'd been wearing a noise-cancelling headset, she might have been as unaffected as we were—but not everyone is going to be that well equipped.
Something similar is happening with meeting equity. As Irwin Lazar writes on No Jitter this week, vendors are ramping up the competition to sell this capability to enterprises for meeting rooms. The promise is that multi-camera rooms, driven by AI algorithms, will essentially “cover” a room-based meeting the way a TV director covers a sports event, cutting between shots to capture the action in real time. The result will be that the unhelpful single-camera “bowling alley” view of the meeting room is replaced by images that frame in-room participants much the way remote participants already appear.
But there’s bound to be some adjustment here too. First of all, the more complex the setup, the more likely that the wrong shot will turn up, which will be distracting to the remote participants. Then there’s the question of what the experience in the room will be like. We assume everyone will immediately relax and just act naturally, letting the cameras and AI do all the work. But most people aren’t used to interacting with multiple cameras, and people in meeting rooms together tend to talk over each other, move unpredictably, and relate directly to each other. I don’t think we really understand yet how this behavior will, in turn, train the AI to recognize speakers. For example, given the well-documented tendency of men to interrupt women in meetings, a system that learns to favor whoever holds the floor could end up giving interrupters even more screen time, letting bias enter the equation.
I also wonder how well our brains will process those room-based participants as individual images, when we know they’re gathered together in the same room.
In other words, it’s easy to see the power of these multi-camera systems in a brief demo. But living with them for a series of 50-minute stretches all day might be more taxing than we now imagine. We’ll certainly adjust over time, but in the meantime, meeting equity is likely to bring new employee-experience challenges of its own.
I’ll close with a midsummer digression: If you don’t already know the story, check out this article about how the TV shot of Carlton Fisk “waving” his home run ball fair in Game 6 of the 1975 World Series came to be so iconic. How we watch video changes when someone gives us a new way to see the experience—but it often happens by accident.