In the previous week, I marvelled at how novel an experience it was to view, instantaneously and all at once onscreen, the facial expressions of every learner as they reacted to the speaker and the media shown.
In this lesson, the emotions of the tutor and guest speaker were similarly amplified, optically and acoustically, through my 24-inch monitor and amplifier, with paralinguistic markers and facial expressions more visible than they would have been onsite. Most noticeable were tone of voice, nostril flare, lip compression, contraction of the orbital muscles narrowing the eyelids and ending in a gaze cut-off (to the keyboard or an area beyond the computer monitor, perhaps), movement of the muscle groups lowering the brows, and constriction of the facial orifices such as the oral cavity — all suggesting unpleasant stimuli.
I wonder whether the learners picked up on these paralinguistic cues as well. Were they less vocal and less fidgety in the second half of the lesson I observed, compared with the first Adobe Connect lesson? Were their postures more erect and less relaxed, their gaze more focussed?
If the technology makes paralinguistic and non-verbal cues more explicit onscreen than onsite, it would be pertinent to examine how the speaker's level of preparedness, expressions, and actions influence the learning process and outcomes, especially learning motivation and attitude (the affective domain).