Introduction: Body language is important:
Drawing on Linguistics and the Neurosciences, this article explains why it is so important to the commercial function for body language to be visible during business communications, especially during critical interactions. The more important the meeting, the more important it is for the involved parties to see each other. We’ll explain why face-to-face communication is so much more effective than the alternatives in promoting the following communicative outcomes:
Understanding
Information transfer
Clarifying miscommunications: Eliciting uptake / back-channelling
Emotional transfer
Team building
Building trust
The article will start by discussing the various modalities of language, after which we’ll explore non-verbal communication (NVC) and deep-dive into body language (kinemics). We’ll then explore kinemic primacy during interactions and how its absence exacerbates our tendency to ascribe emotional responses to stimuli before those stimuli are processed cognitively; before a person has even begun thinking about them! On that bombshell, let’s begin our examination of human communication in the most general terms, progressively becoming more specific in focus. The conclusion we’ll find ourselves drawn to is this: if a conversation is important, it is vital for body language to be visible during that conversation.
What makes us Human, and what makes an organisation tick?
The human ability to communicate abstract concepts is what separates us from all other organisms on the globe (Aitchison 1976). Humans are social creatures (Aitchison 1976; Sandstrom et al. 2014). Social creatures operate within social systems. Social systems depend upon communication between their constituent parts to exist and function. The more complex the social system, the more complex its communicative requirements. Commercial / NFP organisations are human social systems, therefore these organisations depend upon communication between their constituent parts to exist and function. The organisation comprises people: no matter what the organisational structure, at its most fundamental level the primary facilitator underpinning any commercial entity is communication; communication via language.
What, though, is language? It’s a word one might hear daily, but its usage is often not entirely accurate. As we’ll later discuss some subtle concepts surrounding the human communication function, it will serve us best to first establish some general definitions.
Defining ‘Language’:
‘A Language’ denotes a communication system common to a geographic region or a nationality, such as ‘The English Language’ or ‘The Welsh Language’. However, for the linguist ‘Language’ has a more potent and general meaning: productive, structured, symbolic communication. That sounds similar to a definition of ‘semiotics’, doesn’t it? Semiotics and language are intimately related constructs, but they are not identical; language is a subset of semiotics. Whilst the discipline of semiotics deals with all types of symbolic communication (semiosis) at all levels of complexity, ‘language’ deals with symbolic communication that is productive and structured by grammar. In this context ‘productive’ means that limitless concepts can be constructed and conveyed using limited tools: each language has a finite number of words (its lexicon), yet these words can be combined to describe all we can conceive of, even if only to the extent of describing something as ‘indescribable’ (which is in itself descriptive!). Such is the wonderful productivity of natural language. ‘Grammar’ refers to the rules and structures of language. Whilst grammar is a key feature of language, and one not shared by all semiotic systems, it is not grammar alone which demarcates language from other forms of semiosis, but productivity combined with grammar.
An example semiotic system that is not language, and why it isn’t:
To further illustrate how productivity defines language, let’s look at the semiosis of interacting with traffic lights (in Queensland, Australia, in 2016) and how it lacks productivity. Traffic lights are found by ‘stop lines’ painted on the road, either at junctions or by pedestrian crossings. Their location is in itself a semiotic sign: the existence of traffic lights near a stop line constitutes a sign imbued with the meaning ‘be prepared to receive instructions from these traffic lights which will influence your passage past this point: be attentive to the signal’. Having absorbed that forewarning, a green light means ‘Continue / Proceed’, a red light means ‘Stop / Don’t Start’, and an amber light means ‘Stop now unless it’s dangerous to do so – a red light follows’. The above sentence represents a taxonomy of three lexical items or, to put it another way, a definition system comprising three units of meaning. In linguistic terms this is the lexicon of traffic lights in Queensland. So, we’ve now defined the building blocks of a communicative system, but what construct arranges and coordinates these blocks to create meaning as a system? Having discussed what meaning inheres to which light, let’s examine the meaning expressed by the order in which the lights activate:
A red light is always followed by a green light
A green light is always followed by an amber light
An amber light is always followed by a red light.
This sequence is arbitrary in construction but accepted by convention. That means there is no pre-ordained naturalistic meaning inherent in this sequence, but that somebody ascribed meaning to it, which was consequently absorbed and accepted by society. The same is true of the meaning assigned to the individual lights (although arguments have been made regarding the non-arbitrary / naturalistic nature of some colour coding (Taylor 1995)). As long as the lights are functioning correctly there is no exception to these sequence statements. However, if this sequence is broken, the lights are regarded as malfunctioning (they no longer operate according to their convention). They lose their meaning. Semiosis stops in all ways bar transmitting the default message ‘this traffic light is now malfunctioning and is no longer useful for controlling traffic flow’.
The sequence in which the lights are activated provides the structure via which the lights convey meaning: the sequence is the grammar of this semiotic system, through the application of which the lights communicate traffic commands. If there is any deviation from the grammatical rules of the activation sequence stated above, then the meaning is lost from the system. An English-language illustration of how ordering lexical items conveys meaning is this: there is meaning in the phrase ‘The cat sat on the mat’ because it adheres to the grammatical structure of English, whereas mixing the word order to make ‘Sat mat the the on cat’ loses the meaning because the structure has changed such that the sentence no longer adheres to the grammar of English, despite possessing exactly the same lexicon. From the above we see the semiotic model of a set of traffic lights possesses a lexicon, a grammar, and interlocutors (someone to communicate with) who share the same understanding of the meaning inherent in the three different coloured lights and their order of activation. These represent some, but not all, of the aspects of language.
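To make the lexicon and grammar of this little system concrete, here is a minimal sketch in Python (purely illustrative; it is not drawn from any real traffic-control software). It encodes the three lexical items and the single ordering rule described above, and treats any deviation from that rule as the ‘malfunction’ default:

```python
# A minimal sketch of the traffic-light semiotic system described above.
# Purely illustrative: three lexical items (the lexicon) and one ordering
# rule (the grammar). Any deviation from the sequence is a "malfunction".

LEXICON = {
    "green": "Continue / Proceed",
    "amber": "Stop now unless it's dangerous to do so (a red light follows)",
    "red":   "Stop / Don't Start",
}

# The grammar: each light may only be followed by exactly one other light.
GRAMMAR = {"red": "green", "green": "amber", "amber": "red"}

def interpret(sequence):
    """Decode a sequence of lights, checking it against the grammar."""
    messages = []
    for previous, current in zip(sequence, sequence[1:]):
        if GRAMMAR[previous] != current:
            # The sequence is broken: the system stops meaning anything
            # except "this traffic light is malfunctioning".
            return ["Malfunction: signal no longer controls traffic flow"]
        messages.append(LEXICON[current])
    return messages

print(interpret(["red", "green", "amber", "red"]))  # valid sequence
print(interpret(["red", "amber"]))                  # grammar violation
```

Whatever valid sequence you feed this sketch, it can only ever emit the same three messages (or the malfunction default), which is precisely why the system fails the productivity test discussed next.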
If we have a lexicon, a grammar and a shared understanding of the semiotic system, what then is missing that prevents us from defining it as a language? Whilst being vital to the smooth operation of our city streets, the code transmitted by traffic lights does not exhibit enough functionality to classify as a language. Why not? Because this system is not productive in and of itself. Given the lexicon and grammar present, the range of possible communicative concepts is very limited: try programming traffic lights to convey the meaning behind the phrase ‘subsequent to April 2016, on the second Wednesday of every month you will proceed to your nearest online banking interface and deposit $1,000.00 into the 5thCorner bank account: in perpetuity.’ With just three lights at your disposal it is not possible to convey such depth of meaning. The only way to use this simple interface to express such complex meaning would be to employ the lights as visual output devices for a separate code capable of expressing such concepts using only three lights with binary states. Morse code (or a variant thereof) could do the job, but Morse code is simply a method of transmitting language (letter by letter) when one is unable to vocalise or write the language in question. If used to convey Morse code, the lights have become just a vehicle for a higher-level semiotic modality, itself merely conveying a natural language.
Language is not just speech:
Now that we’ve established the general functional characteristics of language, let’s examine the forms in which it manifests. People often perceive ‘language’ as meaning ‘speech’. Not so: De Saussure (1916), widely regarded as the progenitor of modern linguistics, stated that langue is not to be confused with human speech. So what, then, are the forms of language? In terms of the modalities available for conveying meaning, linguists define natural language as consisting of cues transferred both via the verbal channel and the non-verbal channel. However, in the minds of most people language is still understood as a communicative system occurring primarily on the verbal channel (Segerstrale & Molnar 1997:4). Such is the understanding this article aims to correct.
The Verbal Channel and the Non-Verbal Channel:
Before explaining how much communication occurs without the verbal channel, let’s first establish exactly what the verbal channel is. The verbal channel consists of vocalised words and the written word. A natural language can exist without the written word, but not without the spoken word: historically and developmentally, the spoken word precedes the written word. As a species we spoke before we wrote, and as individuals we speak before we write (in a majority of cases large enough to affirm this as the standard). However, the perception of language occurring solely on the verbal channel is far from accurate. During the speech act (not to be confused with ‘vocalisation’) humans sing what they say and accompany it with mime, all of it meaningful. During speech, non-verbal cues accompany all verbal cues.
The nonverbal channel refers to units of meaning conveyed by the following vehicles: gesture (including oculesics), expression, tone, intonation, volume, métier, non-verbal interjections (such as laughter, growling, spitting out your drink in surprise), and turn-taking. Even proxemic cues can influence the meaning of an utterance. It’s a fascinating subject.
Intonation: An obvious example of a non-verbal modality:
In beginning an exploration of the extent to which language is not simply spoken or written, let’s examine intonation as a communicative modality. We’ll analyse a normal, mundane telephone conversation, and only its first utterance, using English as the communicative device for this exercise (Standard English for the sake of simplicity, but any local variant will work equally well to explicate the point at hand). The English language, in all its variegated glory, is what’s known as an ‘intonational language’. This means that each utterance (most commonly structured in the form of a sentence) is delivered with an accompanying ‘tune’ (consisting of variations in pitch) applied along the length of the utterance. This ‘tune’ is called intonation. Intonation is clause specific. Be careful not to confuse ‘intonational’ with ‘tonal’: English is an intonational language, as is French, because the meaningful variances of pitch occur along the full length of the clause. The Chinese languages, however, are tonal languages, as are Thai and Vietnamese. In a tonal language the tunes (pitch variations, including portamento) do not occur along the clause to anything approaching the same extent, but instead occur within each syllable, meaning that each syllable is defined as much by the pitch event occurring within it as by its other phonemic features. For example, in Mandarin, a language with four tones, where a word is transliterated into the anglicised written form ‘SHI’, the choice of tone applied to the syllable ‘SHI’ completely alters its meaning: a version of ‘SHI’ carrying the first tone ( – ) might look like this when written, ‘师’, whereas the same ‘SHI’ carrying the fourth tone ( \ ) might look like ‘是’. I say ‘might look like’ because each variant has plenty of homophones: words that sound the same but have different visual forms and (most often) different meanings. Here’s a visual representation which may help to clarify:
[Figure: English version (intonational: the tune happens across the clause)]
[Figure: The same concept in Mandarin (tonal: the tune happens within each syllable)]
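For readers who like things spelled out, here is a toy contrast of the two systems (illustrative only: the tone glosses are common dictionary readings, and the English clause is my own invention, not drawn from any cited study):

```python
# A toy contrast (illustrative only) between a tonal and an intonational
# system, using the two Mandarin 'SHI' examples from the text and an
# invented English clause.

# Tonal: the pitch event sits inside each syllable and changes word identity.
mandarin_shi = {
    "shi + tone 1 (high level)": "师 (commonly glossed 'teacher / master')",
    "shi + tone 4 (falling)":    "是 (commonly glossed 'to be')",
}

# Intonational: the tune stretches across the whole clause and changes the
# utterance's force rather than the identity of the words within it.
english_clause = {
    "The meeting is at ten. (falling tune)": "statement",
    "The meeting is at ten? (rising tune)":  "question",
}

for phrase, reading in {**mandarin_shi, **english_clause}.items():
    print(f"{phrase:40s} -> {reading}")
```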
Interesting, but possibly a digression; let’s get back on track. To reiterate: unlike Mandarin, which is a tonal language, English is intonational. Now that we understand what intonation is, let’s apply that knowledge to our analysis of a telephone conversation in English. When interacting via telephone we can’t see our interlocutor, so we can’t observe or process their visual communicative cues. We are therefore left with the following modalities to process:
Verbal
Intonation
Volume
Metier
In this case we’re considering intonation as it pairs with verbal transmission. Imagine a telephone conversation where each party talks in complete monotone. It’s not a usual scenario, is it? The comedian Steven Wright made a successful career delivering his jokes in a ‘deadpan’ style, using minimal intonation. It’s so unusual for an English speaker to omit congruent intonation that we instinctively laugh when we hear it. Or it confuses us.
Because we have no access to audio in this instance, we can instead use written English punctuation to trigger intonation events in the spoken realm. Read the following examples aloud to yourself…
Example 1:
Them: Phone rings.
You: Hello?
Example 2:
Them: Phone rings.
You: Hello!
When I read aloud these two different deliveries of ‘hello’ I note the following intonation events:
Example 1:
Them: Phone rings.
Me: Hello? – uttered such that the pitch at the end of the word is higher (sharper) than the pitch at the start of the word. A visual representation of this would be a rising pitch contour.
Example 2:
Them: Phone rings.
Me: Hello! – uttered such that the pitch at the end of the word is lower (flatter) than the pitch at the start of the word. A visual representation of this would be a falling pitch contour.
If you (the reader) were to repeat these two utterances back to back, would you infer any kind of mood, or social intent from the different intonations? If English is your ‘Mother Tongue’ (and you are not hearing / pronunciation impaired / suffering related brain impairment) you will derive differing meaning from each of the two utterances based on their intonation. That’s how an intonational language works.
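For the programmatically minded, the inference you just made can be caricatured as follows. This is a toy illustration, not a model of real speech perception; the pitch values are invented purely to show the shape of each tune:

```python
# A toy illustration (not a model of real speech perception) of the two
# "Hello" deliveries above, represented as simplified pitch contours in Hz.
# The values are invented purely to show the shape of each tune.

hello_question    = [180, 190, 205, 230]  # pitch rises towards the end: "Hello?"
hello_exclamation = [230, 215, 195, 175]  # pitch falls towards the end: "Hello!"

def inferred_intent(contour):
    """Crude rule of thumb: a rising tune reads as a query, a falling tune as a greeting."""
    return "open question / uncertainty" if contour[-1] > contour[0] else "confident greeting"

print(inferred_intent(hello_question))     # open question / uncertainty
print(inferred_intent(hello_exclamation))  # confident greeting
```

The point is simply that the same lexical item, ‘hello’, carries different social meaning depending on the contour laid over it.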
By way of a brief digression, and further to our earlier reminder that intonational languages are not tonal, here’s a nice play on words: intonation conveys tone; ‘tone’ as in emotional tone (describing emotional content: affect). I’m sure I’m not alone in having endured the scolding (in my distant youth, of course), ‘Don’t use that tone of voice with me, Young Man!’ We all know what is meant by this particular phrase, to wit: ‘don’t talk to me in such a way as to convey the negative emotional and social meaning that I am inferring when I hear you speak in the fashion you now employ’. Nerd-humour alert: from a linguistic perspective the irony behind the phrase ‘Don’t use that tone of voice with me’ is that the interlocutor is actually annoyed by the intonation used, not the tone… Cue awkward laughter.
‘So,’ I hear you say, ‘on the telephone intonation conveys meaning; so what? Why does this matter to me? I already have a telephone which I use very well. How does this lengthy article relate to face-to-face meetings in the business space?’ Having (hopefully) made it clear how intonation (a non-verbal modality) mediates the meaning of the spoken word (a verbal modality), let’s now examine how much meaning inheres in the visual non-verbal modalities of communication between you and your staff, and between you and your clients. It’s quite shocking how much pragmatic and affective meaning is conveyed visually during conversation. Please read on.
Three preparatory linguistic definitions:
These three definitions will aid in understanding the rest of this article:
Affect: Emotional response, emotional judgement, or emotional effect.
Pragmatics: The study of converting abstract internal meaning into a transferrable code, which can then be shared with interlocutors.
Semantics: The study of converting that transferrable code back into an abstract meaning upon reception. If 100% efficient communication has occurred, then the recipient’s semantic understanding is the same as the pragmatic meaning transmitted. The really fun question here is: how can we tell the meanings are the same? How can we tell the semantics match the pragmatics? To answer this question, in the interests of brevity, we’ll avoid any ontological questions about ‘facts’ as posed by notions of a subjective universe (Russell 1946), or Plato’s argument that, in an ever-dynamic universe, there are no ‘facts’ beyond his perfect ‘forms’ or ‘ideas’, which, by their nature, are impossible to convey essentially (Popper 1945), and assume (assert if you prefer, I’m not here for an argument) that there are such things as ‘facts’ which can be transferred between individuals.
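A toy sketch may help locate the problem. Here ‘meaning’ is reduced to a tiny dictionary of propositions, which real meaning is of course nothing like; the sketch only shows where the comparison between pragmatics and semantics would have to happen:

```python
# A toy sketch of the pragmatics/semantics round trip described above.
# "Meaning" is reduced to a dictionary of simple propositions; the sketch
# only shows where the comparison problem sits.

def encode(intended_meaning):
    """Pragmatics: turn abstract intent into a transferrable code (here, a string)."""
    return f"Meeting moved to {intended_meaning['day']} at {intended_meaning['time']}"

def decode(message):
    """Semantics: turn the received code back into abstract meaning."""
    words = message.split()
    return {"day": words[3], "time": words[5]}

intended = {"day": "Friday", "time": "10:00"}
received = decode(encode(intended))

# The only way to check that semantics match pragmatics is to compare the two
# internal representations; something interlocutors can never do directly.
print(received == intended)  # True: 100% efficient transfer, in this toy case only
```

Interlocutors can never perform that final comparison directly, which is why, as we’ll see, they rely on feedback cues instead.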
Face-to-face is best, but it could still be better:
We tend to think that what we say is what is heard, and that communication within our organisations is efficient. The science would suggest otherwise, I’m afraid. We can all agree that face-to-face communication is our preferred communicative model because that’s how we evolved; I’ll provide evidence to support that assertion later. But if face-to-face were not the preferred communicative model for important discussions, then why do our senior managers and state premiers all meet face-to-face, and why do successful BDMs and company and national representatives all meet face-to-face with their clients and counterparts? One has to ask why face-to-face is the preferred communicative model for important communication. The simple answer: it is the only communicative model which allows our interlocutor (the person with whom we communicate) full access to all the communicative modalities and cues transmitted during the interaction. However, it’s not all good news, I’m afraid: even under the circumstances most conducive to communication, due to the emotional and social ‘noise’ we experience at all times, we’re not nearly as good at transferring non-social information as we’d like to think we are. An example: in a face-to-face exchange between university students of the same language group (Canadian English), where both interlocutors wished to exchange information regarding very simple concepts, one study (Li 1996) has shown a maximum actual information exchange rate of 75%. This means that, given the best possible platform upon which to convey information, the top 5% of the Canadian educational cohort in 1996 could only transfer 75% of the information regarding a very simple proposition. Now transfer that terrifying revelation to your organisation, where complex notions are conveyed daily, and under circumstances that do not always enjoy the luxury of the maximally efficient communicative model. It’s not hard to understand why so much miscommunication occurs.
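To see why that figure is so alarming at organisational scale, here is a back-of-the-envelope sketch. The compounding assumption is mine, not Li’s: it simply supposes, for illustration, that each time a message is relayed onward, at most 75% of the information survives the hop:

```python
# A back-of-the-envelope illustration of Li's (1996) figure. Assuming, purely
# for illustration, that each relay of a message transfers at most 75% of the
# information, the retained fraction compounds as the message moves through
# an organisational chain.

BEST_CASE_RATE = 0.75  # upper bound measured for simple, face-to-face exchanges

for hops in range(1, 5):
    retained = BEST_CASE_RATE ** hops
    print(f"{hops} relay(s): at most {retained:.0%} of the original information")
# 1 relay(s): at most 75% ... 4 relay(s): at most 32%
```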
How much communication occurs on the non-verbal channel?
Now that we’ve established that language occurs on both non-verbal and verbal channels, and that information exchange is less than maximal, what more does one need to know? How much nonverbal information is exchanged in dyadic interaction? Estimates range from 66% (Birdwhistell 1970) to 93% (Mehrabian 1972). The most palatable estimate (which does not contradict either of the above) is over 55% (Klopf & Park 1982:72). The high communicative power of interaction without words can be understood by reference to Lippa’s (1998) study, in which students accurately assessed gender preferences, masculinity and so on based solely on nonverbal cues. Also noteworthy is Hale et al.’s (1997) study, which found that the partners of depressed patients (in America), and the patients themselves, exhibited unusually low levels of NVC when interacting with each other, compared to higher levels when interacting with strangers. Whether the initial depression causes or results from the decreased NVC is not clearly established, but there is a strong positive correlation suggesting that the suppression of communicative cues in this channel contributes to further depression. NVC is important!
Let’s look at some management processes internal to an organisation. Staff training, staff reviews, crisis meetings: all communicative events between management and staff; intra-organisation interaction. A critical aspect shared by these interactions is that they all require real-time, direct communication between management and staff. Antecedent even to communication with any external stakeholders, these are critically important communication events upon which the success of the company rests. Ideally, these events occur as efficiently as possible, for as little cost as possible, whenever required. We’ve seen above the importance of NVC to communication, so it makes sense that critical business communication should occur in an environment that facilitates the transfer of NVC. Further to the earlier questions regarding the preponderance of face-to-face meetings when things are really important, the next section of this article details the ‘kinemic’ (Birdwhistell 1970), ‘body language’ aspect of NVC.
Body Language: Kinemes:
Haviland (2004) describes kinemes as often being characterised by their language-like qualities on one hand and their divergence from language on the other. Body language is so important to communication that various taxonomies have evolved to codify it. By way of stressing their importance to efficient communication, and in order to make these subdivisions of bodily communicative tools easily understandable, here is a brief outline of some commonly referenced definitions of kinemes:
Addressing overarching, general concepts Haviland (2004) provides the following three categories of kinemes:
- Conventionalised language-specific emblems, which in most ways are just like words or spoken expressions, except that they are performed in an unspoken modality. Emblems are regarded as the kinemes closest in essence to verbality (Ekman & Friesen 1969; McNeill 1992; Payrato 1993). As with speech, separate technical disciplines have their own ‘technical register’ emblems (Argyle 1988; Finnegan 2004), e.g. auctioneers, traffic police, musicians, etc.
- Gesticulation: gestures demanding an obligatory accompaniment of speech; they lack language-defining properties, possess idiosyncratic form-meaning pairings, and show a ‘precise synchronisation of meaning presentations in gestures with co-expressive speech segments’. These include ‘metaphoric’ gestures and ‘iconic’ gestures (which look like that which they signify).
- Pointing gestures, also referred to as ‘deictic’ gestures.
McNeill (1992) adds the following categories to this base:
- Beats: minuscule signals expressing the speaker’s conception of the discourse as a whole (such as the eyebrow raising of a television newsreader, used to emphasise a point).
- Cohesives: these spell out continuities by co-occurring with the recurrent theme of the speech alongside which they appear.
The most commonly cited taxonomy is that of Ekman & Friesen (1969), who use only four categories (sketched in code after the list below):
- Emblems: culture-specific visual units of meaning.
- Illustrators: nonverbal hand gestures used to complement spoken interaction; the most ‘pictorial’ kinemes (Ting-Toomey 1999:124). This taxonomy includes deictics as illustrators.
- Regulators: the use of vocalics (or their absence), kinesics and oculesics to regulate the pacing of an interaction (Ting-Toomey 1999); these are used at a very low level of awareness.
- Adaptors: nonverbal habits and gestures such as covering your mouth when you cough or scratching an itch.
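As promised above, here is a minimal sketch of Ekman & Friesen’s four categories used as a simple tagging scheme. Only the category names come from the taxonomy; the example observations are invented for illustration:

```python
# A minimal sketch of Ekman & Friesen's (1969) four categories as a tagging
# scheme. The example observations are invented; only the category names
# come from the taxonomy discussed above.
from enum import Enum

class Kineme(Enum):
    EMBLEM = "culture-specific visual unit of meaning"
    ILLUSTRATOR = "hand gesture complementing speech (includes deictics)"
    REGULATOR = "cue pacing the interaction (vocalics, kinesics, oculesics)"
    ADAPTOR = "nonverbal habit (covering a cough, scratching an itch)"

observations = [
    ("thumbs-up to a colleague", Kineme.EMBLEM),
    ("pointing at a chart while explaining it", Kineme.ILLUSTRATOR),
    ("nodding to signal 'your turn to speak'", Kineme.REGULATOR),
    ("rubbing eyes during a long meeting", Kineme.ADAPTOR),
]

for behaviour, category in observations:
    print(f"{behaviour:45s} -> {category.name}")
```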
Now that you are aware of these taxonomies, there’s plenty of fun to be had categorising kinemes as you encounter them in everyday use. Please note also that these descriptors are still not exhaustive; they do not include considerations of posture as a marker of meaning. It’s great being a person, isn’t it! We all engage in this borderline mystically complex communicative dance all the time, mostly unconsciously. That said, we immediately notice if any communicative modalities are missing or being suppressed.
Miscommunication on the nonverbal level creates misunderstanding on a general level:
Here’s a super important concept: miscommunication on the nonverbal level creates misunderstanding on a general level (Schneller 1986; 1988). Why is kinemics so important to communication? Numerous studies suggest kinemes are processed and produced as aspects of pragmatic units. Haviland (2004:198) notes a ‘semiotic complementarity between gesture and its accompanying speech…’ (albeit without specifying form). Kendon’s (1997:110-111) inductive assertion of a ‘single plan’ (also supported in essence by Lausberg & Kita (2003)), to the effect that ‘[s]peech and gesture are produced together, and that they therefore must be regarded as two aspects of a single process…’, whilst being derived from observation (and again lacking specificity regarding form), is partially supported by neurological studies. Thompson et al. (2004:412) found that the left hemisphere of the brain is used when decoding human movement and signalling: the same area of the brain regarded as responsible for processing speech. As Broca’s and Wernicke’s areas are generally accepted to control pragmatic and semantic processing respectively (Aitchison 1976), and both are situated in the left hemisphere, these results, coupled with the finding that ‘[t]he scalp distribution of the N400-effect, which is more posterior than usually reported in picture processing, suggests that the semantic representations of the concepts expressed by meaningful hand postures have similar properties to those of abstract words’ (Gunter & Bach 2004:52), strongly suggest that nonverbal communicative signals accompany verbal input through the brain during semantic processing. That the same is true of pragmatic processes has yet to be established neurologically (as far as I am aware). However, Kendon claims a robust synchronisation between gestural and verbal performances, suggesting certain illustrators (Ekman & Friesen 1969) operate as ‘lexical affiliates’, functioning to enhance verbal meaning. Further to which, McClave (2000:855) found users of American English to exhibit codifiable consistency in their head movements, suggesting an emblem-like pragmatic force behind their usage. In simple terms, kinemes (body language cues) are strongly posited as being processed like the spoken word.
As yet I have found no literature indubitably ascribing the original evolutionary cause of these results to either nurture via acculturation or to hard wiring, but there can be no doubt that NVC is prior to verbal communication both ontogenetically and phylogenetically (Goldschmidt 1997:229). Although not completely sure of the cause of the positive correlation, Watson O’Reilly et al. (1997) found that children whose language skills developed early manifested behaviour suggesting a positive correlation between the early usage (and early phasing out) of symbolic gestures as a primary communicative channel and the ability for linguistic predication. Put more simply, the infants who learned how to encode and decode body language earliest developed into the most linguistically adept. Bates, Bretherton, Shore, and McNew (1983:65) make the strong claim that ‘all of the child’s first words…begin as actions or procedures for the child. The infant does not have her first words, she does them’. Finnegan (2002) also notes the ontogenetic primacy of NVC but, unfortunately, offers no empirical evidence to support the claim. To conclude: if we embark on our lifelong communicative journey using the non-verbal channel, it is little wonder that non-verbal cues are so important to our communication.
A digression: Comparing human NVC with that of the non-human primates:
Readers of a Creationist bent have no need to read the following paragraph: in fact the next paragraph serves no purpose for Creationists or Evolutionists alike, other than to share the information that sociologists and primatologists (tautologous? that depends entirely on the Reader’s perspective) are interested in the parallels between human kinemics and primate kinemics.
That many human facial expressions are shared by many of the non-human primates (Preuschoft & van Hoof 1997; Suomi 1997), particularly the chimps, gorillas and orangutans, and that apes rely primarily on NVC to communicate, comfortably facilitates the ascription of a genetic communicative homology between the species. This could also serve to explain why NVC signals in human usage are so important to understanding pragmatic intent. However, Maryanski (1997) is less inclined to ascribe genetic hardwiring as the cause: she points out that, as vision is the dominant sense among the higher primates, it is not surprising that sign-modality communication between conspecifics would develop as the primary communicator. Despite this, Segerstrale & Molnar (1997:4) are convinced of the primacy of NVC, believing the field of nonverbal communication to be ‘[a] strategic site for demonstrating the inextricable interrelationship between nature and culture in human behavior’. The most extreme expression of this stance is presented by Schiefenhovel (1997), who posits non-sexual social grooming as a universal, albeit one ‘culturally repressed in most post-industrial societies through “professionalization” into the form of massage or the arts of the Barber, and Beautician’. For those readers subscribing to evolutionary hypotheses, the prevalence of gesture communication amongst the higher apes, combined with the assertion that humans evolved from the same early ancestors as the apes, might go some way to explaining why NVC is such a critical aspect of human communication.
The non-verbal cues override the verbal cues:
Back to the problems of communication between humans. For communication to be successful, it must occur in a domain of shared knowledge (Scheu-Lottgen & Hernandez-Campoy 1998), or ‘intersubjectivity’ (Ardissono et al. 1998). It is worth noting that if there is a discrepancy between verbal and nonverbal language, it is the latter message which will be believed. For example, were one to say ‘I am very happy’ while their teeth are clenched and their eyes are wide open above a furrowed brow, we are not likely to believe their statement of happiness. The non-verbal cues override the verbal cues. Hence the oft-used line ‘I can see you’re not telling the truth’. Once more for effect (and affect, actually): the non-verbal cues override the verbal cues. That statement alone suggests the enormous importance of access to visual communicative cues during all important business communications.
Communication does not equate with understanding: Meaning is negotiated:
Communication does not equate with understanding; understanding only occurs when both parties have the same interpretation of the communication symbols. Let’s revisit the earlier, as yet unanswered question: how do we know our pragmatic intent was correctly semantically decoded? In short, we don’t; we infer common understanding by processing communicative cues at appropriate times. According to Gumperz’s (1982, 1992) sociolinguistic theory of conversational inference, meaning is constructed during the course of interaction as listeners interpret the pragmalinguistic aspects of behaviour, which he calls ‘contextualisation cues’; these cues enable the observer/listener to infer the actor/speaker’s intentions (a concept that can also be applied to NVC signals). Thus “meaning in any face to face interaction is always negotiable; it is discovering the grounds for negotiation that requires the participants’ skills” (Gumperz 1982:14). Or, as Fraser (1990) poetically puts it, interactants engage in a ‘conversation contract’. However, a contract is only of worth to both parties if they understand how it works. Therefore any important business interaction must include the maximal number of communicative cues in order to elicit both understanding upon reception (on the part of the interlocutor) and feedback relative to a common understanding. To do so comprehensively requires visual communication: Tartter (1983) shows that in dyadic interactions between two parties of the same language group, if one party does not know they are being monitored on the visual channel (when in fact they are), understanding for both parties decreases below the already paltry 75% mentioned earlier. This shows that our communicative transmissions are channel specific and are only modified when the transmitter is aware of the need for modification (Schneller 1988). This awareness of the need to modify is a key issue in organisational communication. When relying on non-visual cues (such as when using a telephone), the visual cues of miscomprehension on the part of the interlocutor are missed, with the result being a far higher risk of miscommunication.
Social cues are prime: Politeness always supersedes meaning in conversation:
Let’s now examine the extent to which social information influences the uptake of non-social information. Politeness always supersedes meaning in conversation (Lakoff 1973). As with non-social meaning, politeness is multi-modally manifested and conveyed. Politeness and pragmatics are culturally specific constructs learned through exposure to ‘correct’ input (Akert & Panter 1987; Arndt & Janney 1991; Bailey 2004; Klopf & Park 1982; Sapir 1958; Scheu-Lottgen & Hernandez-Campoy 1998). This is particularly true of NVC (Irwin 1996; Sapir 1958) and of the NVC cues that convey politeness. Even if a person scores high on ‘social intelligence’ (Sternberg et al. 1981; Walker & Foley 1973), if they aren’t party to all communicative cues during an interaction, and if the correct politeness markers aren’t observed on all modalities, there is an increased risk of an erroneous affective response being ascribed to the interaction. Furthermore, if a person is judged as being impolite during an interaction, the perceived impoliteness will reside in the mind as the most salient aspect of the interaction; that is, what was said will be overshadowed by how it was said. Therefore, to ensure the meaning of a conversation is not obfuscated by non-exposure to visual politeness markers, it is of prime importance to engage an interlocutor via the maximum possible number of communicative modalities.
Autonomic Nervous System responses to non-verbal communicative cues:
In case the argument for seeing the person you speak to, as you speak to them, hasn’t been made strongly enough, there is even evidence correlating autonomic nervous system responses with certain facial expressions, on the part of both the actor and the interlocutor! How can this be so? A handful of facial expressions are common to all cultures; all of them. Despite the majority of NVC signals being culture specific, there is also evidence to suggest the existence of nonverbal universals (Argyle 1988). Ekman and Keltner (1997) strongly suggest the existence of at least five universal facial expressions across the globe, as well as universal fist tensing and foot stamping to display anger, and the species-wide usage of the ‘eyebrow flash’ as a communicative gesture (Eibl-Eibesfeldt 1979). Here’s the really interesting extension to this understanding: Ekman and Keltner (1997) provide physiological support for these five expressions being universal, in a study showing that producing these expressions out of context creates responses in the autonomic nervous system (ANS) congruent with the ANS responses produced during the experience of the emotions ascribed to them. For example, if one is in a bad mood but forces a wide grin, the nervous system will react as if the subject is indeed happy (see the Facial Feedback Hypothesis). The semantic weight of these universals is further elucidated by Dimberg’s (1997) findings that exposure to these universal expressions produces appropriate ANS and facial responses on the part of the receiver. Emotional expressions are processed rapidly and automatically (Batty & Taylor 2003). This means that, upon perceiving our interlocutor’s emotional expressions, our own autonomic nervous system responds in the same manner as if we ourselves were experiencing the emotion conveyed by their facial expression. How’s that for an example of communicative valence? It doesn’t take a marketing or management genius to understand the utility of this effect during sensitive discussions: smile at your interlocutor to make them feel more comfortable about the interaction. Unless their earlier perception of politeness-oriented cues has completely coloured all subsequent interactive turns on your part, by smiling at them you are very likely to convey a message of comfort directly to their autonomic nervous system. I can’t be alone in considering that quite profound, can I? This effect also goes a long way towards explaining why OOH advertisers revert to a segment-specific smiling face as the centre of their campaign theme when they can’t think of anything more creative. Given their overarching quest to create an emotional connection between the brand and the target consumer, they regularly take the path of least resistance (right next to the path of least imagination) in assailing the consumer with giant smiling faces accompanying brand-specific semiotic markers.
One would hope the preceding paragraphs highlight the necessity of facilitating maximal non-verbal communication during every interaction, to the extent that we should at least be able to see an interlocutor when talking together. Non-verbal communicative proficiency is so important that many studies strongly advise incorporating nonverbal competencies as critical aspects of gaining cross-cultural communicative competency (Eisenchlas & Trevaskes 2003; Schneller 1986; 1988; Spencer-Oatey & Xing 2003). The suggestion is that the learner of a new language is not fully equipped in that language until they understand its non-verbal aspects to the same degree they understand its verbal aspects. Strictly speaking, the previous sentence should finish with the clause ‘…to the same degree they think they understand the verbal aspects’, because, as we just went to great lengths to explain, these perceptions of verbal understanding are overridden by actual non-verbal understanding or misunderstanding. Therefore, according to the general principles put forth by Eisenchlas & Trevaskes (2003), Schneller (1986; 1988) and Spencer-Oatey & Xing (2003), the subject’s actual communicative proficiency in L2 is savagely mediated by their non-verbal proficiency. Until their non-verbal proficiency matches their verbal proficiency, the verbal proficiency is merely a chimera.
Affect before Cognition: Emotion over information:
Further to the various linguistic arguments above regarding the primacy of non-verbal communication, let us now explore how information is processed in general, and how that processing is mediated by the interaction model. Kopytko (2002) posits that ‘The newest developments in neuroscience (Damasio, 1994, 1999; LeDoux, 1996, 2002) show that emotions in the context of decision taking deserve a status equal to, or sometimes even higher than cognition.’ Zajonc (1980) goes so far as to say that affect is usually pre-cognitive. According to this model we find ourselves faced with the primacy of affect over cognition (House 2000; Rachman 1981): our brains ascribe affective judgments to cognition before we cognise. We ascribe an emotional response to something before we know what it is we’re thinking about! Apply this knowledge to social interaction during a meeting, training session or disciplinary hearing: if we’re not armed with all the social information available, the risk of an interaction failure is enormous. The slightest miscommunication can trigger a series of events based on a negative affective response being applied to the interlocutor before one even understands the apparent faux pas being ascribed to them. When we have misunderstood our interlocutor, we feel bad about them before we know that we have misunderstood them, or why. Throughout this process the interlocutor is unaware of what has occurred because they too are denied feedback on the visual plane, so they are denied the opportunity to repair the interaction. This obviously leads to trouble! If we aren’t party to all communicative modalities during an interaction, the risk of misunderstanding is that much higher.
They keep popping up: The better-than-average effect, and cognitive dissonance:
Having considered above the many ways we can misunderstand each other, add the apparently universal ‘better-than-average effect’ (Larwood & Whittaker 1977; Regan et al. 1975; Svenson 1981), whereby individuals feel they possess notably above-average skills in a given field (in this case interpersonal skills), and the ground is laid for the commonly occurring situation described by Jones and Nisbett (1972), in which the interlocutor is ascribed negative personal traits resulting from miscommunication based on incomplete exposure to communicative cues. The interlocutor is blamed because actors tend to ascribe their own undesirable behaviour to environmental factors, whereas observers ascribe it to the actor’s personal characteristics. Thus, when the interlocutor is judged as not adhering to the rules of the ‘conversation contract’, a personality or psychological idiom is readily available to explain their behaviour: their rudeness, stupidity and so on. Adding fuel to this fire of misunderstanding: were a receiver to enter the conversation with an established negative attitude towards the interlocutor, or during the conversation ascribe a negative personal judgment to the interlocutor (which, as we saw earlier, is very easy to do) and then suppress it for reasons of economic expediency, political correctness or avoidance of litigation, the negative attitude will actually be amplified as a consequence of the subterfuge. Because people’s true attitudes toward an object become more accessible immediately after indicating an attitude that they know to be false, such attitude dissimulation might paradoxically cause the initial unfavourable attitude to have a stronger effect on subsequent judgments (Maio & Olson 1998), including judgments of the interlocutor’s communicative capacity. If denied the visual communicative modality, the interlocutor is also denied the possibility of assuaging all this hostility by employing their arsenal of visual NVC tools to repair the interaction by transmitting positive social messages. These phenomena in concert both conceive and amplify negative judgments about our interlocutor. Given the often unconscious ascription of negative affect following miscommunication (as described earlier), the better-than-average effect and cognitive dissonance threaten to exacerbate negative affect, producing disproportionate responses to relatively benign triggers. Repair is made all the easier when both parties to an interaction have access to a comprehensive communicative toolkit, i.e. when they can communicate face-to-face, or at the very least have access to visual communicative cues.
Everyone knows this stuff, which is why we still have face-to-face meetings. But what can be done to make it less costly?
Communication is a big issue. Careers are built on communicative competency, and companies crash as a result of poor communication. Since business began we’ve striven to improve our communicative ability around the commercial function, yet despite all the attempts to mediate the process we’ve not been able to improve on face-to-face communication as the best method of promoting understanding. This is because, as humans, we communicate on many more levels than can be conveyed using just a telephone (and particularly via email). If our interlocutor isn’t party to all the modalities we employ in conversation, there is a heightened risk of miscommunication. Because we are very emotional creatures, and because emotive communication (and miscommunication) takes primacy over non-emotive communication, unless we can avail ourselves of all the communicative devices at our disposal (verbal and non-verbal channels) we increase our chances of failing in our communicative goals. The effect is amplified by the better-than-average effect. These are things we know intuitively. It is my hope this article has provided some of the linguistic and neuroscience arguments behind this seemingly tacit knowledge.
Knowing that face-to-face communication is the most effective way of doing things, companies spend a lot of money transporting important staff to important meetings; for fear of losing communicative efficiency if the required parties can’t communicate in person. Surely this expense can be reduced somehow? If we need to ensure at least audio-visual communicative modalities, then the closest substitute for face-to-face communication is high quality videoconferencing. With recent progress in videoconferencing technology, hardware and software prices are dropping to the point where increasing numbers of companies find videoconferencing installations more cost effective than despatching management nationally and internationally to attend meetings that can occur just as effectively online. We still need to see each other clearly, but we don’t always need to be in the same room when so doing.
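To put the cost argument into concrete form, here is a simple break-even sketch. Every number in it is a placeholder assumption; substitute your own travel, downtime and equipment figures:

```python
import math

# A simple break-even sketch for the cost argument above. Every figure is a
# placeholder assumption; substitute your own travel and equipment costs.
VC_SETUP_COST = 15_000.0            # hypothetical one-off videoconferencing installation
VC_COST_PER_MEETING = 50.0          # hypothetical bandwidth / support cost per meeting
TRAVEL_COST_PER_MEETING = 1_800.0   # hypothetical flights, accommodation and lost time

def breakeven_meetings():
    """Number of meetings after which the installation pays for itself."""
    saving_per_meeting = TRAVEL_COST_PER_MEETING - VC_COST_PER_MEETING
    return math.ceil(VC_SETUP_COST / saving_per_meeting)

print(f"Break-even after roughly {breakeven_meetings()} face-to-face trips avoided")
```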
References
Aitchison, J. (1976). The Articulate Mammal: An introduction to psycholinguistics. London: Unwin Hyman.
Akert, R. M., Panter, A. T. (1987). Extraversion and the ability to decode nonverbal communication. Personality and Individual Differences Vol.9, No.6: 965-972.
Ardissono, L., Boella, G., Damiano, R. (1998). A plan based model of misunderstandings in cooperative dialogue. International Journal of Human-Computer Studies 48:649-679.
Argyle, M. (1988). Bodily Communication. Madison, Connecticut: International Universities Press.
Arndt, H., Janney, R. W. (1991). Verbal, prosodic, and kinesic emotive contrasts in speech. Journal of Pragmatics 15: 521-549.
Bailey, B. (2004). Misunderstanding. In Duranti, A. (Ed.). A Companion to Linguistic Anthropology. :395-413. Oxford: Blackwell Publishers Ltd..
Barnes, M. L., Sternberg, R. J. (1989). Social Intelligence and Decoding Nonverbal Cues. Intelligence 13: 263-287.
Bates, E., Bretherton, I. & Snyder, L. (1988). From first words to grammar: individual differences and dissociable mechanisms. New York: Cambridge University Press.
Bates, E., Bretherton, I., Shore, C., McNew, S. (1983). Names, Gestures, and Objects: Symbolization in Infancy and Aphasia. In Nelson, K. E. (Ed.), Children’s Language 4: 59-123. Hillsdale, NJ: Lawrence Erlbaum.
Batty, M., Taylor, M. (2003). Early processing of the six basic facial emotional expressions. Cognitive Brain Research 17: 613-620.
Bazzanella, C., Damiano, R. (1999). The interactional handling of misunderstanding in everyday conversations. Journal of Pragmatics 31: 817-836.
Bilbow, G. T. (1997). Cross-cultural impression management in the multicultural workplace: The special case of Hong Kong. Journal of Pragmatics 28: 461-487.
Birdwhistell, R. (1970). Kinesics and Context. Philadelphia: University of Pennsylvania Press.
Birdwhistell, R. L. (1978). Toward Analysing American Movement. In Weitz, S. (Ed.), Nonverbal Communication: Readings with Commentary, 2nd edition. New York: Oxford University Press.
Bretherton, I. & Bates, E. (1984). The development of representation from 10 to 28 months: Differential stability in language and symbolic play. In R.N. Emde & R.J. Harmon (Eds.), Continuities and discontinuities in development. New York: Plenum Press.
Damasio, A. R. (1994). Descartes’ Error: Emotion, Reason and the Human Brain. New York: Grosset/Putnam.
Damasio, A. R. (1999). The Feeling of What Happens: The Body and Emotions in the Making of Consciousness. New York: Harcourt.
De Saussure, F. (1916). The nature of the linguistic sign. In Burke, L., Crowley, T., Girvin, A. (Eds.), The Routledge Language and Cultural Theory Reader. London and New York: Routledge. 13-21.
Dimberg, U. (1997). Psychophysiological Reactions to Facial Expressions. In Segerstrale, U., Molnar, P, (Eds.), Nonverbal Communication: Where Nature Meets Culture: 48-60. Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.
Eibl-Eibesfeldt, I. (1979). Similarities and Differences between Cultures in Expressive Movements. In Weitz, S. (Ed.), Nonverbal Communication: Readings with Commentary, 2nd edition. New York: Oxford University Press.
Eisenchlas, S., Trevaskes, S. (2004). Creating Cultural spaces in the Australian University Setting: A Pilot Study of Structured Cultural Exchanges. Australian Review of Applied Linguistics 2004:84-100.
Ekman, P., Friesen, W. (1969). The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica 1: 49-98.
Ekman, P., Keltner, D. (1997). Universal Facial Expressions of Emotion: An old Controversy and New Findings. In Segerstrale, U., Molnar, P, (Eds.), Nonverbal Communication: Where Nature Meets Culture:27-47. Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.
Festinger, L. (1957). Cognitive Dissonance Theory. Evanston, IL: Harper & Row.
Finnegan, R. (2002). Communicating. London & New York: Routledge.
Fraser, B. (1990). Perspectives on Politeness. Journal of Pragmatics 14: 219-236.
Goldschmidt, W. (1997). Nonverbal Communication and Culture. In Segerstrale, U., Molnar, P, (Eds.), Nonverbal Communication: Where Nature Meets Culture: 229-244. Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.
Gumperz, J. (1982). Discourse Strategies. Cambridge: Cambridge University Press.
Gumperz, J. (1992). Contextualization and Understanding. In Duranti, A. & Goodwin, C (Eds.), Rethinking Context:229-252.
Gunter, T. C., Bach, P. (2004). Communicating hands: ERPs elicited by meaningful symbolic hand postures. Neuroscience Letters 372:52-56.
Hale, W. W., Jansen, J. H. C., Bouhuys, A. L., Jenner, J. A., van den Hoofdakker, R. H. (1997). Non-verbal behavioral interactions of depressed patients with partners and strangers: The role of behavioral social support and involvement in depression persistence. Journal of Affective Disorders 44:111-122.
Haviland, J. B. (2004). Gesture. In Duranti, A. (Ed.). A Companion to Linguistic Anthropology. :197-221. Oxford: Blackwell Publishers Ltd..
House, J. (2000). Understanding misunderstanding:A Pragmatic-discourse Approach to Analysing Mismanaged Rapport in Talk Across Cultures. In Spencer-Oatey. H. (Ed.), Culturally Speaking: Managing rapport through talk across cultures: 145-164. London & New York: Continuum.
Irwin, H. (1996). Communicating with Asia: Understanding People and Customs, St Leonards: Allen and Unwin.
Jones, E. E., Nisbett, R. E. (1972). The actor and the observer: Divergent perceptions of the causes of behavior. In E. E. Jones, D. E. Kanhouse, H. H. Kelley, R. E. Nisbett, S. Valins & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior: 79-94. Morristown, N.J.: General Learning Press.
Kelley, H. H. (1972). Causal schemata and the attribution process. In E. E. Jones, D. E. Kanhouse, H. H. Kelley, R. E. Nisbett, S. Valins & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior: 79-94. Morristown, N.J.: General Learning Press.
Kendon, A. (1978). Movement Coordination in Social Interaction: Some Examples Described. In Weitz, S. (Ed.), Nonverbal Communication: Readings with Commentary, 2nd edition, New York: Oxford University Press.
Kendon, A. (1997). Gesture. Annual Review of Anthropology 26: 109-128.
Klopf, D., Park, M. S. (1982). Cross-Cultural Communication: An Introduction to the Fundamentals, Seoul: Han Shin Publishing Co.
Kopytko, R. (2002). The affective context in non-Cartesian pragmatics: a theoretical grounding. Journal of Pragmatics 36: 521-548.
Lakoff, R. (1973). The logic of politeness: or, minding your p’s and q’s. In Corum, C. (Ed.), Papers from the Ninth Regional Meeting of the Chicago Linguistic Society: 292-305.
Larwood, L., Whittaker, W. (1977). Managerial Myopia: self serving biases in organizational planning. Journal of Applied Psychology 62: 194-198.
Lausberg, H., Kita, S. (2003). The content of the message influences the hand choice in co-speech gestures and gestures without speaking. Brain and Language 86: 57-69.
LeDoux, J. (1996). The Emotional Brain: The Mysterious Underpinnings of Emotional Life. New York: Simon and Schuster.
LeDoux, J., (2002). Synaptic Self: How Our Brains Become Who We Are. New York: Viking.
Li, H. Z. (1996). Communicating Information in Conversations: A Cross-Cultural Comparison. International Journal of Intercultural Relations Vol. 23, No. 3: 387-409.
Lippa, R. (1998). The Nonverbal Display and Judgement of Extraversion, Masculinity, Femininity, and Gender Diagnosticity: A Lens Model Analysis. Journal of Research in Personality 32: 80-107.
Maio, G.R., Olson, J. M. (1998). Attitude Dissimulation and Persuasion. Journal of Experimental Social Psychology 34:182-201.
Maryanski, A. (1997). Primate Communication and the Ecology of a Language Niche. In Segerstrale, U., Molnar, P, (Eds.), Nonverbal Communication: Where Nature Meets Culture: 191-210. Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.
McClave, E. (2000). Linguistic functions of head movements in the context of speech. Journal of Pragmatics 32: 855-878.
McNeill, D. (1992). Hand and Mind. What Gestures Reveal about Thought, Chicago: University of Chicago Press.
Payrato, L. (1993). A pragmatic view on autonomous gestures: A first repertoire of Catalan emblems. Journal of Pragmatics 20:193-216.
Popper, K. (1945). The Open Society and Its Enemies. London & New York: Routledge Classics.
Preuschoft, S., Van Hoof, J.A.R.A.M. (1997). The Social Function of “Smile” and “Laughter”: Variations Across Primate Species and Societies. In Segerstrale, U., Molnar, P, (Eds.), Nonverbal Communication: Where Nature Meets Culture: 171-190. Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.
Rachman, S. (1981). The Primacy of Affect: Some Theoretical Implications. Behav. Res. & Therapy 19: 279-290.
Regan, J. W., Gosselink, J., Ulsh, E. (1975). Do people have inflated views of their own ability? Journal of Applied Psychology 31:295-301.
Russell, B. (1946). A History of Western Philosophy. London: Allen & Unwin.
Sandstrom, K. L., Lively, K. J., Martin, D. D., Fine, G. A. (2014). Symbols, Selves, and Social Reality: A Symbolic Interactionist Approach to Social Psychology and Sociology, 4th edition. Oxford: Oxford University Press.
Scheu-Lottgen, U. D., Hernandez-Campoy, J. M. (1998). An Analysis of Sociocultural Miscommunication: English, Spanish and German. International Journal of Intercultural Relations Vol 22. No.4: 375-394.
Schiefenhovel, W. (1997). Universals in Interpersonal Interactions. In Segerstrale, U., Molnar, P, (Eds.), Nonverbal Communication: Where Nature Meets Culture: 61-86. Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.
Schneller, R. (1985). Changes in the understanding and use of culture bound non-verbal messages. 16th Conference of the Israeli Sociological Society.
Schneller, R. (1989). Intercultural and Intrapersonal Processes and Factors of Misunderstanding: Implications for Multicultural Training. International Journal of Intercultural Relations 13:465-484.
Segerstrale, U., Molnar, P. (1997). Nonverbal Communication: Crossing the Boundary Between Culture and Nature. In Segerstrale, U., Molnar, P, (Eds.), Nonverbal Communication: Where Nature Meets Culture: 1-27. Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.
Spencer-Oatey, H., Jiang, W. (2003). Explaining cross-cultural pragmatic findings: moving from politeness maxims to sociopragmatic interactional principles. Journal of Pragmatics 35:1633-1650.
Spencer-Oatey, H., Xing, J. (2003). Managing Rapport in Intercultural Business Interactions: a comparison of two Chinese-British welcome meetings. Journal of Intercultural Studies Vol.24, No. 1: 33-47.
Suomi, S. J. (1997). Nonverbal Communication in Nonhuman Primates: Implications for the Emergence of Culture. In Segerstrale, U., Molnar, P, (Eds.), Nonverbal Communication: Where Nature Meets Culture: 131-150. Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.
Svenson, O. (1981). Are we all less risky and more skillful than our fellow drivers? Acta Psychologica 47:143-148.
Tartter, V.C. (1983). The Effects of Symmetric and Asymmetric Dyadic Visual Access on Attribution During Communication. Language and Communication 3: 1-10.
Taylor, J. R. (1995). Linguistic Categorization: Prototypes in Linguistic Theory. London: Clarendon Press.
Thompson, J. C., Abbott, D. F., Wheaton, K. J., Syngentiotis, A., Puce, A. (2004). Digit representation is more than just hand waving. Cognitive Brain Research 21: 412-417.
Ting-Toomey, S. (1999). Communicating Across Cultures. New York & London: The Guilford Press.
Turner, J. H. (1997). The Evolution of Emotions: The Nonverbal Basis of Human Social Organisation. In Segerstrale, U., Molnar, P, (Eds.), Nonverbal Communication: Where Nature Meets Culture: 211-230. Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.
Watson O’Reilly, A., Painter, K. M., Bornstein, M. H. (1997). Relations Between Language and Symbolic Gesture Development in Early Childhood. Cognitive Development 12: 185-197.
Zajonc, R. (1980). Feeling and thinking: preferences need no inferences. American Psychologist 35: 151–175.