Research Article - (2018) Volume 4, Issue 3
Philippe Granato1*, Vinekar Shreekumar2, Olivier Godefroy3, Jean-Pierre Van Gansberghe4 and Raymond Bruyer5
1Pôle Santé Mentale et Addictologie, Centre Psychothérapeutique de Maubeuge, Centre Hospitalier Sambre Avesnois (CHSA), Maubeuge, France
2Department of Psychiatry and Behavioral Sciences, University of Oklahoma College of Medicine, Oklahoma City, OK, USA
3Centre Hospitalier Universitaire d’Amiens, Neurologie, Amiens, France
4Data processing consultant in mathematics, Brussels, Belgium (deceased, May 2006)
5Unité de Neurosciences cognitives (NESC), Place du cardinal Mercier 10, B-1348 Louvain-la-Neuve, Belgium
*Corresponding Author:
Philippe Granato
Centre Hospitalier Sambre Avesnois, Pôle de
Santé Mentale et Addictologie, Unité de
Psychiatrie Générale, 13 boulevard Pasteur,
59600, Maubeuge, France.
E-mail: philippe.granato@gmail.com
Received date: June 02, 2018; Accepted date: December 11, 2018; Published date: December 18, 2018
Citation: Granato P, Shreekumar V, Godefroy O, Gansberghe JPV, Bruyer R (2018) Measurement of the Ability to Recognize Facial Emotions over the Adult Lifetime in a Supra-Normal Sample. Clin Psychiatry Vol.4 No.3:55
Copyright: © 2018 Granato P, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
This study on a culturally homogeneous small sample of white Caucasian French population opens many areas for future research, for replication in various other groups, clinical diagnostic studies, and early therapeutic interventions.
Keywords
Recognition; Emotions; Measurement; Aging; Healthy subjects; MARIE; Supra-normal
Introduction
The ability to express and recognize facial emotions (RFE) precedes language [1,2] and reflects an innate competence [3-13]. Emotions enable a high level of social interaction throughout our lives [14-16]. During the aging process, it appears that 1) control of our emotions improves [17] and 2) the expression and experience of negative emotions decrease, whereas the ability to express and experience positive emotions increases [17]. According to Malatesta et al. [18], the ability to recognize anger, fear, and sadness decreases with age. Moreno et al. [19] state that the ability to recognize joy improves, while the ability to recognize sadness worsens; in the opinion of these two authors, the recognition of disgust and surprise is higher. Calder [20,21] reports that aging 1) leads to a decrease in the recognition of fear, sadness, and anger and 2) has no effect on the recognition of disgust. Bucks et al. [22] note that older subjects have difficulty recognizing anger. Sullivan and Ruffman [23] report a reduced ability to recognize anger and sadness, while joy is well recognized by older subjects.
Research on RFE is heterogeneous with respect to: 1) the demographic features of the samples (sex, age, level of education, ethnicity, linguistic and cultural background, and cognitive level), 2) the canonical emotions used (number, type), 3) the measuring devices, and 4) the paradigms and experimental situations used [24-29].
The aims of this work were: 1) to demonstrate the validity of a computerized measuring device (M.A.R.I.E.) for the RFE of the standardized representations of emotions in the series prepared by Ekman, without the use of language; 2) to examine the decision process involved in the recognition of facial emotions at different ages; 3) to study the effect of demographic factors, and; 4) to observe how the aging process affects the recognition of facial emotions.
Functional imaging (fMRI) localizes the brain regions activated during the recognition of static (in other words canonical or prototypical) emotions and of dynamic, morphed (blended or “intermediate”) emotions that make up an emotional series (SE) [30-32]. Our work aims to measure, in numerical form, the ability for facial visual recognition of two “static” (unblended) emotions (pictures 1 and 19) and seventeen “dynamic” (blended) emotions (pictures 2 to 18). Our results should make it possible, in future work, to refine the statistical correlations between localized brain activation and 1) emotional typology, 2) emotional intensity, 3) static images, or 4) dynamic images.
Methods
The tests were set up using software developed for this study (Method of Analysis and Research of the Integration of Emotions: M.A.R.I.E.) [33-35]. It was inspired by a previously used method [36-39]. M.A.R.I.E. examines the decision making of the subject when shown a photograph (“image”) of a face that expresses a “canonical” or “intermediate” emotion. The canonical emotions [5] are anger, disgust, joy, neutral, fear, surprise, and sadness, all of which are expressed on the face of a "blonde woman", a "brunette woman", and a "man". We created nine series of emotions (SE): anger-fear, anger-sadness, joy-sadness, neutral-anger, neutral-disgust, neutral-joy, neutral-fear, neutral-surprise, and neutral-sadness. Each SE was made up of the two canonical images, numbers 1 and 19, and 17 intermediate images created by blending or merging the two canonical images using computer technology [6]. Photographs of the "brunette woman" and the "man" were lent by Paul Ekman.
The methodology is discussed in detail in previous articles by the principal author and the current authors in the Open Journal of Psychiatry [33]. It is not practical to describe it in detail here, but it is very simple for subjects to grasp. The other tools used by Ekman and others are computer programs for morphing two canonical images by graded mixing of pixels; they are easily and widely available on the Internet.
Participants
Between April 2000 and April 2005, we enrolled 204 healthy, right-handed, native French-speaking volunteers (right-handedness assessed with the 10 items of the Edinburgh Handedness Inventory, Oldfield, 1971) [40], between the ages of 20 and 70 years, at the Clinical Investigation Centre of the Lille University and Regional Hospital complex. We formed seven groups by age bracket: three ten-year brackets for subjects between the ages of 20 and 50 years, and four five-year brackets for subjects between the ages of 51 and 70 years. Thirty subjects were recruited in each age bracket (except for the 66-70 bracket, n=24), so that the effects of sex, age, and level of education could be evaluated. Each subject provided written informed consent in accordance with the requirements of the ethics committee overseeing research with human subjects.
This mono-centric study (conducted at a single site, as opposed to a multi-center study) consisted of a controlled, randomized test ("randomized" refers to the random order of the emotional series; subjects were not assigned to separate experimental and control groups), carried out as a single-blind study of parallel groups and without any direct benefit to any individual subject. Subjects were blinded to the goals and purposes of the study and were given no knowledge of what responses were being measured or evaluated. Eyesight and hearing, with or without aid, were optimal. Participants were given a medical consultation to record their history and current medications and to check for neurological disease, diabetes, hypertension, and psychiatric problems, amongst others. This consultation consisted of 1) a structured interview, the Mini International Neuropsychiatric Interview (MINI) [41], 2) the Mini Mental Status Examination (MMSE) [42], 3) the Hamilton scale for anxiety (HAMA) [43,44], 4) the Hamilton scale for depression (HDRS) [45], and 5) a blood and urine drug test. Subjects over the age of 50 were also given a cognitive assessment consisting of 1) the Mattis Dementia Rating Scale [46] and 2) an episodic memory test, the Grober and Buschke scale (Table 1) [47-49].
| Age bracket | n | Sex (M/F) | Age (years) | Level of education (1/2/3) | HDRS | HAMA | MMSE | Mattis | Grober and Buschke |
|---|---|---|---|---|---|---|---|---|---|
| 20-30 | 30 | 15/15 | 23.2 ± 3.13 | 30/0/0 | 8 ± 1.1 | 10 ± 1.1 | 30 | | |
| 31-40 | 30 | 15/15 | 35.3 ± 2.42 | 5/0/25 | 8 ± 1 | 9 ± 1.3 | 30 | | |
| 41-50 | 30 | 15/15 | 44.8 ± 2.53 | 7/0/23 | 7 ± 0.9 | 10 ± 1.2 | 30 | | |
| 51-55 | 30 | 15/15 | 53.2 ± 1.29 | 10/1/19 | 9 ± 1 | 11 ± 0.8 | 30 | 144 ± 0.8 | 16 ± 1 / 16 ± 1 / 16 ± 1.2 / 16 ± 1.6 / 16 ± 1.4 |
| 56-60 | 30 | 15/15 | 57.9 ± 1.44 | 8/1/21 | 8 ± 0.8 | 10 ± 1 | 30 | 143 ± 1.8 | 16 ± 1 / 16 ± 1.8 / 15 ± 1.5 / 15.5 ± 2.1 / 15.2 ± 2.1 |
| 61-65 | 30 | 15/15 | 63.2 ± 1.37 | 12/1/17 | 8 ± 1.1 | 9 ± 1.1 | 30 | 142 ± 3.1 | 16 ± 1.1 / 15 ± 1.4 / 16 ± 1.9 / 14 ± 1.3 / 15 ± 2.1 |
| 66-70 | 24 | 14/10 | 67.9 ± 1.59 | 7/3/14 | 9 ± 1 | 9 ± 0.8 | 30 | 142 ± 3.8 | 16 ± 1 / 16 ± 2 / 15 ± 1.9 / 15 ± 2.3 / 15.3 ± 2.7 |
| Total | 204 | 104/100 | | 49/6/149 | | | | | |
Table 1: Demographic characteristics and clinical/cognitive screening scores (HDRS, HAMA, MMSE, Mattis, Grober and Buschke) of the 204 participants, by age bracket.
Inclusion criteria were: men and women between 20 and 70 years of age; women not menstruating at the time of the RFE testing; a score on the HDRS scale below 10; a score on the HAMA scale below 14; visual acuity (corrected or uncorrected) of 20/20; blood and urine drug screening negative for drugs and alcohol, with no ongoing medical treatment; the ability to understand and sign the protocol and to comprehend the methodology; and health insurance coverage. Exclusion criteria were the presence of neurological or psychological disorders, belonging to the same family as another participating subject, use of any psychoactive drugs for a prolonged period before the visit, and pregnancy. Any subject who did not meet all inclusion criteria, or who met any exclusion criterion, was not included in the study.
Materials and Methodology
Setting up of stimuli
An “intermediate emotion” does not completely represent either canonical emotion “A” or canonical emotion “B”; it is a controlled “combination” or morph created by computerized blending, in which the proportions of pixels taken from the two canonical images “A” and “B” vary inversely. Each SE was specified by the increasing gradation of pixels from emotion “B”: 0%, 10%, 20%, 30%, 35%, 38%, 41%, 44%, 47%, 50%, 53%, 56%, 59%, 62%, 65%, 70%, 80%, 90%, and 100% (Figure 1).
Figure 1: The joy-sadness SE, created by merging the two canonical images (stimuli 1 and 19) into intermediate images. The 19 ES together make up one SE.
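As an illustration of how such an intermediate image can be produced, the following minimal Python sketch blends two canonical photographs pixel by pixel at the gradations listed above. It is not the original M.A.R.I.E. code; the file names and the use of the Pillow and NumPy libraries are assumptions made purely for illustration.

```python
# Minimal sketch (not the original M.A.R.I.E. software): pixel-wise blending of
# two canonical photographs "A" and "B" at the gradations used for one
# emotional series. File names are hypothetical placeholders.
import numpy as np
from PIL import Image

GRADATIONS_B = [0, 10, 20, 30, 35, 38, 41, 44, 47, 50,
                53, 56, 59, 62, 65, 70, 80, 90, 100]  # % of emotion "B"

def blend(img_a: Image.Image, img_b: Image.Image, pct_b: float) -> Image.Image:
    """Return an intermediate image containing pct_b % of image B."""
    a = np.asarray(img_a, dtype=np.float32)
    b = np.asarray(img_b, dtype=np.float32)
    mix = (1.0 - pct_b / 100.0) * a + (pct_b / 100.0) * b
    return Image.fromarray(mix.round().astype(np.uint8))

if __name__ == "__main__":
    joy = Image.open("joy_canonical.png").convert("RGB")          # canonical image 1
    sadness = Image.open("sadness_canonical.png").convert("RGB")  # canonical image 19
    series = [blend(joy, sadness, p) for p in GRADATIONS_B]       # 19 stimuli of one SE
    for i, im in enumerate(series, start=1):
        im.save(f"joy_sadness_{i:02d}.png")
```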
Each subject was seated in the same quiet room, facing a laptop screen on which each image (10 cm × 18 cm), or “stimulus” (emotion stimulus, ES), was shown. The stimulus was shown in the center of the upper half of the screen. On the lower half of the screen, 1) to the left of the stimulus, the canonical image “A” was displayed (at a 5° angle) with the name of the canonical emotion “A” labeled underneath, and 2) to the right of the stimulus, the canonical image “B” was displayed (at a 5° angle) with the name of the canonical emotion “B” labeled underneath (Figure 2). Images of the blonde woman were presented first, followed by images of the brunette woman, and finally images of the man.
Figure 2: Experimental situation: the subject must decide whether the central image corresponds to the image on the left or on the right (the subject clicks the left or right mouse button to make the choice).
The random order of the stimuli was the same for each subject (9 SE × 19 images per SE = 171 stimuli for each poser). The labelled canonical images of each series were displayed at the two ends, flanking the stimuli, except in the case of the series for the man’s face.
Procedure
The subjects undertook a forced-choice binary test by pressing, with their right hand, the left or right button of the computer mouse, using the index or middle finger. The stimulus remained on the display until the subject responded. The order in which the SEs were displayed was: anger-fear, anger-sadness, joy-sadness, neutral-anger, neutral-disgust, neutral-joy, neutral-fear, neutral-surprise, and neutral-sadness. The first three combinations were called “bipolar” and the last six were called “unipolar”, due to the presence of neutrality as canonical expression A. A pause of one minute was applied after each SE, similar to the time estimated for repolarization of the brain after an electrical stimulus. Each SE took 2 minutes on average.
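To make the forced-choice principle concrete, here is a deliberately minimal console sketch in Python. It is not the M.A.R.I.E. interface (which displayed photographs and recorded mouse clicks); it only illustrates the logic of a binary choice that remains open until the subject responds.

```python
# Minimal console sketch of the forced-choice principle (illustration only).
# For each stimulus the subject must answer "a" (left) or "b" (right);
# the prompt repeats until a valid response is given.
def run_series(series_name: str, stimuli: list[str]) -> list[str]:
    responses = []
    for i, stim in enumerate(stimuli, start=1):
        answer = ""
        while answer not in ("a", "b"):                 # forced binary choice
            answer = input(f"{series_name} stimulus {i} ({stim}): [a/b]? ").strip().lower()
        responses.append(answer.upper())
    return responses

if __name__ == "__main__":
    answers = run_series("joy-sadness", [f"stimulus_{k:02d}.png" for k in range(1, 20)])
    print(answers)
```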
The nine following “measures” were taken into account: 1) measures No. 1, 2, 8, and 9 corresponded to the responses to stimuli No. 1, 2, 18, and 19, and 2) measures No. 3, 4, 5, 6, and 7 corresponded to the average of the responses to stimuli No. 3, 4, and 5; No. 6, 7, and 8; No. 9, 10, and 11; No. 12, 13, and 14; and No. 15, 16, and 17, respectively. The mean saturation in emotion B of the nine measures was therefore 0%, 10%, 28.3%, 41%, 50%, 59%, 71.7%, 90%, and 100%, respectively.
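The following short Python sketch expresses this mapping from the 19 stimulus responses of one series to the nine measures, and recomputes their mean “B” saturations from the gradations given earlier; variable and function names are illustrative only.

```python
# Sketch: collapsing the 19 stimulus responses of one series into the 9 "measures".
# GRADATIONS_B lists the % of emotion "B" in stimuli 1..19.
GRADATIONS_B = [0, 10, 20, 30, 35, 38, 41, 44, 47, 50,
                53, 56, 59, 62, 65, 70, 80, 90, 100]

# Stimulus indices (1-based) grouped by measure, as described in the text.
MEASURE_GROUPS = {
    1: [1], 2: [2],
    3: [3, 4, 5], 4: [6, 7, 8], 5: [9, 10, 11],
    6: [12, 13, 14], 7: [15, 16, 17],
    8: [18], 9: [19],
}

def measures(responses_b: list[int]) -> dict[int, float]:
    """responses_b[i] is 1 if the subject answered "B" to stimulus i+1, else 0.
    Returns the proportion of "B" answers per measure."""
    return {m: sum(responses_b[i - 1] for i in idx) / len(idx)
            for m, idx in MEASURE_GROUPS.items()}

# Mean "B" saturation per measure: 0, 10, 28.3, 41, 50, 59, 71.7, 90, 100 %.
mean_saturation = {m: sum(GRADATIONS_B[i - 1] for i in idx) / len(idx)
                   for m, idx in MEASURE_GROUPS.items()}
```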
All of the subjects first completed a practice trial on a control task: the same binary forced-choice decision, but applied to a series of intermediate geometric images forming a continuum between a square and a circle. The control task did not involve the perception of emotion.
Reproducibility study
In order to verify the reliability of our procedure in which each subject only responded once to each stimulus, we carried out a “control test – retest” on a sample of 13 healthy subjects (6 women and 7 men) between the ages of 29 and 41 years (35 ± 5 years). These subjects took the test five times at 5-minute intervals. Inclusion and exclusion criteria were identical to those of the main sample. The individual standard for the consistency of the answers throughout the repetitions was measured at a minimum of 4 identical answers out of 5 (or at least, a concordance of 80%). The average level of concordance was 93.1% (blonde woman=93%, brunette woman=92.5%, man=93.9%). The benchmark of 80% concordance was passed in 481 / 513 cases (it was 100% in 108 cases).
The 32 remaining cases were tested against the 80% value by means of a one-tailed Student’s t-test: no result was significantly lower than this value. It was therefore confirmed that administering each SE once was sufficiently valid and reliable. The proportion of discordant responses was not significantly influenced by sex (F(1,26)=1.53; p=0.218), age (F(6,18)=1; p=0.405), or level of education (F(1,28)=1.64; p=0.2).
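A minimal sketch of the test-retest concordance computation described above (five repetitions of the test, with a benchmark of at least 4 identical answers out of 5, i.e., 80% concordance). The data layout is assumed for illustration.

```python
# Sketch: per-stimulus concordance over 5 repetitions of the same test.
# `repetitions` is assumed to be a list of 5 response vectors ("A"/"B"),
# one per pass, all of the same length.
from collections import Counter

def concordance(repetitions: list[list[str]]) -> list[float]:
    """For each stimulus, the proportion of repetitions agreeing with the
    modal (most frequent) answer; 0.8 corresponds to 4 identical answers out of 5."""
    per_stimulus = zip(*repetitions)
    return [Counter(answers).most_common(1)[0][1] / len(answers)
            for answers in per_stimulus]

# Example: a stimulus answered B, B, B, B, A over the 5 passes has concordance 0.8.
print(concordance([["B"], ["B"], ["B"], ["B"], ["A"]]))  # -> [0.8]
```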
Statistics Used For This Study
The results were analyzed using analyses of variance (ANOVAs), for which the “inter-subjects” factor was the age bracket (n=7) and the “intra-subjects” factors were face (n=3), emotional series (n=9), and measure (n=9). Post hoc analysis was carried out using the Bonferroni test (inter-subject factor) and by comparisons (for repeated measures). For multivariate analysis, Wilks’ lambda was used. The alpha risk was fixed at 5%. The dependent variable was the number of "B" answers. We used the software SPSS v.11. The comparison of the qualitative demographic variables was carried out using the Chi-square test.
We analyzed the subjects' performances using multivariate analysis. We carried out an analysis of covariance, with the level of education as the covariate and the age bracket (n=7) as the inter-subject factor. The intra-subject factors were face (n=3), emotional series (n=9), and measure (n=9). The multivariate test (Wilks' lambda) was used, and subsequent analyses were performed with Bonferroni correction at a significance level of 0.001.
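The full design (one between-subject factor and three within-subject factors, with a covariate) was analysed in SPSS. As a rough, simplified illustration in Python, the sketch below runs a mixed ANOVA with only the age bracket (between-subject) and measure (within-subject) factors, using the pingouin package; the long-format data file and column names are assumptions.

```python
# Simplified illustration only: a mixed ANOVA with one between-subject factor
# (age bracket) and one within-subject factor (measure). The original analysis
# used SPSS and also included face and emotional series as within factors.
import pandas as pd
import pingouin as pg

# Assumed long format: one row per subject x measure, with the proportion of "B"
# answers as the dependent variable.
df = pd.read_csv("marie_long_format.csv")   # hypothetical file
# Expected columns: subject, age_bracket, measure, prop_B

aov = pg.mixed_anova(data=df, dv="prop_B",
                     within="measure", subject="subject",
                     between="age_bracket")
print(aov.round(4))
```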
Results of Statistical Analysis
Performances did not differ according to age bracket (F(6,19669)=1.35; p=0.2) (Figure 3) or according to level of education (F(1,1587)=0.6; p=0.4). The effect of measure was significant (F(8,189)=3896; p=0.0001) (Figure 4), as was the effect of face (F(2,195)=10; p=0.0001). The effect of the face was linked to a significantly higher level of “B” responses for the images of the man (62%) than for the images of the blonde (60.3%) and brunette women (59.7%) (Figure 5). The level of recognition of emotion "B" depended significantly on the degree of pixel saturation of emotion "B" (F(8,189)=28; p=0.0001) (Figure 6).
Figure 3: Variation in level of performances for recognition of "B" (%) according to age bracket (mean and standard deviation).
Figure 4: Variation of ability to recognize emotion (%) depending on the measure (M. for measure) (mean and standard deviation).
Figure 5: Variation of ability to recognize emotion (%) depending on the face (mean and standard deviation).
The effect of the measure resulted from an increase in the number of “B” responses as the saturation of emotion “B” in the stimulus increased. The relationship between these two variables was not proportional: the graph of the responses has the appearance of a sigmoid curve (Figure 6). The maximum number of responses was reached at measure number 7 (stimuli 15, 16, and 17 of the series). A 100% saturation of emotion "B" did not lead to 100% recognition by the subjects, and the same was true of emotion "A". As a result, it was concluded that 1% of this supra-normal sample was unable to recognize the canonical emotions.
Figure 6: Level of recognition of emotion "B" (in bold) as a function of the degree of pixel saturation of emotion "B" (in italics).
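Since the response curve has the appearance of a sigmoid, one way to quantify it is to fit a logistic function of the “B” saturation to the proportion of “B” answers. The sketch below does this with SciPy; the input file is a hypothetical placeholder that would hold the observed proportions per measure, not data reproduced from this study.

```python
# Sketch: fitting a logistic (sigmoid) curve to the proportion of "B" answers
# as a function of the mean saturation in emotion "B" of each measure.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, lower, upper, x50, slope):
    """Four-parameter logistic: asymptotes `lower`/`upper`, midpoint `x50`."""
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (x - x50)))

# Mean "B" saturation of the nine measures (in %).
saturation = np.array([0, 10, 28.3, 41, 50, 59, 71.7, 90, 100])
# prop_b must hold the observed proportion of "B" answers per measure
# (placeholder file; e.g. the per-subject output of the measures() sketch
# above, averaged over subjects).
prop_b = np.loadtxt("prop_b_by_measure.txt")   # hypothetical file

params, _ = curve_fit(logistic, saturation, prop_b,
                      p0=[0.05, 0.95, 50.0, 0.1], maxfev=10000)
print(dict(zip(["lower", "upper", "x50", "slope"], params)))
```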
The effect of the series was linked to the number of responses to “B”: 1) significantly lower for the neutral-anger series (51.67%) than all other SEs and 2) significantly higher for neutral-joy (68.71%) than all other SEs (Figure 7).
Figure 7: Variation of ability to recognize emotion (%) depending on emotional series (average and standard deviation).
The significant “face*series” interaction (F (16, 181)=11; p=0.0001) was linked 1) to a lower level of recognition of the anger-sadness and the joy-sadness series for the blonde woman and a higher recognition of the neutral-joy, and neutral-fear series; 2) for the brunette woman, to a lower recognition of the neutral-surprise and neutral-anger series, and a higher recognition for the neutral-sadness series; 3) for the man, to a lower recognition of the neutral-anger and a higher recognition for the neutral-joy series (Figure 8).
Figure 8: Ratio of responses "B" for each face and for each emotional series.
The significant “age bracket*face*series” interaction (F(96, 1032)=2; p=0.0001) was mainly linked to a large dispersion of the “B” responses according to age. In addition, the responses for the entire set of emotional series of images of the blonde woman were dispersed, except for neutral-disgust and neutral-fear (Figure 9a). For images of the brunette woman, all of the emotional series were dispersed, except for neutral-fear (Figure 9b). For images of the man, all of the emotional series were dispersed, except for neutral-disgust and neutral-fear (Figure 9c). Regardless of age bracket, the response rates for “B” were very similar for images of 1) the blonde woman: neutral-disgust, neutral-fear; 2) the brunette woman: neutral-fear; 3) the man: neutral-disgust, neutral-fear. The neutral-disgust and neutral-fear series thus had recognition thresholds independent of age, and the responses were unaffected by the aging process. The scattered nature of the results for the neutral-joy series was significant: the recognition of the neutral-joy series improved with age and depended on the face presented in the images.
Figure 9A: Ratio of responses "B" of the blonde woman by age bracket and emotional series.
Figure 9B: Ratio of responses "B" for the brunette woman by age bracket, measure and emotional series.
Figure 9C: Ratio of responses "B" for the man by age bracket, measure and emotional series.
Discussion
The samples of participants
The aim of having 30 subjects in the last age bracket was not achieved as planned, owing to the difficulty of including subjects over the age of 65 with 1) an MMSE score of 30, 2) a Mattis score higher than 140, 3) an optimum result on the Grober and Buschke test, and 4) no use of psychoactive drugs or medicines. The strictness of the inclusion criteria resulted in our selecting a "supra-normal" sample. On the positive side, this sample was homogeneous, as we had wished. Being able to control the cognitive factors meant that we could better appreciate the effects of sex, age bracket, face, series, and measure on the recognition of emotions.
Despite their high cognitive functioning, 1% of this sample of supra-normal, culturally homogeneous subjects did not recognize the canonical emotion "B" when all the representations of emotions were blended, merged, mixed, and randomly displayed. This percentage could well increase in a normal population with a heterogeneous cognitive level. Additionally, comparative studies of the recognition of emotion using “normal samples” with different cognitive levels would be useful for understanding behavioral problems and aggressiveness in social interactional settings. Studies of participants with cerebral lesions or psychiatric pathologies now appear possible, thanks to the existence and availability of these quantified benchmarks. The prerequisite is to confirm these benchmarks in various “normal samples” by replicating similar studies.
The faces
The level of recognition of the emotions differed according to the faces in the images. The performances of the participants for the three faces showed an advantage for the man over the two women, without any difference between the women. Furthermore, the effect of the face was quantified as it was found in several significant interactions: face*series, face*series*measure and face*series*age.
The learning process must be taken into consideration. Given the order in which the images were presented, 1) the blonde woman, 2) the brunette woman (both with canonical emotion pictures as prompts), and 3) the man’s ES without canonical emotion pictures, the subjects would have benefited from a practice effect by the time they were exposed to the series of the man’s images. For this last series, presented without canonical images, the aim was to place the subject in the most ecological condition, since in real life there are no prompts or subtitles for identifying emotions. Paradoxically, we observed that 1) the scores were lower for the brunette than for the blonde, and 2) the performances for the man’s ES were the best. The absence of a learning effect between the blonde woman (first) and the brunette (second) could be explained by several parameters: 1) the quality of the photographs (contrast, black-white balance), 2) the African-American origin of the poser, 3) the identity of the face, 4) sympathy/antipathy/empathy, 5) theory of mind, or 6) other unknown parameters. The learning (practice) effect could explain the high performance on the man’s ES, despite the more confounding experimental conditions without prompts. However, the interference of the previous factors, including sex, should not be ruled out.
The significant differences between “face*series” interaction could be explained by 1) certain faces are able to better express certain emotions, and 2) the sample was more sensitive to a certain type of face and emotions. The performances or responses to the bipolar series were scattered, except for the anger-fear series. Surprise was very poorly recognized when displayed by the brunette woman (47.9% versus 61.6% for the blonde woman and 65.2% for the man).
The recognition of facial emotion does not merely depend on the emotion that is expressed on the face. However, this hypothesis suffers due to having a low number of stimulus faces in the experiment. Nevertheless, the ethnic origin of the stimulus faces and of the sample that was enrolled should be taken into consideration in this type of research. The systems involved in recognizing emotions could depend on many more parameters than solely that of the face's identity and the emotion that is being tested. This hypothesis would explain the paradoxical aspect of the performances depending on the different faces. Faces reflecting different ethnicities (White, Asian, African, aboriginal, etc.) should be tested in a variety of ethnic populations with the aim of testing this hypothesis. The technical quality of the canonical and intermediate emotions photographs should be agreed upon among the researchers for the best comparisons.
The aging process
Our study highlights the lack of change in the ability to recognize emotions with advanced age. It supports the findings of Phillips et al. [50,51] but not those of reference [52]. Calder et al. [20,21] hold an intermediate position: according to Calder et al., the aging process leads to a strong decrease in the ability to recognize fear and, to a lesser degree, anger. Our study showed a trend toward lower scores for the bipolar series, and for the three faces, as the population grows older. Aging did not seem to significantly affect the recognition of anger, surprise, and sadness for the three faces. It seemed more likely that recognition of disgust and fear was unaffected by age. The recognition of joy improved with age for all three faces.
Emotion series
The measures of the ability to recognize each emotion showed significant differences when all of the faces were mixed up or randomly displayed, although interactions such as “series*face”, “series*face*measure”, and “series*face*age” did exist. The emotions "anger" and "surprise" were the least well recognized in the unipolar series. Joy was the best recognized, followed by fear and disgust. Observation of the average performance for the recognition of emotion showed an increasing progression: anger, surprise, disgust, sadness, fear, and joy, with joy as the most recognized. These results point towards the possibility of a unique emotion recognition system with differential treatment of the emotions: 1) poor for anger and surprise; 2) higher for disgust, sadness, and fear, and; 3) highest for joy. This hypothesis concerns the underlying neuronal circuits in the brain, an "autonomous system" for experiencing and expressing specific, innate emotions, with no relationship to other systems. It should also be remembered that, for one of the faces, the man’s, no canonical source photographs of the emotion series were shown on the screen, and in spite of that the scores were the highest.
The idea of emotional expressions and their recognition being “universal” in humans is defended by Ekman [4] and Darwin [12], but this would be true only for some, and probably not all, emotions. A study with M.A.R.I.E. of socially and ethnically different groups, including isolated or aboriginal populations with little influence from outside culture and civilization, would offer enriched information thanks to its numerical, quantitative results. Our results reinforce the idea of an organic substratum for fear, disgust, and joy. Such an organic substratum would hypothetically be an autonomous system of neuronal circuitry in the brain. Disgust and joy would be resistant to the effects of the aging process, with the hypothetically postulated teleological aim of ensuring the sustainability and survival of the human species. Joy is the emotion most recognized; this recognition improves with aging and depends on the face that expresses it. The recognition of joy could facilitate a meeting between a group of people and, more precisely, between two people of the opposite sex, perhaps assuring sexual relations and thus the sustainability of the species. Sensitivity to fear may serve to ensure the survival of the species, given the contagiousness of fear in instantaneous nonverbal communication. The same mechanism may operate for the nonverbal recognition of disgust, in the context of a group’s negative response to food.
The recognition of any given emotion would therefore have many influencing factors. The following should be taken into consideration: 1) the age of the observer, 2) the emotion type, 3) the intensity of the emotion, 4) the identity of the face, 5) the emotional state of the observer, 6) the environmental context (Ekman and O'Sullivan 1988), and 7) the dynamic or static aspect. The setting of our experiment was not intended to be ecological or ethnological. There was no bias in this research as to which factors would reduce or increase the performances. This lack of bias was constant, although the researchers were not blinded; only the subjects were blinded to the purpose and objectives of the study. The findings were based on statistical and mathematical analysis and were therefore not influenced by any bias in the researchers. Like the technical quality of the photographs of the faces, M.A.R.I.E. was a constant, as it was always used under the same conditions, with 204 subjects whose scores on the HAMA, HDRS, and cognitive tests were in the normal range and whose MMSE, Mattis scale, and Grober and Buschke scale scores consistently fell in the average or above-average range. To repeat, this consistently high cognitive level of the population, selected with all the inclusion and exclusion criteria, is what leads us to label this population “supra-normal”.
The bipolar emotion series had percentages of recognition that were fairly close or within a narrow range. These were artificial series. An opportunity to measure the responses to a larger number of bipolar series would have been useful for drawing more accurate inferences and conclusions. This would have enabled us to call upon either 1) distinct systems for the specific recognition of each of the canonical emotions, or 2) a unique system for recognizing emotions. The difficulty of recognizing the bipolar series should prompt a more intensive search for the organic substratum of these recognition systems, whose activity could be more easily explored through functional imaging (fMRI). Despite these qualifications, it is worth noting that secondary emotions, i.e., a mix of two basic or canonical emotions, are frequent in daily life: for example, a mix of surprise and sadness leads to disappointment.
Limitations of This Study
The results and some of the conclusions of this study may intuitively appear to be universally applicable. However, since this study is on a small sample of white Caucasian French speaking culturally homogeneous population, these results may not be entirely applicable cross-culturally. Studies will need to be replicated with the same methodology in other countries, with populations with other ethnic, different social-cultural backgrounds before universal aspects of recognition of facial emotional expressions can be discussed in detail. The experimenters will need to use culturally compatible stimulus facial photographs and morphs. The presence of cultural relativism operating in this field will need to be reckoned with.
Conclusion
M.A.R.I.E., the software tool that we used to measure the visual recognition of facial emotions, appears to be well suited to such research, with high validity, and yields rich quantitative information. This study focuses on establishing the value of standardization in quantitative and qualitative research, which will serve as a reference for later studies.
The results of this study are in agreement with the findings of currently published studies. The findings confirm that joy is the emotion that is most readily recognized, comprehended, and understood by humans. We learned that 1) anger is the most difficult emotion to recognize, and 2) aging alters the ability to recognize an emotion: a given emotion on a given face will be recognized significantly differently by people of different ages.
Based on these findings, we conclude that the ability to recognize an emotion depends on many factors: the type of emotion, the facial identity, the age of the observer, and other factors that we may not yet have identified. Our measure of the recognition of emotions may in fact be a measure of a combination of these factors. We also recognize that measuring the recognition of emotions could come to be viewed as a mistaken approach once more advanced research in this area reveals more specialized knowledge. It may be necessary to define or specify what we intrinsically believe “emotion” to be: is it an entanglement of neuropsychological and neurobiological processes, taking into account all of the identified and non-identified factors impacting neurobiology, with an innate endowment modified by learning, social interaction, and enculturation in the human species; or are all emotional experiences and expressions totally autonomous and independent of the social environment in which the human infant is raised? The authors have an obvious bias, arising from the above findings, toward the view that experiencing and expressing emotions, as well as the acuity and accuracy of their recognition and discernment in humans, are based on organism-environment interactions and the developmental factors influencing them, and could be significantly disturbed by severe emotional traumatization, abuse and neglect, sensory and social deprivation, and so on, especially in infancy, leading to dyslexithymia or alexithymia, which are evident in so-called normal populations as well as in the disturbed expression of emotions usually seen as a sign of psychopathology.
After demonstrating the benefit of the device, M.A.R.I.E., for measuring the ability to recognize facial emotions (RFE) with validity and reliability, we consider it useful to compare M.A.R.I.E. with other measuring devices. This will allow us to evaluate the strengths and weaknesses, as well as the flaws and special qualities, of M.A.R.I.E. It will be an enormously useful software tool for future basic and applied research and diagnostic studies in the area of recognition of facial expressions of emotions.
Future Directions for Research
The above discussion and conclusions naturally lead the authors to encourage future cross-cultural research that continues the work of Paul Ekman and the current authors. The most challenging will be research with minors, to explore the developmental line of this ability and to determine whether infants, school-age children, and adolescents have fully developed this ability (RFE) or show developmentally normal limitations. It would also be enlightening to study the geriatric population that is cognitively normal, in comparison with cognitively compromised geriatric populations with major as well as minor neurocognitive disorders. The interaction of this ability (RFE) with emerging functional and organic psychopathology in all age groups, and the reversible and irreversible impact of psychopathology on this crucial ability, which helps maintain social relatedness, trust, and signaling power in human communication of needs, both physical and emotional, will soon be measurable thanks to the numerical output of this and similar tools. The literature previously cited acknowledges five such computer-based software tools that are accepted by the scientific community, M.A.R.I.E. being only one of them. M.A.R.I.E. and other similar tools will be useful in the early diagnosis of the disturbances of RFE manifested in degenerative and functional brain disorders.
Acknowledgements
This work was supported by grant 1998/1954 of the Programme Hospitalier de Recherche Clinique of the French government. Thanks are due to Paul Ekman, who gave permission to use photographs from “Unmasking the Face” (Ekman & Friesen, 1975), and to Olivier Lecherf, who designed the computer program for processing and displaying the pictures. This study was made possible thanks to the Centre d’Investigation Clinique (CIC-CHU/INSERM, Lille) and the Laboratoire de Neurosciences Fonctionnelles & Pathologies (CNRS UMR 8160, CHRU Lille).