http://www.sciencedaily.com/releases/2007/04/070430181209.htm
ScienceDaily (May 1, 2007) — Researchers at the Yerkes National Primate Research Center, Emory University, have found that bonobos and chimpanzees use manual gestures of their hands, feet and limbs more flexibly than they do facial expressions and vocalizations, further supporting the idea that the evolution of human language began with gestures, as the gestural origin hypothesis of language suggests. The study appears in the current issue of the Proceedings of the National Academy of Sciences.
Working with two groups of bonobos (13 animals) and two groups of chimpanzees (34 animals), Yerkes researchers Amy Pollick, PhD, and Frans de Waal, PhD, distinguished 31 manual gestures and 18 facial/vocal signals. They found both species used facial/vocal signals similarly, but the same did not hold true for the manual gestures. Rather, the researchers found that, both within and between species, the manual gestures were less closely tied to a particular emotion and thereby served a more adaptable function. For example, a single gesture may communicate an entirely different message depending upon the social context in which it is used.
"A chimpanzee may stretch out an open hand to another as a signal for support, whereas the same gesture toward a possessor of food signals a desire to share," said Pollick. "A scream, however, is a typical response for victims of intimidation, threat or attack. This is so for both bonobos and chimpanzees, and suggests the vocalization is relatively invariant," Pollick continued.
By studying similar types of communication in closely related species, researchers can infer shared ancestry. We know gestures are evolutionarily younger than facial expressions and vocalizations, as shown by their presence in apes and humans but not in monkeys. "A gesture that occurs in bonobos and chimpanzees as well as humans likely was present in the last common ancestor," said Pollick. "A good example of a shared gesture is the open-hand begging gesture, used by both apes and humans. This gesture can be used for food, if there is food around, but it also can be used to beg for help, for support, for money and so on. Its meaning is context-dependent," added de Waal.
Looking for further distinctions between species, the researchers found bonobos use gestures more flexibly than do chimpanzees. "Different groups of bonobos used gestures in specific contexts less consistently than did different groups of chimpanzees," said Pollick. The researchers' findings also suggest bonobos and chimpanzees engage in multi-modal communication, combining their gestures with facial expressions and vocalizations to communicate a message. "While chimpanzees produce more of these combinations, bonobos respond to them more often. This finding suggests the bonobo is a better model of symbolic communication in our early ancestors," concluded Pollick.
Wednesday, May 27, 2009
Psychologists Find That Head Movement Is More Important Than Gender In Nonverbal Communication
http://www.sciencedaily.com/releases/2009/05/090525105459.htm
ScienceDaily (May 26, 2009) — It is well known that people use head motion during conversation to convey a range of meanings and emotions, and that women use more active head motion when conversing with each other than men use when they talk with each other.
When men and women converse together, the men use a little more head motion and the women use a little less. But the men and women might be adapting because of their gender-based expectations or because of the movements they perceive from each other.
What would happen if you could change the apparent gender of a conversant while keeping all of the motion dynamics of head movement and facial expression?
Using new videoconferencing technology, a team of psychologists and computer scientists – led by Steven Boker, a professor of psychology at the University of Virginia – was able to switch the apparent gender of study participants during conversation and found that head motion was more important than gender in determining how people coordinate with each other while engaging in conversation.
The scientists found that gender-based social expectations are unlikely to be the source of reported gender differences in the way people coordinate their head movements during two-way conversation.
The researchers used synthesized faces – known as avatars – in video-conferences with naïve participants, who believed they were conversing onscreen with an actual person rather than a synthetic version of a person.
In some conversations, the researchers changed the gender of the avatars and the vocal pitch of the avatar's voice – while still maintaining their actual head movements and facial expressions – convincing naïve participants that they were speaking with, for example, a male when they were in fact speaking with a female, or vice versa.
"We found that people simply adapt to each other's head movements and facial expressions, regardless of the apparent sex of the person they are talking to," Boker said. "This is important because it indicates that how you appear is less important than how you move when it comes to what other people feel when they speak with you."
He presented the findings May 24 at the annual convention of the Association for Psychological Science in San Francisco. A paper detailing the results is scheduled for publication in the Journal of Experimental Psychology: Human Perception and Performance.
The study, funded by the National Science Foundation, used a low-bandwidth, high-frame-rate videoconferencing technology to record and recreate facial expressions to see how people alter their behavior based on the slightest changes in expression of another person. The U.Va.-based team also includes researchers at the University of Pittsburgh, University of East Anglia, Carnegie Mellon University and Disney Research.
A video demonstration is available online at: http://faculty.virginia.edu/humandynamicslab/.
The technology uses statistical representations of a person's face to track and reconstruct that face. This allows the principal components of facial expression – only dozens in number – to be transmitted as a close rendition of the actual face. It's a sort of connect-the-dots fabrication that can be transmitted frame by frame in near-real time.
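To picture how a face might be reduced to a few dozen numbers per frame, here is a minimal principal-components sketch in Python. It illustrates the general idea rather than the team's actual software; the landmark count, component count and random "training frames" are placeholder assumptions.

```python
# Minimal sketch of the idea described above, not the team's system:
# represent a face frame as tracked landmark coordinates, learn principal
# components from training frames, then transmit only a few dozen coefficients.
import numpy as np

rng = np.random.default_rng(0)
n_landmarks, n_components = 68, 30                 # "only dozens in number"
train = rng.normal(size=(500, n_landmarks * 2))    # stand-in for real tracked frames

mean = train.mean(axis=0)
# Principal components via SVD of the mean-centered training frames.
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:n_components]                          # (n_components, 2 * n_landmarks)

def encode(frame):
    """Sender side: project a frame onto the components (a few dozen numbers)."""
    return basis @ (frame - mean)

def decode(coeffs):
    """Receiver side: reconstruct a close rendition of the frame."""
    return mean + basis.T @ coeffs

frame = train[0]
coeffs = encode(frame)            # this small vector is what gets sent, frame by frame
reconstruction = decode(coeffs)
print(coeffs.shape, np.abs(frame - reconstruction).max())
```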
Boker and his team are trying to understand how people interact during conversation, and how factors such as gender or race may alter the dynamics of a conversation. To do so, they needed a way to capture facial expressions people use when conversing.
"From a psychological standpoint, our interest is in how people interact and how they coordinate their facial expressions as they talk with one another, such as when one person nods while speaking, or listening, the other person likewise nods," Boker said.
It is this "mirroring process" of coordination that helps people to feel a connection with each other.
"When I coordinate my facial expressions or head movements with yours, I activate a system that helps me empathize with your feelings," Boker said.
The technology the team developed further allows them to map the facial expressions of one person onto the face of another in a real time videoconference. In this way they can change the apparent gender or race of a participant and closely track how a naïve participant reacts when speaking to a woman, say, as opposed to a man.
"In this way we can distinguish between how people coordinate their facial expressions and what their social expectation is," Boker said.
It is absolutely amazing to me that we are so attuned to head motion that we can be tricked into perceiving another gender. I guess this ability exists because the more attuned you are, the better your social relations are, and therefore the more likely you are to survive.