Science News
Robot Learns To Smile And Frown
ScienceDaily (July 11, 2009) — A hyper-realistic Einstein robot at the University of California, San Diego has learned to smile and make facial expressions through a process of self-guided learning. The UC San Diego researchers used machine learning to “empower” their robot to learn to make realistic facial expressions.
“As far as we know, no other research group has used machine learning to teach a robot to make realistic facial expressions,” said Tingfan Wu, the computer science Ph.D. student from the UC San Diego Jacobs School of Engineering who presented this advance on June 6 at the IEEE International Conference on Development and Learning.
The faces of robots are increasingly realistic, and the number of artificial muscles that control them is rising. In light of this trend, UC San Diego researchers from the Machine Perception Laboratory are studying the face and head of their robotic Einstein in order to find ways to automate the process of teaching robots to make lifelike facial expressions.
This Einstein robot head has about 30 facial muscles, each moved by a tiny servo motor connected to the muscle by a string. Today, a highly trained person must manually set up these kinds of realistic robots so that the servos pull in the right combinations to make specific facial expressions. In order to begin to automate this process, the UCSD researchers looked to both developmental psychology and machine learning.
Developmental psychologists speculate that infants learn to control their bodies through systematic exploratory movements, including babbling to learn to speak. Initially, these movements appear to be executed in a random manner as infants learn to control their bodies and reach for objects.
“We applied this same idea to the problem of a robot learning to make realistic facial expressions,” said Javier Movellan, the senior author on the paper presented at ICDL 2009 and the director of UCSD’s Machine Perception Laboratory, housed in Calit2, the California Institute for Telecommunications and Information Technology.
Although their preliminary results are promising, the researchers note that some of the learned facial expressions are still awkward. One potential explanation is that their model may be too simple to describe the coupled interactions between facial muscles and skin.
To begin the learning process, the UC San Diego researchers directed the Einstein robot head (Hanson Robotics’ Einstein Head) to twist and turn its face in all directions, a process called “body babbling.” During this period the robot could see itself in a mirror and analyze its own expression using facial expression detection software created at UC San Diego called CERT (Computer Expression Recognition Toolbox). This provided the data necessary for machine learning algorithms to learn a mapping between facial expressions and the movements of the muscle motors.
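The mapping stage described above can be sketched as a plain regression problem. The snippet below is a minimal illustration with synthetic data, not the researchers' actual model: random servo activations stand in for "body babbling", a hidden linear map plays the role of the physical face, and ordinary least squares recovers the servo-to-expression mapping (in the real system, the expression features came from CERT).

```python
import numpy as np

rng = np.random.default_rng(0)
n_servos, n_features, n_samples = 30, 6, 500

# Hidden "face": how servo activations translate into expression features.
# In the real robot this relationship is unknown and must be learned.
true_map = rng.normal(size=(n_servos, n_features))

# "Body babbling": random exploratory servo activations.
servo_cmds = rng.uniform(0.0, 1.0, size=(n_samples, n_servos))

# Observed expression features (what the vision software would measure),
# corrupted by a little measurement noise.
features = servo_cmds @ true_map + 0.01 * rng.normal(size=(n_samples, n_features))

# Learn the servo-to-expression mapping by least squares.
learned_map, *_ = np.linalg.lstsq(servo_cmds, features, rcond=None)
```

With enough babbling samples, `learned_map` closely recovers the hidden map, and the robot can then invert it to produce expressions on demand.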
Once the robot learned the relationship between facial expressions and the muscle movements required to make them, the robot learned to make facial expressions it had never encountered.
For example, the robot learned eyebrow narrowing, which requires the inner eyebrows to move together and the upper eyelids to close a bit to narrow the eye aperture.
“During the experiment, one of the servos burned out due to misconfiguration. We therefore ran the experiment without that servo. We discovered that the model learned to automatically compensate for the missing servo by activating a combination of nearby servos,” the authors wrote in the paper presented at the 2009 IEEE International Conference on Development and Learning.
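The compensation the authors observed falls out naturally from this kind of learned model: if one servo is unavailable, solving for the activations that best reproduce a target expression redistributes the work across the remaining servos. A hypothetical least-squares sketch (the map, target values, and servo index are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_servos, n_features = 30, 6

# Learned servo-to-expression map (hypothetical values for illustration).
face_map = rng.normal(size=(n_servos, n_features))

# Target expression features we want the face to produce.
target = rng.normal(size=n_features)

# With all servos working: minimum-norm activations matching the target.
full_act, *_ = np.linalg.lstsq(face_map.T, target, rcond=None)

# Suppose servo 7 burns out: drop it and re-solve with the remaining 29.
alive = [i for i in range(n_servos) if i != 7]
reduced_act, *_ = np.linalg.lstsq(face_map[alive].T, target, rcond=None)

# The remaining servos jointly compensate: the expression is still matched.
err = np.linalg.norm(reduced_act @ face_map[alive] - target)
```

Because there are far more servos than expression features, the system stays underdetermined even after losing a motor, which is why nearby servos can take over.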
“Currently, we are working on a more accurate facial expression generation model as well as a systematic way to explore the model space efficiently,” said Wu, the computer science PhD student. Wu also noted that the “body babbling” approach he and his colleagues described in their paper may not be the most efficient way to explore the model of the face.
While the primary goal of this work was to solve the engineering problem of how to approximate the appearance of human facial muscle movements with motors, the researchers say this kind of work could also lead to insights into how humans learn and develop facial expressions.
“Learning to Make Facial Expressions,” by Tingfan Wu, Nicholas J. Butko, Paul Ruvulo, Marian S. Bartlett, Javier R. Movellan from Machine Perception Laboratory, University of California San Diego. Presented on June 6 at the 2009 IEEE 8th International Conference On Development And Learning.
from: sciencedaily.com
Tuesday, 21 July 2009
Science News
Electronic Nose Created To Detect Skin Vapors
ScienceDaily (July 21, 2009) — A team of researchers from Yale University (United States) and a Spanish company have developed a system to detect the vapours emitted by human skin in real time. The scientists think that these substances, essentially made up of fatty acids, are what attract mosquitoes and enable dogs to identify their owners.
"The spectrum of the vapours emitted by human skin is dominated by fatty acids. These substances are not very volatile, but we have developed an 'electronic nose' able to detect them", Juan Fernández de la Mora, of the Department of Mechanical Engineering at Yale University (United States) and co-author of a study recently published in the Journal of the American Society for Mass Spectrometry, says.
The system, created at the Boecillo Technology Park in Valladolid, works by ionising the vapours with an electrospray (a cloud of electrically-charged drops), and later analysing these using mass spectrometry. This technique can be used to identify many of the vapour compounds emitted by a hand, for example.
"The great novelty of this study is that, despite the almost non-existent volatility of fatty acids, which have chains of up to 18 carbon atoms, the electronic nose is so sensitive that it can detect them instantaneously", says Fernández de la Mora. The results show that the volatile compounds given off by the skin are primarily fatty acids, although there are also others such as lactic acid and pyruvic acid.
The researcher stresses that the great chemical wealth of fatty acids, made up of hundreds of different molecules, "is well known, and seems to prove the hypothesis that these are the key substances that enable dogs to identify people". The enormous range of vapours emitted by human skin and breath may not only enable dogs to recognise their owners, but also help mosquitoes to locate their hosts, according to several studies.
World record for detecting explosives
Aside from identifying people from their skin vapours, another of the important applications of the new system is that it is able to detect tiny amounts of explosives. The system can "smell" levels below a few parts per trillion, and has been able to set a world sensitivity record at "2×10⁻¹⁴ atmospheres of partial pressure of TNT (the explosive trinitrotoluene)".
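To put that partial pressure in perspective, the mixing ratio it corresponds to can be computed directly, assuming a total ambient pressure of 1 atmosphere. The sketch below just performs that unit conversion:

```python
# Convert a TNT partial pressure into a mixing ratio, assuming
# a total ambient pressure of 1 atmosphere.
partial_pressure_atm = 2e-14
total_pressure_atm = 1.0

mixing_ratio = partial_pressure_atm / total_pressure_atm
parts_per_trillion = mixing_ratio * 1e12  # 1 ppt = 1e-12

# ≈ 0.02 ppt, comfortably below "a few parts per trillion"
```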
The "father" of electrospray ionisation for mass spectrometry is Professor John B. Fenn, currently a researcher at Virginia Commonwealth University (United States), who in 2002 won the Nobel Prize in Chemistry for applying this technique to the analysis of proteins.
from: sciencedaily.com
Wednesday, 8 July 2009
Every face has special features that define that person, yet faces can also be very similar, explains Lin Huang, of Florida Atlantic University, in Boca Raton. That makes computerized face recognition for security and other applications an interesting but difficult task.
Face recognition software has been in development for many years. However, it has not yet become mainstream for biometric authentication at border crossings, access to buildings, automated banking, crime investigation, and other applications. The main technical limitation is that, although the systems are accurate, they require a great deal of computing power.
Early face recognition systems simply marked major facial features - eyes, nose, mouth - on a photograph and computed the distances from these features to a common reference point. In the 1970s, a more automated approach using a facial template extended this idea to map the individual face on to a global template. By the 1980s, an almost entirely statistical approach led to the first fully automated face recognition system.
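In code, that early approach amounts to little more than Euclidean distances from hand-marked landmarks to a reference point. A minimal sketch, with entirely made-up pixel coordinates:

```python
import math

# Hand-marked landmark coordinates on a photograph (hypothetical pixels).
landmarks = {
    "left_eye": (30, 40),
    "right_eye": (62, 41),
    "nose": (46, 60),
    "mouth": (46, 80),
}
reference = (46, 56)  # a common reference point, e.g. the face centre

# The face "signature" is the vector of landmark-to-reference distances;
# two photographs are compared by comparing these signatures.
signature = {
    name: math.dist(point, reference) for name, point in landmarks.items()
}
```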
In the late 1980s, researchers at Brown University developed the so-called "eigenface method", which was extended by a team at MIT in the early 1990s. Since then, approaches based on neural networks, dynamic link architectures (DLA), the Fisher linear discriminant (FLD), hidden Markov models, and Gabor wavelets have been developed. Later, researchers devised a way to create a ghost-like image that could be subjected to even more powerful analysis, accurately identifying the majority of differences between faces.
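The eigenface idea reduces each image to a handful of coefficients over the principal components of a training set. A toy sketch of the principle, with random pixel data standing in for face photographs (not the Brown or MIT implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
n_images, height, width = 40, 112, 92  # sizes matching the standard database

# Flatten each grey-scale image into one row vector.
images = rng.uniform(0, 255, size=(n_images, height * width))

# Centre the data and extract principal components ("eigenfaces") via SVD;
# the rows of vt are the eigenfaces, ordered by explained variance.
mean_face = images.mean(axis=0)
centred = images - mean_face
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:10]  # keep the 10 strongest components

# Any face is now summarised by 10 coefficients instead of 10,304 pixels.
coeffs = centred @ eigenfaces.T
```

Recognition then reduces to comparing these low-dimensional coefficient vectors, which is what made the method practical at the time.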
However, powerful techniques have so far required powerful computers. Now, Huang and colleagues Hanqi Zhuang and Salvatore Morgera in the Department of Electrical Engineering have applied a one-dimensional filter to the two-dimensional data from conventional analyses, such as the Gabor method. This allows them to significantly reduce the amount of computing power required without compromising accuracy.
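The computational pay-off of one-dimensional filtering is easy to see for separable kernels: an n×n 2-D convolution costs on the order of n² multiplications per pixel, while two 1-D passes cost about 2n. The sketch below illustrates that general principle with a separable Gaussian filter; it is not the team's actual Gabor-based algorithm:

```python
import numpy as np

def gaussian_kernel_1d(size=7, sigma=1.5):
    """Normalised 1-D Gaussian kernel."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

rng = np.random.default_rng(3)
image = rng.uniform(size=(112, 92))  # grey-scale image, standard database size
k = gaussian_kernel_1d()

# Two 1-D passes (along rows, then along columns) replace one 2-D
# convolution: per pixel, 2*size multiplications instead of size*size.
rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
```

For a 7×7 kernel this is 14 multiplications per pixel instead of 49, which is the kind of saving that lets such filters run on modest hardware.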
The team tested the performance of their new algorithm on a standard database of 400 images of 40 subjects. The images are grey-scale and just 92 × 112 pixels in size. They found that their technique is not only faster and works with low-resolution images, such as those produced by standard CCTV cameras, but also handles the variation caused by different light levels and shadows, viewing direction, pose, and facial expression. It can even see through certain types of disguise, such as facial hair and glasses.
source: sciencedaily.com