A diagram of the proposed age estimation system. Credit: Agbo-Ajala & Viriri.

Over the past few years, researchers have created a growing number of machine learning (ML)-based face recognition techniques, which could have numerous interesting applications, for instance, enhancing surveillance monitoring, security control, and potentially even forensic art. In addition to face recognition, advancements in ML have also enabled the development of tools to predict or estimate specific qualities (e.g., gender or age) of a person by analyzing images of their faces.

In a recent study, researchers at the University of KwaZulu-Natal, in South Africa, developed a deep learning-based architecture to estimate people's age by analyzing images of their faces taken in uncontrolled, real-life environments. The new architecture was introduced in a paper published by Springer and presented a few days ago at the International Conference on Computational Collective Intelligence (ICCCI) 2019.

Most traditional approaches to age classification only perform well on face images taken in controlled environments, for instance, in the lab or in a photography studio. Very few of these techniques can reliably estimate the age of people in images captured in real, everyday settings.

"Deep learning methods have proven to be effective in solving this problem, especially with the availability of both a large amount of data for training and high-end machines," the researchers wrote in their paper. "In view of this, we propose a deep-learning solution to estimate age from real-life faces."

The team of researchers at the University of KwaZulu-Natal developed a deep convolutional neural network (CNN)-based architecture with six layers. Their model was trained to estimate the age of individuals from images of faces taken in uncontrolled environments. The architecture achieves this by learning which facial representations are most informative for age estimation and focusing on those particular features.
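For readers who want a concrete picture of this kind of architecture, the snippet below is a minimal Keras sketch of a small CNN for age-group classification. The layer sizes, the 224 x 224 input resolution, and the eight output classes (matching OUI-Adience's age groups) are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch of a small CNN for age-group classification.
    # Four conv layers plus two dense layers give six weighted layers,
    # but the exact sizes here are assumptions, not the paper's design.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_AGE_CLASSES = 8          # OUI-Adience defines eight age groups
    INPUT_SHAPE = (224, 224, 3)  # assumed input resolution

    def build_age_cnn():
        model = models.Sequential([
            layers.Input(shape=INPUT_SHAPE),
            # Convolutional feature extractor
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(128, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(256, 3, activation="relu", padding="same"),
            layers.GlobalAveragePooling2D(),
            # Classifier head
            layers.Dense(512, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(NUM_AGE_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_age_cnn()
    model.summary()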

The image preprocessing phase. Credit: Agbo-Ajala & Viriri.
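The figure above summarizes the paper's image preprocessing phase. As a rough illustration of what such a phase typically involves (face detection, cropping, resizing, and pixel normalization), the sketch below uses OpenCV's bundled Haar cascade detector; the authors' exact detection and alignment steps may differ.

    # Generic face-preprocessing sketch: detect, crop, resize, normalize.
    # The Haar cascade detector is used only for illustration.
    import cv2
    import numpy as np

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def preprocess_face(image_path, size=(224, 224)):
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None                                   # no face found
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep largest face
        face = img[y:y + h, x:x + w]
        face = cv2.resize(face, size)
        return face.astype(np.float32) / 255.0            # scale pixels to [0, 1]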

To enhance the performance of their CNN-based model, the researchers pretrained it on a large dataset called IMDB-WIKI, which contains over half a million face images collected from IMDB and Wikipedia, each labeled with the subject's age. This pretraining step allowed the network to adapt to face image content before being tuned for the age estimation task.
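As a rough sketch of what such a pretraining stage could look like in Keras, the snippet below fits the network defined earlier on an age-labeled face dataset. The directory layout (one sub-folder per age label), the paths, and the training settings are hypothetical; IMDB-WIKI itself ships its age labels in metadata files rather than folder names.

    # Pretraining sketch on a large face dataset such as IMDB-WIKI.
    # Paths, epochs, and the folder-per-label layout are illustrative only.
    import tensorflow as tf

    pretrain_ds = tf.keras.utils.image_dataset_from_directory(
        "imdb_wiki/train",            # hypothetical path
        image_size=(224, 224),
        batch_size=64,
        label_mode="int")

    model = build_age_cnn()           # from the sketch above
    model.fit(pretrain_ds, epochs=5)  # epoch count is illustrative
    model.save_weights("imdb_wiki_pretrained.weights.h5")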

Subsequently, the researchers fine-tuned the model on images from two other databases, namely MORPH-II and OUI-Adience, training it to pick up the peculiarities of each dataset. MORPH-II contains approximately 70,000 labeled face images, while OUI-Adience contains 26,580 face images captured in unconstrained, real-life conditions.
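A plausible way to implement this two-stage strategy is to load the pretrained weights, freeze the earliest layers, and continue training at a lower learning rate, as sketched below. The frozen-layer count, learning rate, and dataset paths are assumptions made for illustration, not values reported in the paper.

    # Fine-tuning sketch: adapt the IMDB-WIKI pretrained network to
    # MORPH-II / OUI-Adience. Frozen layers and learning rate are assumptions.
    import tensorflow as tf

    finetune_ds = tf.keras.utils.image_dataset_from_directory(
        "adience/train",              # hypothetical path
        image_size=(224, 224),
        batch_size=32,
        label_mode="int")

    model = build_age_cnn()
    model.load_weights("imdb_wiki_pretrained.weights.h5")

    for layer in model.layers[:4]:    # freeze the earliest feature extractors
        layer.trainable = False

    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(finetune_ds, epochs=10)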

When they evaluated their model on images taken in uncontrolled environments, the researchers found that this extensive training led to remarkable performance. Their model achieved state-of-the-art results, outperforming several other CNN-based methods for age estimation.

"Our experiments demonstrate the effectiveness of our method for age estimation in the wild when evaluated on the OUI-Adience benchmark, which is known to contain images of acquired in ideal and unconstrained conditions," the researchers wrote. "The proposed age classification method achieves new state-of-the-art results, with an improvement in accuracy of 8.6 percent (exact) and 3.4 percent (one-off) over the best-reported result on the OUI-Adience dataset."

In the future, the new CNN-based architecture developed by these researchers could enable more effective age estimation implementations in a variety of real-life settings. The team also plans to add layers to the model and train it on other datasets of face images taken in uncontrolled environments as soon as they become available, in order to further improve its performance.

More information: Olatunbosun Agbo-Ajala et al. Age Estimation of Real-Time Faces Using Convolutional Neural Network, Computational Collective Intelligence (2019). DOI: 10.1007/978-3-030-28377-3_26