Pain Classification in Infant Facial-Expression Video Using a DCNN Autoencoder and LSTM

Yosi Kristian, I Ketut Eddy Purnama, Effendy Hadi Sutanto, Lukman Zaman, Esther Irawati Setiawan, Mauridhi Hery Purnomo


Babies cannot yet verbally communicate the pain they experience; instead, they cry. With the rapid development of computer vision in recent years, many researchers have tried to recognize pain from babies' facial expressions using machine learning and image processing. In this paper, a Deep Convolutional Neural Network (DCNN) autoencoder and a Long Short-Term Memory (LSTM) network are used to detect crying and pain level from infant facial expressions in video. The DCNN autoencoder extracts latent features from a single frame of the baby's face, and sequences of these latent features are then fed to the LSTM so that crying and pain level can be recognized. Face detection and facial landmark detection are used to frontalize each face image before it is processed by the DCNN autoencoder. Testing shows that the best autoencoder architecture uses three convolutional layers and three transposed convolutional layers, and that the best LSTM classifier operates on sequences of four frames.
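The pipeline described above (per-frame latent extraction with a convolutional autoencoder, then sequence classification with an LSTM) can be sketched as follows. This is a minimal PyTorch illustration, not the paper's implementation: the 64×64 grayscale input size, layer widths, hidden size, and three output classes are all assumptions; only the three-conv/three-transposed-conv structure and the four-frame sequence length come from the abstract.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """DCNN autoencoder: three conv layers encode a frontalized face
    frame into a latent map; three transposed conv layers reconstruct it."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent features for one frame
        return self.decoder(z), z

class PainLSTM(nn.Module):
    """LSTM classifier over a sequence of latent feature vectors."""
    def __init__(self, latent_dim=64 * 8 * 8, hidden=128, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, z_seq):        # z_seq: (batch, seq_len, latent_dim)
        out, _ = self.lstm(z_seq)
        return self.fc(out[:, -1])   # classify from the last time step

# Dummy clip: 4 frontalized 64x64 grayscale face frames (best sequence length)
frames = torch.rand(4, 1, 64, 64)
ae = ConvAutoencoder()
recon, z = ae(frames)                # z: (4, 64, 8, 8)
z_seq = z.flatten(1).unsqueeze(0)    # one sequence: (1, 4, 4096)
logits = PainLSTM()(z_seq)           # (1, n_classes)
```

In practice the autoencoder would first be trained on reconstruction loss alone, then frozen (or fine-tuned) while the LSTM is trained on the labeled cry/pain sequences.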


Neural network, DCNN, LSTM, autoencoder, pain classification, infant facial expression





DOI: http://dx.doi.org/10.22146/jnteti.v7i3.440



Copyright (c) 2018 Jurnal Nasional Teknik Elektro dan Teknologi Informasi (JNTETI)

Jurnal Nasional Teknik Elektro dan Teknologi Informasi (JNTETI)

Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik Universitas Gadjah Mada
Jl. Grafika No 2. Kampus UGM Yogyakarta 55281
+62 274 552305