Deep Learning as a predictive model to classify handwritten digits

  • Omar Alexander Ruiz-Vivanco, Universidad Nacional de Loja
Keywords: Artificial Intelligence, Deep Learning, Handwritten Digits, Otsu Method, Haar Wavelet

Abstract

This research work presents the results of applying Deep Learning prediction models to identify the digit in an image containing a handwritten number from the MNIST database. The dataset was obtained from the Kaggle competition Digit Recognizer. The following process was applied: first, image preprocessing techniques were used to obtain a cleaner image and to reduce its size; these goals were achieved with the Otsu method, the Haar wavelet transform, and Principal Component Analysis (PCA), producing a set of new datasets to be evaluated. Second, the MXNet and H2O Deep Learning models, executed in the statistical language R, were applied to the datasets obtained, yielding several predictions. Finally, the best predictions obtained in the experiment were submitted to the Digit Recognizer competition, and this evaluation scored a prediction accuracy of 99.129%.
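The pipeline summarized in the abstract (Otsu binarization, Haar wavelet reduction, PCA, and an H2O deep learning classifier in R) can be illustrated with the minimal sketch below. It is not the author's original code: the file name train.csv matches the Kaggle competition download, but the Otsu implementation, the plain 2x2 averaging used as the Haar approximation step, the choice of 50 principal components, and the hidden = c(200, 200) network layout are assumptions made for the example.

# Hedged sketch of the described pipeline in R (illustrative parameters)
library(h2o)

# Otsu threshold over the 0-255 grey levels of one image
otsu_threshold <- function(pixels) {
  counts <- tabulate(pixels + 1L, nbins = 256)
  p <- counts / sum(counts)
  best_t <- 0; best_var <- -Inf
  for (t in 1:255) {
    w0 <- sum(p[1:t]); w1 <- 1 - w0
    if (w0 == 0 || w1 == 0) next
    mu0 <- sum((0:(t - 1)) * p[1:t]) / w0
    mu1 <- sum((t:255) * p[(t + 1):256]) / w1
    between <- w0 * w1 * (mu0 - mu1)^2          # between-class variance
    if (between > best_var) { best_var <- between; best_t <- t }
  }
  best_t
}

# One level of a 2-D Haar-style approximation: average 2x2 blocks,
# shrinking each 28x28 image to 14x14
haar_approx <- function(img28) {
  m <- matrix(img28, 28, 28)
  avg <- (m[seq(1, 27, 2), ] + m[seq(2, 28, 2), ]) / 2   # pair rows
  (avg[, seq(1, 27, 2)] + avg[, seq(2, 28, 2)]) / 2      # pair columns
}

train <- read.csv("train.csv")                 # Kaggle Digit Recognizer file
X <- as.matrix(train[, -1]); y <- factor(train$label)

# Binarize each image with its Otsu threshold, then shrink it with Haar
X_clean <- t(apply(X, 1, function(px) {
  bin <- ifelse(px > otsu_threshold(px), 255, 0)
  as.vector(haar_approx(bin))
}))

# PCA: keep the first 50 principal components (an illustrative choice)
pca <- prcomp(X_clean, center = TRUE, scale. = FALSE)
X_pca <- pca$x[, 1:50]

# H2O deep learning model on the reduced data
h2o.init()
hf <- as.h2o(data.frame(label = y, X_pca))
model <- h2o.deeplearning(x = 2:ncol(hf), y = "label", training_frame = hf,
                          hidden = c(200, 200), epochs = 20)

The Haar step above is implemented as plain 2x2 block averaging, which yields the approximation coefficients of one decomposition level; the original work may have used a dedicated wavelet package instead.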



Published
2017-11-01
How to Cite
[1] O. A. Ruiz-Vivanco, “Deep Learning as a predictive model to classify handwritten digits”, LAJC, vol. 4, no. 3, pp. 73-78, Nov. 2017.
Section
Research Articles for the Regular Issue