Comparative Study of Convolutional Neural Networks

International Journal of Electronics and Communication Engineering
© 2019 by SSRG - IJECE Journal
Volume 6 Issue 8
Year of Publication: 2019
Authors: Sagnik M, Ramaprasad P
How to Cite?

Sagnik M, Ramaprasad P, "Comparative Study of Convolutional Neural Networks," SSRG International Journal of Electronics and Communication Engineering, vol. 6, no. 8, pp. 18-21, 2019. Crossref, https://doi.org/10.14445/23488549/IJECE-V6I8P103

Abstract:

Artificial Neural Networks have changed the way computers work. Starting from machine learning, the field has come a long way: tasks such as face detection and data representation have reached a stage at which an observer might exclaim that the machine is “perfect”. However, one area where research is still ongoing and efficiency is improving daily is mobile phones. Researchers have targeted convolutional neural networks (CNN), a branch of deep learning, for building neural network models that can execute on mobile phones. An emerging player in this field is MnasNet, which was developed by Google’s Brain team.
In this review paper, MnasNet’s capabilities and advantages are discussed, along with how it fares against various other CNN models by Google and Apple. Lastly, the paper considers where researchers will be looking next and the areas in which MnasNet might have to improve.
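As context for the kind of mobile-oriented CNN the paper reviews, the snippet below is a minimal sketch (not taken from the paper) of loading and running a pretrained MnasNet classifier with PyTorch's torchvision package; the input image name, preprocessing constants, and the depth-multiplier-1.0 variant are illustrative assumptions.

    # Minimal sketch (illustrative, not from the paper): running a pretrained
    # MnasNet image classifier with PyTorch/torchvision.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    # Load the MnasNet variant with depth multiplier 1.0 and ImageNet weights.
    # (Newer torchvision releases use the `weights=` argument instead of `pretrained=`.)
    model = models.mnasnet1_0(pretrained=True)
    model.eval()

    # Standard ImageNet preprocessing (assumed; adjust to the deployment pipeline).
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    image = Image.open("example.jpg")        # hypothetical input image
    batch = preprocess(image).unsqueeze(0)   # shape: [1, 3, 224, 224]

    with torch.no_grad():
        logits = model(batch)                        # shape: [1, 1000]
        top5 = logits.softmax(dim=1).topk(5, dim=1)  # five highest-scoring ImageNet classes

    print(top5.indices.tolist(), top5.values.tolist())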

Keywords:

Artificial Neural Network, Convolutional Neural Network, Machine Learning

References:

[1] SWNS: “Americans check their phones 80 times a day: Study”, New York Post, November 8, 2017. Retrieved from: [nypost.com/2017/11/08/americans-check-their-phones-80-times-a-day-study/]
[2] Hui Li, “Machine Learning – What it is and why it matters”, SAS – The Power to Know, 2018. Retrieved from: [sas.com/en_ae/insights/analytics/machine-learning.html]
[3] Luke Dormehl, “What is an artificial neural network?”, Digital Trends, September 13, 2018. Retrieved from: [digitaltrends.com/cool-tech/what-is-an-artificial-neural-network/]
[4] Jason Brownlee, “What is Deep Learning”, Machine Learning Mastery, August 16, 2016. Retrieved from: [machinelearningmastery.com/what-is-deep-learning/]
[5] A. Karpathy, “Convolutional Neural Networks for Visual Recognition”, CS231n course notes, Stanford University. Retrieved from: [cs231n.github.io/convolutional-networks/]
[6] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, “MnasNet: Platform-Aware Neural Architecture Search for Mobile”, July 31, 2018.
[7] Matthijs Hollemans, “MobileNet version 2”, Machinethink, April 22, 2018. Retrieved from: [machinethink.net/blog/mobilenet-v2/]
[8] George Seif, “Everything you need to know about AutoML and Neural Architecture Search”, Towards Data Science, August 21, 2018. Retrieved from: [towardsdatascience.com/everything-you-need-to-know-about-automl-and-neural-architecture-search-8db1863682bf]
[9] Vincent Fung, “An overview of ResNet and its Variants”, Towards Data Science, July 15, 2017. Retrieved from: [towardsdatascience.com/an-overview-of-resnet-and-its-variants-5281e2f56035]
[10] Jesus Rodriguez, “What is New in Deep Learning Research: Mobile Deep Learning with Google MnasNet”, Towards Data Science, August 13, 2018. Retrieved from: [towardsdatascience.com/whats-new-in-deep-learning-research-mobile-deep-learning-with-google-MnasNet-cf9844d30ae8]
[11] Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, Piotr Dollár, “Microsoft COCO: Common Objects in Context”, May 1, 2014.
[12] Bo Chen, Quoc V. Le, Ruoming Pang, and Vijay Vasudevan, “MnasNet: Towards Automating the Design of Mobile Machine Learning Models”, August 08, 2018.
[13] Shweta Bhatt, “5 Things You Need to Know about Reinforcement Learning”, KDnuggets, March 2018. Retrieved from: [kdnuggets.com/2018/03/5-things-reinforcement-learning.html]
[14] Anand Saha, “Decoding the ResNet architecture”, November 02, 2017.
[15] R. Vasudevan, “Neural Networks and Web Mining,” SSRG International Journal of Electronics and Communication Engineering 1.1 (2014): 9-14.