NIPUNA: A Novel Optimizer Activation Function for Deep Neural Networks

Golla Madhu, Sandeep Kautish, Khalid Abdulaziz Alnowibet, Hossam M. Zawbaa, Ali Wagdy Mohamed

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, deep neural networks with various learning paradigms have been widely employed in applications such as medical diagnosis, image analysis, and self-driving vehicles. The activation functions used in a deep neural network have a strong impact on both training and the reliability of the resulting model. The Rectified Linear Unit (ReLU) has emerged as the most popular and widely used activation function, but it has known flaws: during back-propagation it is active only for positive inputs and is zero otherwise, which causes some neurons to stop updating (the "dying ReLU" problem) and introduces a bias shift. In contrast to ReLU, the Swish activation function neither remains stable nor moves in a single direction, i.e., it is non-monotonic. This research proposes a new activation function, named NIPUNA, for deep neural networks. We evaluate it by training customized convolutional neural networks (CCNNs). On benchmark datasets (Fashion-MNIST images of clothing and the MNIST dataset of handwritten digits), the contributions are examined and compared against several standard activation functions. The proposed activation function can outperform traditional activation functions.
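The following is a minimal PyTorch sketch of the setup the abstract describes: a piecewise-linear ReLU (which is zero, with zero gradient, for negative inputs) contrasted with a smooth activation plugged into a small CNN for 28x28 MNIST/Fashion-MNIST inputs. The abstract does not give NIPUNA's closed form, so the function named nipuna_like below is a hypothetical stand-in (Swish, x * sigmoid(beta * x)); the class and layer names (CustomActivation, SmallCNN, beta) are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: ReLU vs. a smooth custom activation inside a small CNN.
# NIPUNA's formula is not stated in this abstract; nipuna_like is a placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F


def relu(x):
    # Zero for negative inputs -> zero gradient there ("dying ReLU").
    return torch.clamp(x, min=0.0)


def nipuna_like(x, beta=1.0):
    # Hypothetical smooth placeholder (Swish): non-zero gradient for x < 0.
    return x * torch.sigmoid(beta * x)


class CustomActivation(nn.Module):
    """Wraps the placeholder activation so it can be swapped into a network."""

    def __init__(self, beta=1.0):
        super().__init__()
        self.beta = beta

    def forward(self, x):
        return nipuna_like(x, self.beta)


class SmallCNN(nn.Module):
    """Tiny CNN for 28x28 grayscale inputs (MNIST / Fashion-MNIST)."""

    def __init__(self, act=None):
        super().__init__()
        self.act = act or CustomActivation()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = F.max_pool2d(self.act(self.conv1(x)), 2)   # 28x28 -> 14x14
        x = F.max_pool2d(self.act(self.conv2(x)), 2)   # 14x14 -> 7x7
        return self.fc(x.flatten(1))


if __name__ == "__main__":
    model = SmallCNN()
    logits = model(torch.randn(4, 1, 28, 28))
    print(logits.shape)  # torch.Size([4, 10])
```

Passing act=nn.ReLU() instead of the custom module reproduces the baseline the paper compares against; the same pattern would accommodate the actual NIPUNA definition once taken from the full text.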

Original language: English
Article number: 246
Journal: Axioms
Volume: 12
Issue number: 3
DOIs
Publication status: Published - Mar 2023
Externally published: Yes

Keywords

  • convolutional neural networks
  • deep neural networks
  • NIPUNA
  • periodic function
