Journal Article

Feature Representations Using the Reflected Rectified Linear Unit (RReLU) Activation — Cited by: 9

Abstract: Deep Neural Networks (DNNs) have become the tool of choice for machine learning practitioners today. One important aspect of designing a neural network is the choice of the activation function used at the neurons of the different layers. In this work, we introduce a four-output activation function called the Reflected Rectified Linear Unit (RReLU) activation, which considers both a feature and its negation during computation. Our activation function is "sparse", in that only two of the four possible outputs are active at a given time. We test our activation function on the standard MNIST and CIFAR-10 datasets, which are classification problems, as well as on a novel Computational Fluid Dynamics (CFD) dataset posed as a regression problem. On the baseline network for the MNIST dataset, which has two hidden layers, our activation function improves the validation accuracy from 0.09 to 0.97 compared to the well-known ReLU activation. For the CIFAR-10 dataset, we use a deep baseline network that achieves 0.78 validation accuracy after 20 epochs but overfits the data; using the RReLU activation, we achieve the same accuracy without overfitting. For the CFD dataset, we show that the RReLU activation reduces the number of training epochs from 100 (using ReLU) to 10 while obtaining the same level of performance.
Source: Big Data Mining and Analytics, 2020, No. 2, pp. 102-120 (19 pages)
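The abstract describes RReLU as a four-output activation that applies rectification to both a feature and its negation, with exactly two of the four outputs active at a time. A minimal NumPy sketch of one plausible construction is below; the exact definition of the four output channels is an assumption here (the paper gives the authoritative form), but stacking ReLU(x), ReLU(-x), and their reflections reproduces the stated two-of-four sparsity.

```python
import numpy as np

def rrelu(x):
    """Sketch of a four-output "reflected ReLU" (assumed construction).

    For each input feature, emit ReLU of the feature, ReLU of its
    negation, and the reflections (negations) of those two channels.
    For any nonzero input, exactly two of the four outputs are nonzero,
    matching the sparsity property described in the abstract.
    """
    x = np.asarray(x, dtype=float)
    pos = np.maximum(x, 0.0)   # ReLU(x): active when x > 0
    neg = np.maximum(-x, 0.0)  # ReLU(-x): active when x < 0
    # Stack the four channels along a new last axis:
    # [feature, negated feature, reflection of each].
    return np.stack([pos, neg, -pos, -neg], axis=-1)
```

For example, an input of 3.0 yields the four channels [3, 0, -3, 0], and an input of -2.0 yields [0, 2, 0, -2]; in both cases only two channels are active, so downstream layers see a sparse, sign-aware representation.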

Co-cited references: 76 · Citing articles: 9 · Secondary citing articles: 37
