Pengembangan Stochastic Gradient Descent dengan Penambahan Variabel Tetap (Development of Stochastic Gradient Descent with the Addition of a Fixed Variable)


Adimas Tristan Nagara Hartono
Hindriyanto Dwi Purnomo

Abstract

Stochastic Gradient Descent (SGD) is one of the most commonly used optimizers in deep learning. In this work, we modify SGD by adding a fixed variable to its update rule and then compare the performance of standard SGD against the modified version. The study proceeded in five phases: (1) optimization analysis, (2) design of the modification, (3) implementation of the modification, (4) testing of the modification, and (5) reporting. The results are intended to show the impact of the added fixed variable on the performance of SGD.
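The abstract does not specify how the fixed variable enters the SGD update, so the sketch below is only one plausible interpretation: a constant term `c` added to the gradient before the usual step. The function name `sgd_fixed_variable`, the placement of `c`, and the test objective are illustrative assumptions, not the authors' published method.

```python
import numpy as np

def sgd_fixed_variable(grad, w0, lr=0.01, c=0.0, steps=100):
    """Gradient descent whose update includes an extra fixed term c.

    Standard SGD step:        w <- w - lr * grad(w)
    Modified step (assumed):  w <- w - lr * (grad(w) + c)
    """
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w = w - lr * (grad(w) + c)  # c = 0 recovers plain SGD
    return w

# Toy comparison: minimize f(w) = (w - 3)^2, so grad(w) = 2 * (w - 3).
grad = lambda w: 2.0 * (w - 3.0)
w_std = sgd_fixed_variable(grad, w0=0.0, c=0.0, steps=500)  # plain SGD
w_mod = sgd_fixed_variable(grad, w0=0.0, c=0.1, steps=500)  # with fixed term
```

With `c = 0.1` the iteration converges to the point where `grad(w) + c = 0`, i.e. `w = 2.95` instead of the true minimum `w = 3.0`, which makes the effect of the fixed term easy to observe on this toy problem.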


Article Details

How to Cite
Hartono, A. T. N., & Purnomo, H. D. (2023). Pengembangan Stochastic Gradient Descent dengan Penambahan Variabel Tetap. Jurnal JTIK (Jurnal Teknologi Informasi Dan Komunikasi), 7(3), 359–367. https://doi.org/10.35870/jtik.v7i3.840
Section
Computer & Communication Science
Author Biographies

Adimas Tristan Nagara Hartono, Universitas Kristen Satya Wacana

Department of Informatics Engineering (Program Studi Teknik Informatika), Faculty of Information Technology, Universitas Kristen Satya Wacana, Salatiga, Central Java, Indonesia

Hindriyanto Dwi Purnomo, Universitas Kristen Satya Wacana

Department of Informatics Engineering (Program Studi Teknik Informatika), Faculty of Information Technology, Universitas Kristen Satya Wacana, Salatiga, Central Java, Indonesia
