On The Performance Improvement Of Neural Networks: Using Lempel-Ziv (LZ77) Algorithm to Compress Input Data in an Activation-Normalised Network
Published in: 3rd Conference on Intelligent Decision Systems
Year of publication: 1397 (Solar Hijri)
Document type: conference paper
Language: English
The full text of this paper is available as a 16-page PDF.
National scientific document ID: IDS03_071
Indexing date: 31 Ordibehesht 1398
Abstract:
In this paper, two complementary performance-improvement techniques are demonstrated in order to obtain better results in a particular neural network. We perform lossless compression on the input data to reduce its size, and we apply layer activation normalisation to avoid internal covariate shift. Data compression is performed with the dictionary-based lossless algorithm LZ77, which efficiently reduces the input data to approximately half of its original size. This in turn reduces the number of computations involving the weights of the input layer, which results in less computational time in the backpropagation algorithm. Later, in the feedforward pass, the activations are normalised for better network performance; normalising the activations yields more reliable results. By combining the two techniques, a fast and considerably more reliable approach to training networks is proposed.
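The abstract pairs two steps: LZ77-style lossless compression of the input data, and zero-mean/unit-variance normalisation of layer activations. The paper's exact pipeline is not reproduced here; as a minimal illustrative sketch, Python's `zlib` (a DEFLATE codec, whose first stage is LZ77-family dictionary compression) stands in for the LZ77 compressor, and a simple per-layer standardisation stands in for the activation normalisation. Function names and the epsilon constant are illustrative assumptions, not from the paper.

```python
import math
import zlib  # DEFLATE: LZ77-style dictionary matching + Huffman coding


def compress_input(data: bytes) -> bytes:
    """Losslessly compress raw input bytes (LZ77-family codec via zlib)."""
    return zlib.compress(data)


def decompress_input(blob: bytes) -> bytes:
    """Recover the original input exactly; compression is lossless."""
    return zlib.decompress(blob)


def normalise_activations(acts, eps=1e-5):
    """Standardise a layer's activations to zero mean and unit variance.

    `eps` guards against division by zero for a constant activation
    vector (an assumed, conventional choice).
    """
    mean = sum(acts) / len(acts)
    var = sum((a - mean) ** 2 for a in acts) / len(acts)
    return [(a - mean) / math.sqrt(var + eps) for a in acts]


# Repetitive input compresses well and round-trips without loss.
raw = b"0101" * 256
packed = compress_input(raw)
assert decompress_input(packed) == raw
assert len(packed) < len(raw) // 2

# Normalised activations are centred on zero.
normed = normalise_activations([0.2, 1.4, -0.7, 3.1])
```

How much the compression stage actually shrinks the input depends on its redundancy; the roughly 2x reduction the abstract reports would correspond to data with substantial repeated structure.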
Authors:
Kimia Banihashem
University of Tabriz, School of Civil Engineering
Kosar Shirinzadeh Dastgiri
University of Tabriz, School of Electrical & Computer Engineering