Title
A Stochastic Optimization Approach for Large Scale Machine Learning
Author
Abdelkhalek, Ahmed Mohamed
Preparation committee
Researcher / Ahmed Mohamed Abdelkhalek
Supervisor / Neveen Mohamed Badra
Supervisor / Ammar Mohamed Ammar
Examiner / Reda Amin El-Barkouky
Publication date
2024
Number of pages
80 p.
Language
English
Degree
Doctorate
Specialization
Engineering (miscellaneous)
Approval date
1/1/2024
Place of approval
Ain Shams University - Faculty of Engineering - Physics and Mathematics
Index
Only 14 pages are available for public view (from 114).

Abstract

Single-objective global optimization algorithms play a basic role in machine learning models. Gradient methods are computationally inefficient when convexity or differentiability is not guaranteed. Meta-heuristic techniques such as genetic algorithms, on the other hand, offer an effective approach to complex, non-convex, or non-differentiable optimization problems, but these algorithms do not always guarantee convergence to the optimum owing to the stochastic nature of their operators, and they typically require extra work to ensure convergence and improve performance. In this study, we develop an enhanced genetic algorithm that relies on directed crossover and normal mutation operators with dynamic parameters to increase the speed of convergence while preserving search ability.
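To make the core idea concrete, the following is a minimal Python sketch of such an algorithm, assuming box constraints (bounds is a (dim, 2) array), a linearly decaying mutation scale as the dynamic parameter, and an elitist replacement scheme; the operator details, schedules, and the name enhanced_ga are illustrative assumptions, not the thesis implementation.

import numpy as np

def enhanced_ga(f, bounds, pop_size=50, generations=200, seed=0):
    """Minimize f over a box via directed crossover and normal mutation
    with a dynamic (decaying) mutation scale. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for g in range(generations):
        # Dynamic parameter: mutation scale shrinks linearly with progress
        sigma = (1.0 - g / generations) * 0.1 * (hi - lo)
        order = np.argsort(fit)
        pop, fit = pop[order], fit[order]
        children = []
        for _ in range(pop_size // 2):
            i, j = rng.choice(pop_size, size=2, replace=False)
            better, worse = (i, j) if fit[i] <= fit[j] else (j, i)
            # Directed crossover: move the worse parent toward the better one
            beta = rng.uniform(0.0, 1.0, size=dim)
            c1 = pop[worse] + beta * (pop[better] - pop[worse])
            # Normal (Gaussian) mutation around the better parent
            c2 = pop[better] + rng.normal(0.0, sigma)
            children.append(np.clip(c1, lo, hi))
            children.append(np.clip(c2, lo, hi))
        children = np.asarray(children)
        child_fit = np.apply_along_axis(f, 1, children)
        # Elitist survivor selection over the merged population
        merged = np.vstack([pop, children])
        merged_fit = np.concatenate([fit, child_fit])
        keep = np.argsort(merged_fit)[:pop_size]
        pop, fit = merged[keep], merged_fit[keep]
    return pop[0], fit[0]

The directed crossover biases the search toward better individuals, which speeds convergence, while the decaying Gaussian mutation keeps exploration strong early and refinement fine late, matching the two goals named in the abstract.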
Chapter 1: gives a brief introduction to machine learning and deep learning models. The role of optimization in such models is introduced, along with the formulation of the optimization problem in machine and deep learning. The challenges that optimization algorithms face in high dimensions are clarified. Finally, the importance of meta-heuristic algorithms is explained.
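For reference, a generic form of that optimization problem (standard empirical risk minimization notation, not necessarily the thesis's own) is:

\min_{\theta \in \mathbb{R}^d} \; F(\theta) = \frac{1}{n} \sum_{i=1}^{n} \ell\bigl( f(x_i; \theta),\, y_i \bigr)

where f(x; θ) is the model with parameters θ and ℓ is a loss function; when ℓ ∘ f is non-convex or non-differentiable, the convergence guarantees of gradient methods no longer apply, which is what motivates the meta-heuristic approach of this thesis.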
Chapter 2: introduces the canonical genetic algorithm and explains each of its operators. It surveys some recent improvements to each operator and explains the concept of controlling parameters and their types.
Chapter 3: explains the proposed genetic algorithm in detail. The improvement to each operator is described, along with the proposed dynamic controlling parameters. The performance of the proposed algorithm is evaluated in two experiments. The first examines its convergence on a set of 40 benchmark functions in 2 dimensions, with 16 functions from the test set also tested at 10 and 100 dimensions. The second tests its ability to find the solution using the CEC20 test suite. The evaluation results of the proposed algorithm are compared with those of three modern optimization algorithms (the whale optimization algorithm, teaching-learning-based optimization, and the covariance matrix adaptation evolution strategy). The results reveal that the proposed algorithm outperforms the conventional algorithms at lower dimensions on all test functions and performs relatively better than the other algorithms at higher dimensions.
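As a usage illustration, reusing the enhanced_ga sketch above with the Rastrigin function as a stand-in for the benchmark set (the actual 40-function suite and the CEC20 problems are not reproduced here):

import numpy as np

def rastrigin(x):
    # Standard multimodal benchmark; global minimum 0 at the origin
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

dim = 10
bounds = np.tile([-5.12, 5.12], (dim, 1))   # usual Rastrigin search box
best_x, best_f = enhanced_ga(rastrigin, bounds, pop_size=100, generations=500)
print(f"best fitness at {dim} dimensions: {best_f:.3e}")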
Chapter 4: presents applications of the enhanced genetic algorithm (EGA) to optimizing the performance of an AVR (automatic voltage regulator) system with PID and PIDA controllers. Simulation results and a comparative step-response analysis show that the EGA outperforms other optimization techniques.
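A hedged sketch of how such a tuning problem can be posed: the PID gains are the decision variables and the fitness is an integral error criterion (ITAE) of the closed-loop step response. The AVR block time constants below are common textbook values, not those of the thesis, and itae_fitness is a hypothetical helper; a PIDA controller would add an acceleration term to the controller numerator.

import numpy as np
from scipy import signal

def itae_fitness(gains, t_end=5.0, n_points=2000):
    # ITAE of the closed-loop AVR step response for PID gains (Kp, Ki, Kd).
    # AVR parameters are typical textbook values, NOT taken from the thesis.
    Kp, Ki, Kd = gains
    num, den = [Kd, Kp, Ki], [1.0, 0.0]        # PID: (Kd s^2 + Kp s + Ki) / s
    for nb, db in [([10.0], [0.1, 1.0]),       # amplifier
                   ([1.0],  [0.4, 1.0]),       # exciter
                   ([1.0],  [1.0, 1.0])]:      # generator
        num, den = np.polymul(num, nb), np.polymul(den, db)
    ns, ds = [1.0], [0.01, 1.0]                # sensor in the feedback path
    num_cl = np.polymul(num, ds)               # closed loop: G / (1 + G*H)
    den_cl = np.polyadd(np.polymul(den, ds), np.polymul(num, ns))
    t = np.linspace(0.0, t_end, n_points)
    t, y = signal.step(signal.TransferFunction(num_cl, den_cl), T=t)
    dt = t[1] - t[0]
    return float(np.sum(t * np.abs(1.0 - y)) * dt)   # integral of t*|e(t)|

With such a fitness, tuning reduces to a call like enhanced_ga(itae_fitness, np.array([[0.0, 2.0]] * 3)), i.e. searching the three gains over an assumed box.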
Chapter 5: concludes the study and offers some suggestions for future work.