Abstract of Thesis presented at COPPE/UFRJ as a partial fulfillment of the requirements for the degree of Doctor of Science (D.Sc.)

On Using Gradient Systems for Solving Optimization Problems in Parallel Computers

Leonardo Valente Ferreira

August/2006

Advisor:  Eugenius Kaszkurewicz

Department: Electrical Engineering

      Gradient systems designed to solve optimization problems are proposed and analysed. These systems are obtained by means of an exact penalty method, resulting in systems of ordinary differential equations with discontinuous right-hand sides, which are neural network models with discontinuous activation functions. The global convergence of the proposed systems is proved by means of a Persidskii form of the gradient systems and nonsmooth Lur'e-Persidskii or diagonal-type Lyapunov functions.
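The construction described above can be sketched on a hypothetical toy problem (the problem data and penalty weight below are illustrative assumptions, not taken from the thesis): minimizing ||x - c||² subject to aᵀx = b through the exact penalty E(x) = ||x - c||² + μ|aᵀx - b|, whose gradient system ẋ = -∇E(x) has a discontinuous, sign-type right-hand side.

```python
import numpy as np

# Illustrative problem data (assumed, not from the thesis).
c = np.array([2.0, 0.0])
a = np.array([1.0, 1.0])
b = 1.0
mu = 5.0  # penalty weight; exactness requires mu to exceed the multiplier (1 here)

def rhs(x):
    # -grad of ||x - c||^2 + mu*|a.x - b|: note the discontinuous sign term
    return -2.0 * (x - c) - mu * a * np.sign(a @ x - b)

# Forward Euler integration of the discontinuous gradient system.
x, h = np.zeros(2), 1e-3
for _ in range(20000):
    x = x + h * rhs(x)

print(x)  # approaches the constrained minimizer [1.5, -0.5]
```

The trajectory first reaches the constraint surface aᵀx = b and then slides along it toward the constrained minimizer, chattering within a band whose width scales with the integration step.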
      The proposed gradient systems can be easily parallelized, which makes them suitable for solving large-scale problems. Three applications are considered: the k-winners-take-all (kWTA) problem, formulated as a linear programming problem; the training of support vector machines (SVMs); and image restoration. SVM training with large data sets and image restoration are usually large-scale problems, so the gradient systems that solve them need to be parallelized.
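As an illustration of the kWTA application (the formulation below is a plausible sketch with assumed data, not the thesis' exact model): selecting the k largest of n inputs u can be posed as the linear program max uᵀx subject to Σxᵢ = k, 0 ≤ x ≤ 1, and then solved by an exact-penalty gradient system with sign-type nonlinearities.

```python
import numpy as np

# Assumed example data: 4 inputs, select the k = 2 winners.
u = np.array([3.0, 1.0, 2.0, 0.5])   # input signals
k = 2                                 # number of winners
mu = 5.0                              # penalty weight (assumed large enough)

def rhs(x):
    # gradient ascent on u^T x minus subgradients of the penalty terms
    return (u
            - mu * np.sign(x.sum() - k)   # equality constraint sum(x) = k
            + mu * (x < 0.0)              # lower bound  x >= 0
            - mu * (x > 1.0))             # upper bound  x <= 1

# Forward Euler integration of the discontinuous gradient system.
x, h = np.zeros(4), 1e-3
for _ in range(20000):
    x = x + h * rhs(x)

print(np.round(x, 2))  # entries 0 and 2 (the two largest inputs) near 1, the rest near 0
```

At equilibrium the state encodes the winners as components saturated at 1 and the losers at 0, which is the standard kWTA readout.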
      The gradient systems are solved by means of parallel numerical integration, and, in order to use efficient numerical integration methods with adaptive stepsize control, the nonlinearities are smoothed in a neighborhood of the corresponding discontinuity surface by means of the boundary layer technique. It is shown that gradient systems solve large-scale optimization problems efficiently when implemented on parallel computers and/or solved by means of efficient numerical integration techniques.
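A minimal sketch of the boundary layer technique, on an assumed toy problem rather than one from the thesis: the discontinuous sign term of the penalty gradient system is replaced by a saturation that is linear inside a layer of width ε, which makes the right-hand side Lipschitz and allows an adaptive-stepsize integrator (here a simple embedded Heun/Euler pair) to be used.

```python
import numpy as np

# Assumed toy problem: minimize ||x - c||^2 subject to a^T x = b.
c = np.array([2.0, 0.0])
a = np.array([1.0, 1.0])
b = 1.0
mu, eps = 5.0, 1e-2   # penalty weight and boundary-layer width (assumed values)

def sat(s):
    # smoothed sign: s/eps inside the boundary layer |s| <= eps, +/-1 outside
    return np.clip(s / eps, -1.0, 1.0)

def rhs(x):
    # -grad of ||x - c||^2 + mu*|a.x - b| with the smoothed sign
    return -2.0 * (x - c) - mu * a * sat(a @ x - b)

# Embedded Heun/Euler pair: the difference between the two steps gives a
# local error estimate that drives a simple adaptive step-size controller.
x, t, h, tol = np.zeros(2), 0.0, 1e-3, 1e-6
while t < 20.0:
    k1 = rhs(x)
    k2 = rhs(x + h * k1)
    err = 0.5 * h * np.linalg.norm(k2 - k1)       # |Heun - Euler| estimate
    if err <= tol:                                # accept the step
        x, t = x + 0.5 * h * (k1 + k2), t + h
    # shrink or grow the step; cap it and clip to the end of the interval
    h = min(0.9 * h * np.sqrt(tol / max(err, 1e-16)), 0.5, 20.0 - t + 1e-9)

print(x)  # near the constrained minimizer [1.5, -0.5]
```

With the smoothing, the equilibrium is displaced from the exact minimizer by an amount that vanishes with ε, so the layer width trades accuracy against the stiffness seen by the integrator.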

