# Important information - change of location starting Wednesday

Due to an unforeseen closure of the university, the remaining lectures of the spring school will take place at the Paraninfo Universitario, located directly at the Plaza de Armas, starting tomorrow. The lectures begin tomorrow, Wednesday, at 9:00 as usual. Please bring your badges to enter the building. We also ask you to inform all participants who were not present in today's practical exercises about the change. We are very sorry for the inconvenience and hope to see you tomorrow morning.

**Four events in the Peruvian spring:**

- We will organize an online Python course from July 14-16 and July 21-22, 2022.
- We will offer an online course on finite elements from August 4-6 and August 11-13, 2022.
- From September 26 to September 30, 2022, the Spring School on Scientific Computing takes place in Cusco, Peru.
- In the week following the spring school, from October 3 to October 6, the Universidad Nacional de San Antonio Abad del Cusco will host the Peruvian Conference on Scientific Computing.

#### Content of the school

In the week ahead of the conference on scientific computing, from September 26 to September 30, 2022, we will organize an introductory Spring School. The school will start at 9 am on Monday, September 26, and end on Friday, September 30, at around 6 pm. There will be theoretical and computational parts. In the latter, we will apply numerical methods in a Python programming environment. All lectures will be given in English.

The Spring School is aimed at advanced students, lecturers, and researchers in Mathematics, Physics, and Engineering from Peru and other Latin American countries. We will give an introduction to different fields of Scientific Computing and Numerical Analysis, including basic numerical methods for solving linear and nonlinear systems of equations, ordinary differential equations, and problems in numerical optimization and Machine Learning.

As a participant, you should have basic Python programming skills. In July, we will organize an online Python class; you are welcome to participate in this event.

#### Place and dates

School and registration take place at the Professional School of Mathematics (Escuela Profesional de Matemáticas), located inside the Science Faculty (Facultad de Ciencias, Pabellón C) of the Universidad Nacional de San Antonio Abad del Cusco. We recommend entering the university from the Avenida Universitaria (Puerta 6). The school takes place from September 26 to 30, 2022, each day from 9:00 to 18:00. Registration is from 8:00 to 8:30 on Monday, September 26, 2022.

#### Registration and Fees

Registration for the school and the conference is closed.

The fee for participation in the school is 100 Soles (to be paid on-site). Participants from the Comunidad Andina that also take part in the conference pay a reduced fee of 150 Soles for both events. The number of participants in this school is limited.

The deadline for registration is **05.08.2022.**

#### Program

| | 26.09.2022 | 27.09.2022 | 28.09.2022 | 29.09.2022 | 30.09.2022 |
|---|---|---|---|---|---|
| 08:00 - 09:00 | Registration | | | | |
| 09:00 - 10:30 | Linear Systems | Nonlinear Equations | Optimization | Ordinary Differential Equations | Neural Networks |
| 10:30 - 11:00 | Break | Break | Break | Break | Break |
| 11:00 - 12:00 | Linear Systems | Nonlinear Equations | Optimization | Ordinary Differential Equations | Neural Networks |
| 12:00 - 15:00 | Opening/Lunch | Lunch | Lunch | Lunch | Lunch |
| 15:00 - 18:00 | Python Exercises | Python Exercises | Python Exercises | Python Exercises | Python Exercises |

#### Practical Exercises in Python

To take part in the Python exercises, please consider the information given on this site.

#### Content

**Linear systems (Kinnewig, Wick)**

This class is devoted to the efficient, robust, and accurate solution of large systems of linear equations (SLEs). These arise from numerical discretizations (often with the finite element method) in numerous applications, including modern research fields such as optics, bridge construction, subsurface modelling, and biomedicine, to name a few. Solving large SLEs often comes with a high computational cost, both in memory (up to gigabytes and terabytes) and in computing time (hours, days, or weeks). Therefore, we need suitable techniques to solve SLEs as efficiently as possible. In this course, we will discuss different techniques for solving large SLEs. The main focus is on iterative solvers, since they are computationally more attractive for large SLEs and, in addition, may serve as so-called smoothers within multigrid methods.

We start our class with fixed-point methods such as the Richardson, Jacobi, and Gauß-Seidel iterations, discussing algorithmic concepts as well as convergence properties. In the second part, we concentrate on descent methods such as gradient descent and the conjugate gradient method. Furthermore, we will discuss two important ways to improve the performance of iterative solvers. The first is the use of preconditioners, which can lead to much faster convergence. The second is the use of sparse matrix storage, which can reduce the memory consumption of the solver. In general, especially when a suitable preconditioner is chosen, an iterative solver often becomes less computationally expensive than a direct solver (such as Gaussian elimination or LU decomposition).
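As an illustration of the fixed-point methods mentioned above, here is a minimal sketch of the Jacobi iteration in NumPy; the matrix, right-hand side, and tolerances are made up for the example:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, maxiter=500):
    """Jacobi fixed-point iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    D = np.diag(A)            # diagonal entries of A
    R = A - np.diagflat(D)    # off-diagonal part of A
    for k in range(maxiter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxiter

# A small diagonally dominant system, for which the Jacobi method converges
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = jacobi(A, b)
print(np.allclose(A @ x, b), iters)
```

The Gauß-Seidel iteration differs only in that each component update immediately uses the already updated components of `x`.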

In the practical afternoon session, various matrices will be provided, and participants are asked to complete code snippets and run different iterative solvers and preconditioners themselves in order to compare their efficiency and learn how to interpret the results.

**Nonlinear Equations (Frei)**

We introduce and analyse numerical methods to solve nonlinear equations in one variable as well as systems of nonlinear equations in multiple variables. In particular, we investigate Newton's method and its convergence behavior depending on the starting value. We then introduce some variants of Newton's method that increase the method's convergence radius. In addition, we study fixed-point iterations and compare their convergence behavior with that of the previous methods. In the practical class in the afternoon, the methods are implemented in Python and applied to solve different nonlinear problems.
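A minimal sketch of Newton's method for a scalar equation, here applied to x² − 2 = 0 (the function, tolerances, and starting value are chosen purely for illustration):

```python
def newton(f, df, x0, tol=1e-12, maxiter=50):
    """Newton's method for a scalar equation f(x) = 0."""
    x = x0
    for k in range(maxiter):
        fx = f(x)
        if abs(fx) < tol:
            return x, k          # converged: |f(x)| below tolerance
        x = x - fx / df(x)       # Newton update
    return x, maxiter

# Solve x^2 - 2 = 0, i.e. compute sqrt(2), starting from x0 = 1
root, its = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root, its)
```

The quadratic convergence is visible in the iteration count: only a handful of steps are needed, whereas a plain fixed-point iteration typically converges linearly.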

**Optimization (Braack)**

This introductory lecture addresses numerical methods for solving constrained optimization problems.
To keep things simple, we focus on finite-dimensional problems, arising for instance after discretization
of ordinary or partial differential equations. We start with the basic concept of Lagrange functionals
and derive the important Karush-Kuhn-Tucker (KKT) conditions and the
Slater condition, which describe necessary conditions of an optimization problem. We proceed with
second-order optimality conditions to obtain a sufficient condition as well. Then, we will discuss
some numerical algorithms, such as active set methods, penalty methods, and barrier methods, to find
optimal solutions iteratively. We will conclude the lecture by presenting perhaps the
most prominent iterative methods, the Lagrange-Newton method and the
SQP algorithm. In the afternoon session, we plan to realize the
above-mentioned concepts in practical Python exercises.

**Ordinary differential equations (Mehlmann)**

This class gives a very brief introduction to ordinary differential equations from a theoretical point of view and then quickly focuses on the numerical approximation of linear and nonlinear differential equations. We will mainly discuss the key aspects of convergence and stability of numerical solutions.

The lecture builds on the previous ones: we will again have to solve linear and nonlinear systems of equations. The practical session in the afternoon will be used for numerical experiments.
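To illustrate the stability aspect, here is a minimal sketch of the explicit Euler method applied to the test equation y' = −λy (with λ and the step sizes chosen purely for illustration): the method is stable only for hλ < 2.

```python
import numpy as np

def explicit_euler(f, y0, t0, t_end, h):
    """Explicit Euler time stepping: y_{n+1} = y_n + h * f(t_n, y_n)."""
    ts, ys = [t0], [y0]
    t, y = t0, y0
    while t < t_end - 1e-12:
        y = y + h * f(t, y)
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

lam = 10.0
f = lambda t, y: -lam * y   # test equation y' = -lambda * y, y(0) = 1

# Stable step size: h * lambda = 0.5 < 2, the error stays small
ts, ys = explicit_euler(f, 1.0, 0.0, 1.0, h=0.05)
err = abs(ys[-1] - np.exp(-lam * ts[-1]))

# Unstable step size: h * lambda = 2.5 > 2, the numerical solution grows
_, ys_bad = explicit_euler(f, 1.0, 0.0, 1.0, h=0.25)
print(err, abs(ys_bad[-1]))
```

Implicit methods such as the implicit Euler scheme avoid this step-size restriction, at the price of solving a (possibly nonlinear) system in every step, which is where the earlier lectures come in.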

**Neural Networks (Richter)**

We introduce (deep) neural networks from an applied mathematics and scientific computing point of view. First, we introduce neural networks as a very general class of functions with many free parameters and bring together the concept of *learning*, as it is called in the machine learning context, with *parameter identification*, a very classical approach in scientific computing. Here, we will build on the lecture on *optimization* taking place on Wednesday.

In a second step, we describe some first approaches to using deep neural networks to approximate ordinary or partial differential equations. The afternoon class is used for a practical realization: we will use Python, and in particular the PyTorch library, for easy experiments with neural networks for the solution of differential equations.
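As a minimal sketch of the learning-as-parameter-identification idea, the following fits a one-hidden-layer network to sin(x) by plain gradient descent with hand-coded backpropagation. NumPy is used here instead of PyTorch so that the example is self-contained; the network size, learning rate, and iteration count are arbitrary choices for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: samples of sin(x) on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
Y = np.sin(X)

# One hidden layer with tanh activation: y = W2 tanh(W1 x + b1) + b2
n_hidden = 20
W1 = rng.normal(0.0, 1.0, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0, (n_hidden, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # forward pass
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    # gradient of the mean-squared error (manual backpropagation)
    grad_pred = 2.0 * (pred - Y) / len(X)
    gW2 = H.T @ grad_pred; gb2 = grad_pred.sum(0)
    grad_H = grad_pred @ W2.T * (1.0 - H ** 2)  # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ grad_H; gb1 = grad_H.sum(0)
    # gradient-descent update: "learning" = identifying the parameters
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2)
print(mse)
```

In PyTorch, the hand-written backward pass is replaced by automatic differentiation, but the underlying optimization loop is the same.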

#### Lecturers

- Prof. Dr. Malte Braack, Christian-Albrechts University Kiel, Germany
- Dr. Stefan Frei, University of Konstanz, Germany
- M.Sc. Sebastian Kinnewig, Leibniz University Hannover, Germany
- Dr. Carolin Mehlmann, Max-Planck Institute for Meteorology Hamburg, Germany
- Prof. Dr. Thomas Richter, Otto-von-Guericke University Magdeburg, Germany
- Prof. Dr. Thomas Wick, Leibniz University Hannover, Germany

#### Contact

For all questions regarding this spring school, do not hesitate to contact Thomas Richter.