Introduction

Course Outcomes

Syllabus

Course Code: AE844

Need and importance – Coupled systems – Analyser vs. Evaluator – Single vs. bi-level optimisation – Nested vs. simultaneous analysis/design – MDO architectures – Concurrent subspace, collaborative optimisation and BLISS – Sensitivity analysis – AD (forward and reverse mode) – Complex variable and hyperdual numbers – Gradient and Hessian – Uncertainty quantification – Moment methods – PDF and CDF – Uncertainty propagation – Monte Carlo methods – Surrogate modelling – Design of experiments – Robust, reliability based and multi-point optimisation formulations.

Course Outline

We will start the course by looking at the solution methodology for a scalar problem

\[\min_x f(x)\]

The above statement means that we are looking for a value of \(x\) that minimises the function \(f(x)\). The problem can be solved using either a gradient-based or a non-gradient-based method. We will study both classes through the following algorithms:

  • Gradient methods
    • Steepest Descent
    • Newton’s Method
    • Marquardt’s Method
    • Quasi-Newton Methods
      • BFGS
  • Non-gradient methods
    • Simulated Annealing
    • Genetic Algorithms
    • Particle Swarm Optimisation
    • Nelder-Mead Simplex Method

We will also code a few of these methods to get a flavour of how they work.
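For instance, here is a minimal steepest-descent sketch in Python; the test function, fixed step size, and tolerances are illustrative assumptions, not prescriptions:

```python
import numpy as np

def steepest_descent(grad, x0, alpha=0.1, tol=1e-8, max_iter=10000):
    """Minimise f by repeatedly stepping along -grad(x) with a fixed step alpha."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stop when the gradient is (almost) zero
            break
        x = x - alpha * g            # move downhill
    return x

# Illustrative test: f(x) = (x1 - 1)^2 + 10 (x2 + 2)^2, minimum at (1, -2)
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
print(steepest_descent(grad_f, x0=[0.0, 0.0], alpha=0.05))
```

In practice the fixed step \(\alpha\) is replaced by a line search; that refinement is one of the things we will discuss in class.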

Next we generalise the above problem to include constraints

\[\begin{align} &\min_x f(x)\\ \mbox{such that}\quad & g(x) \leq 0\\ & h(x) = 0 \end{align}\]

Under a set of assumptions (which we will discuss in detail), we can state the conditions under which a given \(x^*\) is a minimum of \(f(x)\). These are

  • first order optimality conditions or necessary conditions
  • second order optimality conditions or sufficient conditions

We will look at the following algorithms to solve the above problem (the penalty approach is sketched in code after the list):

  • Penalty Function Method
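A minimal quadratic-penalty sketch in Python, assuming an invented equality-constrained test problem and using `scipy.optimize.minimize` for the unconstrained subproblems:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem:  min (x1 - 2)^2 + (x2 - 1)^2  s.t.  h(x) = x1 + x2 - 1 = 0
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2
h = lambda x: x[0] + x[1] - 1.0

x = np.zeros(2)
for r in [1.0, 10.0, 100.0, 1000.0]:
    # Unconstrained subproblem: objective plus quadratic penalty r * h(x)^2.
    x = minimize(lambda x: f(x) + r * h(x)**2, x).x  # warm-start from previous x
print(x)  # approaches the constrained optimum (1, 0) as r grows
```

Increasing the penalty parameter gradually, while warm-starting each subproblem, avoids the ill-conditioning that a single huge penalty would cause.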

Next we will explore the complexities involved in the optimisation of implicit functions, where the objective depends on a state \(u(x)\) that is itself obtained by solving the governing equations

\[\begin{align} &\min_{x} f(u(x))\\ \mbox{such that}\quad & g(x, u) \leq 0\\ & h(x,u) = 0 \end{align}\]
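To make the implicit structure concrete, here is a hypothetical Python sketch in which the state \(u(x)\) is available only by numerically solving a residual equation \(R(x, u) = 0\); the residual and objective are invented for illustration:

```python
from scipy.optimize import brentq, minimize_scalar

# "Analysis" code: given the design x, solve the governing equation
# R(x, u) = u^3 + u - x = 0 for the state u(x) (a stand-in for a CFD/FEM solve).
def solve_state(x):
    return brentq(lambda u: u**3 + u - x, -10.0, 10.0)

# The objective is defined only through the state: f(u(x)) = (u - 0.5)^2.
def f(x):
    u = solve_state(x)
    return (u - 0.5)**2

# Every objective evaluation triggers a full analysis; with an expensive
# solver, this cost dominates the optimisation.
res = minimize_scalar(f, bounds=(-5.0, 5.0), method="bounded")
print(res.x, solve_state(res.x))  # expect x* near 0.625, u* near 0.5
```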

Next we will explore various methods of calculating the derivatives of a function. We will look at

  • Finite Difference Method
  • Complex Variable Method
  • Automatic Differentiation
  • Hyperdual Numbers

We will compare the performance of these algorithms from the point of view of accuracy and computational cost.
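As a preview of that comparison, a short Python sketch contrasting forward finite differences with the complex-variable (complex-step) method on an arbitrary smooth test function:

```python
import numpy as np

f  = lambda x: np.exp(x) * np.sin(x)                 # illustrative test function
df = lambda x: np.exp(x) * (np.sin(x) + np.cos(x))   # its analytic derivative

x = 1.5
for h in [1e-4, 1e-8, 1e-12]:
    fd = (f(x + h) - f(x)) / h       # forward finite difference: O(h) + round-off
    cs = np.imag(f(x + 1j * h)) / h  # complex step: no subtractive cancellation
    print(f"h={h:.0e}  FD error={abs(fd - df(x)):.1e}  CS error={abs(cs - df(x)):.1e}")
```

The finite-difference error first shrinks and then grows as \(h\) decreases because of subtractive cancellation, while the complex-step estimate keeps improving down to machine precision.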

In engineering applications, \(f(x)\) (for example, the lift and drag of an aircraft) is calculated using a computer code (for example, a CFD code). Such codes are computationally expensive and hence cannot be called directly inside an optimisation loop. We will therefore look at the following methods to approximate \(f(x)\) (a small polynomial-regression example follows the list):

  • Surrogate Modelling
    • Polynomial Regression
    • Response Surface Method
    • Kriging
    • Radial Basis Functions
    • Artificial Neural Networks
    • Support Vector Machines
    • Gaussian Process Regression
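A minimal sketch of the first item, with an inexpensive stand-in for the "CFD code":

```python
import numpy as np

# Stand-in for an expensive analysis code (imagine one CFD run per call).
expensive_f = lambda x: np.sin(3.0 * x) + 0.5 * x

# Run the expensive code at a handful of training points...
x_train = np.linspace(0.0, 2.0, 8)
y_train = expensive_f(x_train)

# ...and fit a cheap cubic-polynomial surrogate to the samples.
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=3))

# The optimiser then queries the surrogate instead of the expensive code.
x_test = 1.3
print(expensive_f(x_test), surrogate(x_test))
```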

We will look at the following methods for designing the experiments used to build surrogate models (a Latin hypercube example follows the list):

  • Full Factorial Design
  • Fractional Factorial Design
  • Latin Hypercube Sampling
  • Orthogonal Array Sampling
  • Sobol Sequence
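For instance, Latin hypercube samples can be drawn with SciPy's quasi-Monte Carlo module (assuming SciPy ≥ 1.7, which ships `scipy.stats.qmc`; the variable ranges below are invented):

```python
from scipy.stats import qmc

# 20 Latin hypercube samples in [0, 1]^2: each 1-D projection hits every
# one of the 20 equal-width bins exactly once.
sampler = qmc.LatinHypercube(d=2, seed=0)
unit_samples = sampler.random(n=20)

# Scale to the real design ranges, e.g. Mach number and angle of attack (deg).
samples = qmc.scale(unit_samples, l_bounds=[0.3, -2.0], u_bounds=[0.9, 10.0])
print(samples[:3])
```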

Next we will look at the problem of optimising a function with multiple objectives

\[\min_x f(x) = \left[f_1(x), f_2(x), \ldots, f_n(x)\right]\]

We will look at the following algorithms to solve the above problem (the weighted sum approach is sketched after the list):

  • Weighted Sum Method
  • Epsilon Constraint Method
  • Goal Attainment Method
  • Multi-Objective Genetic Algorithms
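A minimal weighted-sum sketch in Python, with two invented single-variable objectives:

```python
from scipy.optimize import minimize_scalar

# Two competing objectives of a single design variable.
f1 = lambda x: x**2           # prefers x = 0
f2 = lambda x: (x - 2.0)**2   # prefers x = 2

# Scalarise as w*f1 + (1-w)*f2 and sweep the weight to trace the Pareto front.
for w in [0.0, 0.25, 0.5, 0.75, 1.0]:
    x = minimize_scalar(lambda t: w * f1(t) + (1.0 - w) * f2(t)).x
    print(f"w={w:.2f}  x*={x:.3f}  f1={f1(x):.3f}  f2={f2(x):.3f}")
```

Each weight recovers one trade-off point (here \(x^* = 2(1-w)\)); since the weighted sum can miss points on non-convex Pareto fronts, the other methods in the list exist for good reason.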

Since real life has uncertainties, we will next look at how to propagate uncertainty from input to output: given that \(x \sim N(\mu, \sigma)\), what is the distribution of \(f(x)\)? Along the way, we will introduce the concepts of random variables, probability density functions, cumulative distribution functions, moments, etc.

We will look at the following algorithms for uncertainty propagation (a Monte Carlo sketch follows the list):

  • Monte Carlo Method
  • Moment Method
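A minimal Monte Carlo propagation sketch in Python; the function and input distribution are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x**2 + np.sin(x)  # illustrative input-output map

# Input uncertainty: x ~ N(mu = 1, sigma = 0.1).
x_samples = rng.normal(loc=1.0, scale=0.1, size=100_000)

# Push the samples through f and summarise the output distribution.
y = f(x_samples)
print("mean =", y.mean(), "std =", y.std())
# A histogram of y approximates the output PDF.
```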

Once we know how to propagate uncertainty, we will look at how to use this information to solve problems of the form

\[\min_{x_0} f(x)\quad \mbox{subject to}\quad x = x_0 + \epsilon,\ \epsilon \sim N(0,1)\]

where \(\epsilon\) is normally distributed with mean \(0\) and standard deviation \(1\). This area of optimisation is called robust optimisation.
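One common way to make this computable, sketched here under the assumption that we minimise a Monte Carlo estimate of the expected objective over the nominal design \(x_0\) (other robust formulations also penalise the variance):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
eps = rng.normal(0.0, 1.0, size=10_000)  # frozen samples of epsilon ~ N(0, 1)

f = lambda x: (x - 3.0)**2 + 5.0 * np.sin(2.0 * x)  # wiggly illustrative objective

# Robust objective: Monte Carlo estimate of E[f(x0 + eps)].
robust_f = lambda x0: f(x0 + eps).mean()

print(minimize_scalar(robust_f, bounds=(0.0, 6.0), method="bounded").x)
```

Freezing the noise samples (common random numbers) keeps the estimated objective smooth in \(x_0\), so a standard optimiser can be applied to it.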

Next we will look at the optimisation of a function with multiple objectives and multiple disciplines (or subsystems) coupled together in a complex manner (i.e., the output of one subsystem is the input of another). This is called multi-disciplinary optimisation.

By now, you will appreciate that the number of optimisation algorithms is quite large, so we will concentrate on a few popular ones. We will also look at the following software packages that are used in practice:

  • Matlab’s fmincon function
  • OpenMDAO
  • Dakota

Please note that this is quite a large syllabus, and we will not be able to cover every topic in detail. Instead, we will focus on topics that interest the class. In each offering of this course, I have ended up covering different topics, so if you are interested in a particular topic, please let me know and I will try to cover it.

However, at present I am interested in the application of machine learning techniques to aerospace problems. Hence, I will spend at least 20% of the time discussing the basics of neural networks and different architectures for surrogate modelling. The rest of the course is open for discussion.

Grading

We will have at least four programming assignments and two presentations. For each presentation, you will be asked to read a particular research paper (chosen in consultation with you) and present it to the class. The presentation will be followed by a discussion and will be graded on the following criteria:

  • Clarity of presentation
  • Ability to answer questions
  • Ability to identify limitations of a given work

Programming assignments will be graded in the usual manner. The weightage of each component is:

  • Quiz 1 : 10%
  • Quiz 2 : 10%
  • Internal : 40%
  • End semester : 40%

References

Gradient Optimisation

  1. Jasbir Arora, “Introduction to Optimum Design” (entry-level book with a lot of solved examples; easy to read).

  2. Kalyanmoy Deb, “Optimisation for Engineering Design” (also a good introduction to GA and SA).

  3. Ranjan Ganguli, “Engineering Optimization: A Modern Approach”, CRC Press (2012).

  4. Nocedal, J. and Wright, S., “Numerical Optimization” (covers numerical gradient optimisation in great detail, with pointers to the original theorems).

  5. Garret N. Vanderplaats, “Numerical Optimization Techniques for Engineering Design” (good first-level book; Chapter 7 covers surrogate modelling and Chapter 10 covers structural optimisation using FEM).

Non-gradient Optimisation

  1. Xin-She Yang, Xing-Shi He, “Mathematical Foundations of Nature-inspired Algorithms”

Multi-Objective Optimisation

  1. Kaisa Miettinen, “Nonlinear Multiobjective Optimisation” (great theoretical background; exhaustive coverage of most gradient-based multiobjective algorithms).

  2. Kalyanmoy Deb, “Multi-Objective Optimization using Evolutionary Algorithms” (less rigorous; a good introduction to the theory followed by a short survey of classical, gradient-based methods. Most of the book delves into the newer population-based evolutionary algorithms for multi-objective optimisation).

Surrogate Modelling

  1. Alexander Forrester, Andras Sobester, Andy Keane, “Engineering Design via Surrogate Modelling: A Practical Guide”, Wiley (2008).

  2. Khuri, A. I. and Cornell, J. A., “Response Surfaces: Design and Analyses”, 2nd ed., Marcel Dekker (1996).

  3. Montgomery, D. C., “Design and Analysis of Experiments”, 8th ed., John Wiley (2012).

  4. Gramacy, Robert B., “Surrogates: Gaussian Process Modeling, Design, and Optimization for the Applied Sciences”, CRC Press (2020) (Book Website)

Automatic Differentiation

  1. Griewank, A. and Walther, A., “Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation”, 2nd ed., SIAM (2008).

Multi-disciplinary Optimisation

  1. J. R. R. A. Martins and A. Ning, “Engineering Design Optimization”, 2020. pdf

  2. Kochenderfer, Mykel and Wheeler, Tim A., “Algorithms for Optimization”, 2019. pdf

  3. Keane, A. J. and Nair, P. B., Computational Approaches for Aerospace Design: The Pursuit of Excellence, Wiley (2005).

Other MDO courses

  1. Prof. Alonso, Aero, Stanford (notes)

  2. Prof. Willcox (and others), Aero, MIT (Lecture Slides)

Resources

Test Problems

MDO Papers

Matlab Specific Information

  1. Tolerance of fmincon

  2. Algorithms in fmincon

Optimisation Software