Launch Vehicle Trajectory Optimisation and Design

Devendra Ghate

Motivation

  • LVM3 Launch

  • SpaceX Starship

  • Aircraft Climb

  • UAV reconnaissance mission

  • Missile interception

  • LEO to GEO transfer

  • Earth to Mars mission

  • Walking from this stage to the exit door…

Motivation (contd.)

  • What path (trajectory) did the launch vehicle follow?
  • What makes a trajectory feasible?
  • What are the desired properties of a trajectory?
  • What makes a trajectory optimal?

Optimisation Preliminaries

Simple Example

\[z = 3 (1-x)^2 \exp{(-(x^2) - (y+1)^2)} - 10 (x/5 - x^3 - y^5) \exp{(-x^2 - y^2)} - 1/3\exp{(-(x+1)^2 - y^2)}\]
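This surface is MATLAB's well-known peaks function; written out directly it can be evaluated or plotted with any tool. A Python sketch (the slides' own snippets use Julia; the evaluation point is arbitrary):

```python
import numpy as np

# Direct transcription of the peaks surface from the slide
def peaks(x, y):
    return (3 * (1 - x)**2 * np.exp(-x**2 - (y + 1)**2)
            - 10 * (x / 5 - x**3 - y**5) * np.exp(-x**2 - y**2)
            - (1.0 / 3.0) * np.exp(-(x + 1)**2 - y**2))

z0 = peaks(0.0, 0.0)   # at the origin only the first and third terms survive
```

At the origin the value reduces to \((3 - 1/3)e^{-1} = (8/3)e^{-1}\), a quick consistency check on the transcription.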

Gradient Optimisation

  • Steepest Descent

\[\begin{align*} \mathbf{x}_{k+1} &= \mathbf{x}_k - \alpha \nabla f(\mathbf{x}_k) \end{align*}\]

where \(\alpha\) is the step size.
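The update above can be sketched in a few lines of Python (the slides' own snippets use Julia); the quadratic test function and the step size are illustrative assumptions:

```python
import numpy as np

# Steepest descent with a fixed step size alpha:
#   x_{k+1} = x_k - alpha * grad f(x_k)
def gradient_descent(grad, x0, alpha=0.1, iters=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - alpha * grad(x)
    return x

# f(x, y) = (x - 1)^2 + 4 y^2 has its minimum at (1, 0)
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 8.0 * x[1]])
x_star = gradient_descent(grad_f, [5.0, 3.0])
```

Note the role of \(\alpha\): for this function the largest curvature is 8, so the iteration diverges once \(\alpha \geq 2/8 = 0.25\).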

Gradient Descent Path

  • Newton’s Method

\[\begin{align*} \mathbf{x}_{k+1} &= \mathbf{x}_k - \alpha \left( \nabla^2 f(\mathbf{x}_k) \right)^{-1} \nabla f(\mathbf{x}_k) \end{align*}\]
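A minimal Python sketch of the Newton update with \(\alpha = 1\) (pure Newton); the Rosenbrock function used as a test case is an illustrative assumption, with its gradient and Hessian written out by hand:

```python
import numpy as np

# Newton iteration: x_{k+1} = x_k - alpha * (hess f)^{-1} grad f
def newton(grad, hess, x0, alpha=1.0, iters=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - alpha * np.linalg.solve(hess(x), grad(x))
    return x

# Rosenbrock: f(x, y) = (1 - x)^2 + 100 (y - x^2)^2, minimum at (1, 1)
def grad_f(x):
    return np.array([-2.0 * (1 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
                     200.0 * (x[1] - x[0]**2)])

def hess_f(x):
    return np.array([[2.0 - 400.0 * (x[1] - 3.0 * x[0]**2), -400.0 * x[0]],
                     [-400.0 * x[0], 200.0]])

x_star = newton(grad_f, hess_f, [-1.2, 1.0])
```

Solving the linear system with the Hessian (rather than inverting it) is the standard implementation of the \(\left(\nabla^2 f\right)^{-1} \nabla f\) step.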

Convex and Non-Convex Problems

  • A convex function is defined as

\[f(\lambda \mathbf{x} + (1-\lambda) \mathbf{y}) \leq \lambda f(\mathbf{x}) + (1-\lambda) f(\mathbf{y})\]

  • A convex set is one in which the line segment joining any two points of the set lies entirely within the set

  • A convex optimisation problem is one in which the objective function is convex and the feasible design space defined by the constraints is convex.

  • For convex problems, every local minimum is a global minimum (and it is unique if the objective is strictly convex)

  • Most trajectory optimisation problems are non-convex
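The defining inequality can be probed numerically; a Python sketch that samples random point pairs and mixing weights \(\lambda\) (the two test functions and the sampling interval are illustrative assumptions):

```python
import numpy as np

# Check f(l x + (1 - l) y) <= l f(x) + (1 - l) f(y) on random samples
rng = np.random.default_rng(0)

def violates_convexity(f, lo=-2.0, hi=2.0, trials=1000):
    for _ in range(trials):
        x, y = rng.uniform(lo, hi, size=2)
        lam = rng.uniform()
        if f(lam * x + (1 - lam) * y) > lam * f(x) + (1 - lam) * f(y) + 1e-12:
            return True   # found a counterexample: f is not convex
    return False

convex_ok = not violates_convexity(lambda x: x**2)               # convex
nonconvex_flagged = violates_convexity(lambda x: np.sin(3 * x))  # non-convex
```

A sampled check like this can only disprove convexity, never prove it, but it is a handy smoke test.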

Gradient Optimisation

  • Convex/Non-convex, Nonlinear/Quadratic programming, Local/Global minimum

  Feature                        Nonlinear Programming             Quadratic Programming
  -----------------------------  --------------------------------  ----------------------------------------
  Convexity                      Generally non-convex              Convex (positive semidefinite Hessian)
  Algorithm type                 Iterative                         Iterative
  Convergence                    Only a local minimum guaranteed   Global minimum guaranteed
  Sensitivity to initial guess   Yes                               No
  Computational cost             Expensive                         Efficient

Summary

  • Gradient optimisation is a powerful tool
  • We prefer convex problems
  • Trajectory optimisation problems are originally non-convex, but can often be convexified, for example by linearising the dynamics

Introduction

Introduction

  • State of a system is a set of variables that describe the system (completely) at a given time \(\mathbf{x}(t)\)
  • A trajectory is the path followed by a moving object, governed by
    • Kinematics: \(\frac{d\mathbf{r}}{dt} = \mathbf{U}\)
    • Dynamics: \(\frac{d\mathbf{U}}{dt} = \frac{\mathbf{F}}{m}\)
  • Non-autonomous systems have time-varying dynamics usually controlled by external inputs \(\mathbf{u}(t)\).

Problem Statement

The Bolza form of the trajectory optimisation problem is:

\[ \mathbf{u}^* = \mathop{\mathrm{arg\,min}}_{\mathbf{u}} J \]

where the cost function \(J\) is defined as

\[J = \ell_f(\mathbf{x}(t_f)) + \int_{t_0}^{t_f} \ell_p(\mathbf{x}(t), \mathbf{u}(t), t) \,dt\]

  • \(\ell_p(.)\) is the path objective (cost) function (e.g. fuel consumption rate)
  • \(\ell_f(.)\) is the boundary objective function (e.g. final velocity error)

Problem Statement (more elaborate)

\[ \begin{align*} \min_{\mathbf{u}(\cdot)} \quad & \ell_f (\mathbf{x}(t_f)) + \int_{t_0}^{t_f} \ell_p(\mathbf{x}(t),\mathbf{u}(t), t)\, dt \\ \text{subject to}\quad & \dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t),\mathbf{u}(t)),\\ & \mathbf{x}(t_0) = \mathbf{x}_0, \\ & \mathbf{g}(\mathbf{x}(t), \mathbf{u}(t), t) \leq 0, \\ & \mathbf{h}(\mathbf{x}(t), \mathbf{u}(t), t) = 0 \quad \forall\, t\in[t_0, t_f] .\end{align*} \]

  • \(\mathbf{f}(.)\) is nonlinear dynamics
  • \(\mathbf{g}(.)\) are inequality constraints (e.g. thrust limits, aerodynamic limits)
  • \(\mathbf{h}(.)\) are equality constraints (e.g. position at a given time)
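To make the Bolza cost concrete, here is a Python sketch evaluating \(J\) on a sampled trajectory; the double-integrator trajectory, the target state, and the control-effort path cost \(\ell_p = u^2\) are illustrative assumptions:

```python
import numpy as np

# A sampled trajectory under constant unit acceleration
t = np.linspace(0.0, 1.0, 101)
u = np.ones_like(t)      # control input
v = t.copy()             # velocity = integral of u
p = 0.5 * t**2           # position = integral of v

x_target = np.array([0.5, 1.0])            # desired (position, velocity) at tf
l_p = u**2                                 # path cost samples
l_f = (p[-1] - x_target[0])**2 + (v[-1] - x_target[1])**2   # boundary cost

# J = l_f(x(tf)) + integral of l_p, via trapezoidal quadrature
J = l_f + np.sum(0.5 * (l_p[1:] + l_p[:-1]) * np.diff(t))
```

Here the trajectory happens to hit the target exactly, so \(J\) reduces to the integral term alone.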

Optimal Control Vs. Trajectory Optimisation

Retro-Propulsion Landing

RPL Problem

\[\begin{align*} \frac{d r}{d t} &= U \sin\gamma \\ \frac{d s}{d t} &= U \cos\gamma \\ \frac{d U}{d t} &=-\left( \frac{T \cos\beta+D}{m}+\frac{\sin\gamma}{r^2 }\right) \\ \frac{d \gamma }{d t} &= - \left( \frac{T \sin\beta}{m U} +\frac{\cos\gamma}{r^2 U} \right) \\ \frac{d m}{d t} &= -\frac{T}{I_{sp}} \\ \end{align*}\]

(Point Mass, non-rotating Earth, 2D)

(\(\beta\) - Thrust vector angle, \(\gamma\) - Flight path angle)

Source: Shashank Tomar (URSC)

RPL Problem

  • State variables (\(\mathbf{x}\)): Radius vector (\(r\)), Downrange distance (\(s\)), Velocity (\(U\)), Flight path angle (\(\gamma\)) and mass (\(m\))

  • Control variables (\(\mathbf{u}\)): Thrust magnitude (\(T\)) and thrust vectoring angle (\(\beta\))

  • Objective (\(J\)): Minimise fuel consumption, i.e. maximise the final mass (\(J = -m(t_f)\))

  • Inequality Constraints (\(\mathbf{g}\)): \(-10^\circ \leq \beta \leq 10^\circ\), \(0.3\,T_{max} \leq T \leq T_{max}\)

  • Equality Constraints (\(\mathbf{h}\)): \(r(t_f) = 1\), \(s(t_f) = s_f^*\), \(U(t_f) =U_f^*\), \(\gamma(t_f) =-\pi/2\), \(\beta(t_f) = 0\)

  • Initial conditions: \(r(t_0) = 1 + \frac{100}{R_{e}}\), \(s(t_0) = 0\), \(U(t_0) = 0\), \(\gamma(t_0) = 0\), \(m(t_0) = m_0\)
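The dynamics above translate directly into a state-derivative function \(\mathbf{f}(\mathbf{x}, \mathbf{u})\); a Python sketch (the drag value and the nondimensional \(I_{sp}\) below are illustrative assumptions, since the slides leave their scaling unspecified):

```python
import numpy as np

# Right-hand side of the nondimensional RPL dynamics from the slides
def rpl_dynamics(x, u, Isp=300.0, D=0.0):
    r, s, U, gamma, m = x    # radius, downrange, speed, flight path angle, mass
    T, beta = u              # thrust magnitude, thrust vector angle
    return np.array([
        U * np.sin(gamma),                                      # dr/dt
        U * np.cos(gamma),                                      # ds/dt
        -((T * np.cos(beta) + D) / m + np.sin(gamma) / r**2),   # dU/dt
        -((T * np.sin(beta)) / (m * U)
          + np.cos(gamma) / (r**2 * U)),                        # dgamma/dt
        -T / Isp,                                               # dm/dt
    ])

# Sanity check: unpowered horizontal flight at r = 1 -- gravity only
# rotates the velocity vector (dgamma/dt = -1); it does not slow it down
xdot = rpl_dynamics(np.array([1.0, 0.0, 1.0, 0.0, 1.0]), (0.0, 0.0))
```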

Constraints

  • Maximum dynamic pressure
    • Dynamic pressure depends on velocity and, through the density (\(\rho = \rho_{SSL} e^{-h/H}\), \(H = 8.4\) km), on altitude

\[q = \frac{1}{2} \rho U^2\]

  • Maximum aerodynamic heating

\[\dot{q} = \frac{1}{2} \rho U^3 C_{\text{heat}}\]

where \(C_{\text{heat}}\) is the heat transfer coefficient.

  • Maximum acceleration
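These path-constraint quantities are cheap to evaluate along a trajectory; a Python sketch using the exponential atmosphere above (the sea-level density and the heat transfer coefficient are illustrative assumptions):

```python
import numpy as np

rho_SSL = 1.225    # sea-level density, kg/m^3 (illustrative)
H = 8.4e3          # scale height, m (from the slides)
C_heat = 1e-4      # heat transfer coefficient (illustrative)

def density(h):
    return rho_SSL * np.exp(-h / H)

def dynamic_pressure(h, U):
    return 0.5 * density(h) * U**2          # q = 1/2 rho U^2

def heating_rate(h, U):
    return 0.5 * density(h) * U**3 * C_heat  # qdot = 1/2 rho U^3 C_heat

q = dynamic_pressure(10e3, 500.0)  # e.g. at 10 km altitude, 500 m/s
```

Because \(q\) scales with \(U^2\) and \(\dot{q}\) with \(U^3\), the heating constraint typically dominates at higher speeds.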

Solution Methods

Solution Methods

\[ \min_{u} J(u), \quad J(u) = u^2 + 2 \]

Optimal solution \(u^* = 0\) follows from the first-order optimality condition \(\frac{dJ}{du} = 0\).

In trajectory optimisation, \(\mathbf{u}(.)\) is a function and the problem is more complex.

Code
digraph G {
  variables[
    label = "Variables";
    shape = oval;
  ];
  kkt[
    label = "Karush-Kuhn-Tucker Conditions";
    shape = oval;
  ];

  functions[
    label = "Functions";
    shape = oval;
  ];
  pmp[
    label = "Pontryagin's Minimum Principle";
    shape = oval;
  ];

  variables -> kkt [ label = "Calculus" ];
  functions -> pmp [ label = "Calculus of Variations" ];
}


  • Analytical solutions possible for simple problems
  • For complex problems, numerical methods are used

Direct methods

  • Discretize the state vector and/or control input to form a nonlinear programming problem

  • Advantages:

    • Easy to implement
    • Can handle complex problems
  • Disadvantages:

    • Computationally expensive
    • May not converge to global minimum

Indirect methods

  • Solve the necessary conditions for optimality (Pontryagin’s Minimum Principle)
  • Advantages:
    • Converges to global minimum
    • Computationally efficient
  • Disadvantages:
    • Complex to implement
    • May not handle complex problems

Direct Transcription

Direct Transcription

Convert continuous functions (\(\mathbf{x}(t),\,\mathbf{u}(t)\)) to discrete variables (\(\mathbf{x}[n],\, \mathbf{u}[n]\))

For example, \(u(t) = \sin(t)\) can be approximated by a piecewise constant function:

Code
# This program plots the sine function and a piecewise constant approximation
using Plots

x = 0:0.01:π
y = sin.(x)

x1 = 0:π/10:π
y1 = sin.(x1)

plot(x, y, label="", xlabel="t", ylabel="u(t)")
# Now we draw horizontal lines in each segment of the piecewise constant approximation
for i in 1:length(x1)-1
    plot!([x1[i], x1[i+1]], [y1[i], y1[i]], color="red", label="", linestyle=:dash, linewidth=2)
    plot!([x1[i+1], x1[i+1]], [y1[i], y1[i+1]], color="red", label="", linestyle=:dash, linewidth=2)
    plot!([x1[i], x1[i+1]], [y1[i+1], y1[i+1]], color="green", label="", linewidth=2, linestyle=:dot)
    plot!([x1[i], x1[i]], [y1[i], y1[i+1]], color="green", label="", linewidth=2, linestyle=:dot)
end
plot!()

Piecewise constant approximation of a sine function

Direct Transcription (Linear System)

\[ \begin{align*} \min_{\mathbf{u}(\cdot)} \quad & \ell_f (\mathbf{x}(t_f)) + \int_{t_0}^{t_f} \ell_p(\mathbf{x}(t),\mathbf{u}(t), t)\, dt \\ \text{subject to}\quad & \dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}\mathbf{u}(t),\\ & \mathbf{x}(t_0) = \mathbf{x}_0, \\ & \mathbf{g}(\mathbf{x}(t), \mathbf{u}(t), t) \leq 0, \\ & \mathbf{h}(\mathbf{x}(t), \mathbf{u}(t), t) = 0 \quad \forall\, t\in[t_0, t_f] .\end{align*} \]

\[\begin{align*} \min_{\mathbf{x}[\cdot],\mathbf{u}[\cdot]} \quad & \ell_f(\mathbf{x}[N]) + \sum_{n=0}^{N-1} \ell_p(\mathbf{x}[n],\mathbf{u}[n]) \\ \text{subject to}\quad & \mathbf{x}[n+1] = \mathbf{C}\mathbf{x}[n] + \mathbf{D}\mathbf{u}[n] , \quad \forall\, n\in[0, N-1] \\ & \mathbf{x}[0] = \mathbf{x}_0 \\ & + \text{additional constraints}. \end{align*}\]

where \(\mathbf{C}= e^{\mathbf{A}\Delta t}\) and \(\mathbf{D}= \mathbf{A}^{-1}\left(e^{\mathbf{A}\Delta t} - \mathbf{I}\right)\mathbf{B}\).
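This zero-order-hold discretisation can be checked numerically; a Python sketch with toy values of \(\mathbf{A}\), \(\mathbf{B}\), and \(\Delta t\), using scipy.linalg.expm for the matrix exponential and a fine Euler integration as reference:

```python
import numpy as np
from scipy.linalg import expm

# Toy stable system xdot = A x + B u
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
dt = 0.1

C = expm(A * dt)                              # C = e^{A dt}
D = np.linalg.solve(A, C - np.eye(2)) @ B     # D = A^{-1} (e^{A dt} - I) B

# One discrete step under constant input u
x0 = np.array([1.0, 0.0])
u = 0.7
x_zoh = C @ x0 + D.ravel() * u

# Reference: fine forward-Euler integration of the ODE over [0, dt]
x = x0.copy()
n = 20000
h = dt / n
for _ in range(n):
    x = x + h * (A @ x + B.ravel() * u)
```

Note that the exact hold formula requires \(\mathbf{A}\) to be invertible; singular \(\mathbf{A}\) needs the series form of the integral instead.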

Direct Transcription

\[ \begin{align*} \min_{\mathbf{u}(\cdot)} \quad & \ell_f (\mathbf{x}(t_f)) + \int_{t_0}^{t_f} \ell_p(\mathbf{x}(t),\mathbf{u}(t), t)\, dt \\ \text{subject to}\quad & \dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t),\mathbf{u}(t)),\\ & \mathbf{x}(t_0) = \mathbf{x}_0, \\ & \mathbf{g}(\mathbf{x}(t), \mathbf{u}(t), t) \leq 0, \\ & \mathbf{h}(\mathbf{x}(t), \mathbf{u}(t), t) = 0 \quad \forall\, t\in[t_0, t_f] .\end{align*} \]

\[\begin{align*} \min_{\mathbf{x}[\cdot],\mathbf{u}[\cdot]} \quad & \ell_f(\mathbf{x}[N]) + \sum_{n=0}^{N-1} \ell_p(\mathbf{x}[n],\mathbf{u}[n]) \\ \text{subject to}\quad & \mathbf{x}[n+1] = \mathbf{x}[n] + \int_{t[n]}^{t[n+1]} \mathbf{f}(\mathbf{x}(t), \mathbf{u}(t)) dt, \quad \forall\, n\in[0, N-1] \\ & \mathbf{x}[0] = \mathbf{x}_0 \\ & + \text{additional constraints}. \end{align*}\]

Direct Transcription

So what has happened?

  • Infinite dimensional problem has been converted to a finite dimensional problem
  • But the resulting nonlinear optimisation problem is still very large
    (roughly \(N \times (\dim \mathbf{x} + \dim \mathbf{u})\) design variables)
  • It is a non-convex problem
  • It is a highly nonlinear problem
  • Mostly requires good initial guess to converge to the global minimum
  • Computationally expensive

Most importantly, it requires us to integrate the dynamics over each time interval. For sufficient accuracy, we need a high-order integrator, which is computationally expensive.

Direct Shooting (Linear System)

\[\begin{align*} \min_{\mathbf{u}[\cdot]} \quad & \ell_f(\mathbf{x}[N]) + \sum_{n=0}^{N-1} \ell_p(\mathbf{x}[n],\mathbf{u}[n]) \\ \text{subject to}\quad & \mathbf{x}[0] = \mathbf{x}_0 \\ & + \text{additional constraints}. \end{align*}\]

where,

\[\begin{align*} \mathbf{x}[1] =& \mathbf{C}\mathbf{x}[0] + \mathbf{D}\mathbf{u}[0] \\ \mathbf{x}[2] =& \mathbf{C}(\mathbf{C}\mathbf{x}[0] + \mathbf{D}\mathbf{u}[0]) + \mathbf{D}\mathbf{u}[1] \\ \mathbf{x}[n] =& \mathbf{C}^n\mathbf{x}[0] + \sum_{k=0}^{n-1} \mathbf{C}^{n-1-k}\mathbf{D}\mathbf{u}[k]. \end{align*}\]
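The state elimination above can be verified directly: rolling out the closed-form expression must agree with stepping the recursion. A Python sketch with toy matrices (all values are illustrative assumptions):

```python
import numpy as np

C = np.array([[1.0, 0.1], [0.0, 0.9]])
D = np.array([[0.005], [0.1]])
x0 = np.array([1.0, 0.0])
u = [0.3, -0.2, 0.1]
N = len(u)

# Closed form: x[N] = C^N x0 + sum_k C^{N-1-k} D u[k]
xN_closed = np.linalg.matrix_power(C, N) @ x0
for k in range(N):
    xN_closed = xN_closed + (np.linalg.matrix_power(C, N - 1 - k) @ D).ravel() * u[k]

# Step-by-step recursion for comparison
x = x0.copy()
for uk in u:
    x = C @ x + D.ravel() * uk
```

The powers \(\mathbf{C}^n\) in the closed form are exactly where the ill-conditioning discussed below enters: any eigenvalue of \(\mathbf{C}\) larger than one is amplified exponentially with \(n\).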

Comparison

The table below compares the direct transcription and direct shooting methods.

  Method                 Advantages                  Disadvantages
  ---------------------  --------------------------  ----------------------------
  Direct transcription   Easy to implement           Computationally expensive
  Direct shooting        Computationally efficient   Numerically ill-conditioned

Ill Conditioning

Consider the matrix

\[ A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix} \]

Code
# This program defines a 3x3 matrix A and reports the condition numbers of A
# and of exp(A*t) for t = 1, ..., 10. It shows that the matrix exponential
# becomes increasingly ill-conditioned as the horizon grows.

using LinearAlgebra

A = [1 0 0; 0 2 0; 0 0 3]

println("Condition number of A: ", cond(A))

for t in 1:10
    println("Condition number of exp(A*", t, "): ", cond(exp(A*t)))
end
Condition number of A: 3.0
Condition number of exp(A*1): 7.38905609893065
Condition number of exp(A*2): 54.598150033144236
Condition number of exp(A*3): 403.4287934927351
Condition number of exp(A*4): 2980.9579870417283
Condition number of exp(A*5): 22026.465794806718
Condition number of exp(A*6): 162754.79141900392
Condition number of exp(A*7): 1.2026042841647768e6
Condition number of exp(A*8): 8.886110520507872e6
Condition number of exp(A*9): 6.565996913733051e7
Condition number of exp(A*10): 4.851651954097903e8

Direct Local Collocation

Direct Local Collocation

  • Aim is to reduce the number of function evaluations (\(\mathbf{f}(.)\)) required per iteration
  • \(\mathbf{x}\) needs to be smoother than \(\mathbf{u}\)
  • A particularly useful approximation is piecewise linear approximation for \(\mathbf{u}\) and a cubic spline approximation for \(\mathbf{x}\)

Source: Russ Tedrake’s notes

Direct Local Collocation

\[\begin{align*} t_{c,k} &= \frac{1}{2}\left(t_k + t_{k+1}\right), \qquad h_k = t_{k+1} - t_k, \\ \mathbf{u}(t_{c,k}) &= \frac{1}{2}\left(\mathbf{u}(t_k) + \mathbf{u}(t_{k+1})\right), \\ \mathbf{x}(t_{c,k}) &= \frac{1}{2}\left(\mathbf{x}(t_k) + \mathbf{x}(t_{k+1})\right) + \frac{h_k}{8}\left(\dot{\mathbf{x}}(t_k) - \dot{\mathbf{x}}(t_{k+1})\right)\\ \dot{\mathbf{x}}(t_{c,k}) &= -\frac{3}{2h_k}\left(\mathbf{x}(t_k) - \mathbf{x}(t_{k+1})\right) - \frac{1}{4} \left(\dot{\mathbf{x}}(t_k) + \dot{\mathbf{x}}(t_{k+1})\right). \end{align*}\]

So the new optimisation problem becomes

\[ \begin{align*} \min_{\mathbf{x}[\cdot],\mathbf{u}[\cdot]} \quad & \ell_f(\mathbf{x}[N]) + \sum_{n=0}^{N-1} h_n \ell(\mathbf{x}[n],\mathbf{u}[n]) \\ \text{subject to}\quad & \dot{\mathbf{x}}(t_{c,n}) = f(\mathbf{x}(t_{c,n}), \mathbf{u}(t_{c,n})), & \forall n \in [0,N-1] \\ & \mathbf{x}[0] = \mathbf{x}_0 \\ & + \text{additional constraints}. \end{align*} \]

  • Integration is accurate to \(O(h^3)\) with only 2 function evaluations per segment
  • We could go for higher order approximations for both \(\mathbf{x}\) and \(\mathbf{u}\)
  • We could play with the number of collocation points
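The collocation-point formulas can be sanity-checked on a known trajectory; a Python sketch using \(x(t) = \sin t\), where the cubic-Hermite reconstruction of the midpoint state and derivative should match the true values to high order in the segment length \(h\):

```python
import numpy as np

h = 0.2
xk, xk1 = np.sin(0.0), np.sin(h)     # endpoint states
xdk, xdk1 = np.cos(0.0), np.cos(h)   # endpoint derivatives

# Midpoint state and derivative from the collocation formulas
x_mid = 0.5 * (xk + xk1) + (h / 8.0) * (xdk - xdk1)
xd_mid = -(3.0 / (2.0 * h)) * (xk - xk1) - 0.25 * (xdk + xdk1)

err_x = abs(x_mid - np.sin(h / 2.0))
err_xd = abs(xd_mid - np.cos(h / 2.0))
```

For this \(h\) both errors come out below \(10^{-5}\), using only the endpoint values and derivatives of the segment.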

Direct Global Collocation

Global Polynomial Approximation

  • Till now we saw piecewise local approximations
  • But we could use global approximations

\[x(t) \approx p_x(t) = \sum_{i=0}^{N} a_i\phi_i(t) \]

where \(\phi_i(t)\) are basis functions and \(a_i\) are spectral coefficients.

Common choices of basis functions are:

  • Legendre polynomials
  • Lagrange polynomials
  • Chebyshev polynomials

  • Legendre and Chebyshev polynomials are orthogonal (Chebyshev with respect to the weight \(w(t) = 1/\sqrt{1-t^2}\)); Lagrange polynomials instead interpolate at a chosen set of nodes

\[ \int_{-1}^{1} \phi_i(t)\, \phi_j(t)\, w(t)\, dt = \delta_{ij} \]

so that, for an orthonormal basis, the spectral coefficients are

\[ a_i = \int_{-1}^{1} x(t)\, \phi_i(t)\, w(t)\, dt \]
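The payoff of a global basis is the rapid decay of the coefficients \(a_i\) for smooth functions; a Python sketch using NumPy's Chebyshev helpers (`numpy.polynomial.chebyshev`), with \(e^t\) as an illustrative smooth function:

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb

# Degree-10 Chebyshev interpolant of exp(t) on [-1, 1]
coeffs = Cheb.chebinterpolate(np.exp, 10)

# Maximum approximation error on a dense grid
t = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(Cheb.chebval(t, coeffs) - np.exp(t)))
```

For analytic functions the coefficients fall off geometrically, which is why a handful of spectral design variables can represent an entire state history.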

Global Chebyshev Approximation

  • Chebyshev polynomials are particularly useful for global approximation

\[ T_n(x) = \cos(n \cos^{-1}(x)) \]

  • They are orthogonal over \([-1,1]\) with respect to the weight \(1/\sqrt{1-x^2}\)
Code
# This program plots the first 5 Chebyshev polynomials of the first kind

using Plots

# This function defines the chebyshev polynomial of the first kind
function chebyshev(n, x)
    if n == 0
        return 1
    elseif n == 1
        return x
    else
        return 2*x*chebyshev(n-1, x) - chebyshev(n-2, x)
    end
end

x = -1:0.01:1

plot(x, chebyshev.(0, x), label="T_0(x)", xlabel="x", ylabel="T_n(x)")
plot!(x, chebyshev.(1, x), label="T_1(x)")
plot!(x, chebyshev.(2, x), label="T_2(x)")
plot!(x, chebyshev.(3, x), label="T_3(x)")
plot!(x, chebyshev.(4, x), label="T_4(x)")

Chebyshev polynomials of the first kind

Global Polynomial Approximation

Pseudospectral Methods

So the dynamics \(\dot{x}(t) = f(x(t), u(t))\) can be enforced approximately by driving the residual

\[ \epsilon(t) = \dot{x}(t) - f(x(t), u(t)) \approx \dot{p}_x(t) - f(p_x(t), p_u(t)) \]

to zero.

Tau and Galerkin Methods

  • Set Residual orthogonal to the basis functions

Advantages

  • Near-spectral accuracy with few design variables
  • Fast convergence
  • Elegant mathematical formulation
  • hp-adaptive pseudospectral methods

RPL Problem Solutions

How to handle uncertainties?

Uncertainties

  • Many uncertainties in life
    • Winds
    • Uncertainty in the inertial and aerodynamic properties
    • Uncertainty in the control input
    • Uncertainty in the initial conditions
  • How to handle this?
    • Offline mode: Perform robust trajectory optimisation
    • Online mode: Perform trajectory optimisation in real-time

Robust Trajectory Optimisation

  • Consider the uncertainties as a part of the optimisation problem
  • For example, consider the wind as a disturbance in the dynamics
  • The optimisation problem becomes

\[\begin{align*} \min_{\mathbf{x}[\cdot],\mathbf{u}[\cdot]} \quad & \ell_f(\mathbf{x}[N]) + \sum_{n=0}^{N-1} \ell_p(\mathbf{x}[n],\mathbf{u}[n]) \\ \text{subject to}\quad & \mathbf{x}[n+1] = \mathbf{x}[n] + \int_{t[n]}^{t[n+1]} \mathbf{f}(\mathbf{x}(t), \mathbf{u}(t)) dt + \boldsymbol{\omega}, \quad \forall\, n\in[0, N-1] \\ & \mathbf{x}[0] = \mathbf{x}_0 \\ & + \text{additional constraints}. \end{align*}\]

where \(\boldsymbol{\omega}\) is the disturbance.

Online Trajectory Optimisation

  • Use the measurements from the sensors to update the trajectory
  • The optimisation problem is solved in real-time
  • It has to be (very) fast
  • So simplify the nonlinear program into a series of quadratic programs

Online Trajectory Optimisation

  • Deep neural networks have been shown to be effective at solving the trajectory optimisation problem in real time

  • The neural network is trained offline using a large dataset

  • The neural network is used to predict the control input for a given state

  • The neural network is updated online using the measurements from the sensors

Code
digraph G {
    rankdir=LR;
    node [shape=box];
    x [shape=plaintext, label=< <b>x</b> >];
    u [shape=plaintext, label=< <b>u</b> >];
    box [label="Neural Network"];

    x -> box [label="input"];
    box -> u [label="output"];
}


  • Other approaches are Reinforcement Learning, model predictive control, etc.

Summary

Summary

  • Trajectory optimisation is essential for optimising the performance of a launch vehicle

  • Classical as well as modern methods are available

  • Direct transcription, direct shooting, direct local collocation, global collocation, etc.

  • Uncertainties can be handled using robust trajectory optimisation or online trajectory optimisation

  • Online trajectory optimisation is the future

References

References

  1. John T. Betts, Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, SIAM, 2010

  2. Matthew Kelly, “An Introduction to Trajectory Optimization: How to Do Your Own Direct Collocation”, SIAM Review, 2017

  3. Russ Tedrake, “Underactuated Robotics: Algorithms for Walking, Running, Swimming, Flying, and Manipulation”, Chapter 10, 2024. link

Questions?