Video length is 16:07

Everything You Need to Know About Control Theory

From the series: Control Systems in Practice

Brian Douglas

Control theory is a mathematical framework that gives us the tools to develop autonomous systems. Walk through all the different aspects of control theory that you need to know.

Some of the concepts that are covered include:

  • The difference between open-loop and closed-loop control
  • How we use system identification to develop a model of the system
  • Why feedforward control is a straightforward method to control a system
  • How feedback control affects system stability
  • An overview of other control methods including adaptive control, optimal control, predictive control, and reinforcement learning
  • Why path planning is an essential part of control design
  • How statistical estimators like Kalman filters are used to observe system state
  • Why modeling and simulation are required for almost all control engineering

Published: 4 Oct 2022

An important question that has to be answered when you’re designing an autonomous system is how do you get that system to do what you want? How do you get a car to drive on its own? How do you manage the temperature of a building? Or how do you separate liquids into their component parts efficiently with a distillation column? And to answer those questions we need control theory. Control theory is a mathematical framework that gives us the tools to develop autonomous systems. In this video, I want to walk through everything you need to know about control theory. I hope you stick around for it. I’m Brian, and welcome to a MATLAB Tech Talk.

We can understand all of control theory using a simple diagram. And to begin, let’s just start with a single dynamical system. This system is the thing that we want to automatically control, like a building, or a distillation column, or a car. It can be really anything. But the important thing is that the system can be affected by external inputs.

And in general we can think of the inputs as coming from two different sources. There are control inputs, u, that we intentionally use to affect the system. For a car, these are things like moving the steering wheel, hitting the brake, or pressing on the accelerator pedal. And there are unintentional inputs. These are the disturbances, d, and they are forces that we don’t want affecting the system but they do anyway. These are things like wind and bumps in the road.

The inputs enter the system, interact with the internal dynamics, and then the system state, x, changes over time. So, for a car, we move the steering wheel and press the pedals, which turns the wheels and revs the engine, producing forces and torques on the vehicle, and, combined with the forces and torques from the disturbances, the car changes its speed, position, and direction. Now, if we want to automate this process, that is, we want the car to drive without a person determining the inputs, where do we go from here?
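To make this concrete, here is a minimal MATLAB sketch of inputs driving a state over time. The first-order speed model, with its time constant and gain, is a made-up stand-in for real car dynamics, not anything from the video.

    % A hypothetical first-order speed model: tau*vdot = -v + b*(u + d)
    dt  = 0.01;                         % time step (s)
    tau = 3;  b = 15;                   % assumed time constant and gain
    v   = 0;                            % state x: speed (m/s)
    for k = 1:2000
        u = 0.5;                        % control input: pedal position (0 to 1)
        d = 0.2*randn;                  % disturbance: wind, bumps in the road
        vdot = (-v + b*(u + d))/tau;    % inputs interact with the dynamics
        v = v + dt*vdot;                % and the state changes over time
    end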

And the first question is, can an algorithm determine the necessary control inputs without constantly having to know the current state of the system? Or maybe a better way of putting it is: do you need to measure where the car is and how fast it’s going in order to successfully drive the car with good control inputs? The answer is actually no, we can control a system with an open-loop controller, also known as a feedforward controller. A feedforward controller takes in what you want the system to do, called the reference, r, and generates the control signal without ever needing to measure the actual state. In this way, the signal from the reference is fed forward through the controller and then through the system, never looping back, hence the name feedforward.

For example, let’s say we want the car to autonomously drive in a straight line at some arbitrary constant speed. If the car is controllable, which means that we have the ability to actually affect the speed and direction of the car, then we could design a feedforward controller that accomplishes this. The reference “drive straight” means that the steering wheel should be held fixed at 0 degrees, and “drive at a constant speed” means depressing the accelerator pedal some non-zero amount. The car would accelerate to a constant speed and drive straight, exactly as we want.

However, let’s say we want the car to reach a specific speed, like 30 mph. We can actually still do it with a feedforward controller. But now the controller needs to know how much to depress the accelerator pedal in order to reach that specific speed. This requires knowledge about the dynamics of the system, and this knowledge can be captured in the form of a mathematical model. Developing a model can be done using physics and first principles, where the mathematical equations are written out based on your understanding of the system dynamics, or by using data and fitting a model to that data with a process called system identification. Both of these modeling techniques are important concepts to understand because, as we’ll get into, models are required for almost all aspects of control theory.

As an example of system identification, we could test the real car and record the speed it reaches given different pedal positions and then fit a mathematical model to that data; basically speed is a function of pedal position. Now for the feedforward controller itself, we could use the inverse of that model to get pedal position as a function of speed. So given a reference speed, the feedforward controller would be able to calculate the necessary control input.
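As a sketch of that identification-then-inversion idea, here is what it might look like in MATLAB; the test data below is invented for illustration, not measured from a real car.

    pedal = [0.1 0.2 0.3 0.4 0.5 0.6];      % pedal positions tested
    speed = [4.8 9.9 15.2 19.8 25.1 30.2];  % steady speeds reached (mph), made up

    p = polyfit(pedal, speed, 1);           % identified model: speed = f(pedal)

    % Feedforward controller: invert the model, pedal = f^-1(speed)
    r = 30;                                 % reference speed (mph)
    u = (r - p(2))/p(1);                    % pedal position to command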

Feedforward controllers are a straightforward way to control a system. However, as we can see, they require a really good understanding of the system dynamics, since you have to invert those dynamics in the controller, and any error in that inversion will show up as error in the system state. Also, even if you know your system really well, the environment the system is operating in needs to be predictable, so that there aren’t a lot of unknown disturbances entering the system that the controller isn’t accounting for.

Of course it doesn’t take much imagination to see that feedforward control breaks down for systems that aren’t robust to disturbances and uncertainty. I mean imagine wanting to autonomously drive a car across a city with feedforward control. Theoretically, you could map the city well enough and know your car well enough that you could essentially preprogram in all of the steering wheel and pedal commands and just let it go. If you had perfect knowledge ahead of time, then the car would execute those commands and make its way across the city unharmed.

Obviously, though, this is unrealistic. Not only are other cars and pedestrians impossible to predict perfectly, but even the smallest errors in the position and speed of your car will build over time, and eventually the car will deviate much too far from the intended path.

This is where feedback control, or closed-loop control, comes to the rescue. In feedback control, the controller uses both the reference and the current state of the system to determine the appropriate control inputs. That is, the output is fed back, making a closed loop, hence the name. In this way, if the system state starts to deviate from the reference, either because of disturbances or because of errors in our understanding of the system, then the controller can recognize those deviations, those errors, and adjust the control inputs accordingly. So, feedback control is a self-correcting mechanism.
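Here is that self-correcting mechanism in a minimal MATLAB sketch, reusing the same hypothetical speed model from before; the proportional gain is an arbitrary choice.

    dt = 0.01;  tau = 3;  b = 15;        % the same made-up speed model
    v = 0;  r = 13.4;  Kp = 2;           % reference: 30 mph is about 13.4 m/s
    for k = 1:2000
        e = r - v;                       % error: reference minus fed-back state
        u = Kp*e;                        % the control input reacts to the error
        d = 0.2*randn;                   % a disturbance the controller never modeled
        v = v + dt*(-v + b*(u + d))/tau; % yet the loop keeps pulling v toward r
    end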

I like to think of feedback as a hack that we have to employ due to our inability to perfectly understand the system and its environment. We don’t want to use feedback control, but we have to.

Feedback control is a lot more powerful, but also more dangerous, than feedforward control. The reason for this is that feedforward changes the way we operate a system, but feedback changes the dynamics of the system; it changes its underlying behavior. This is because, with feedback, the controller sets the rate of change of the state, x dot, as a function of the current state, x, and that relationship produces new dynamics.

Changing dynamics means that we have the ability to change the stability of the system. On the plus side, we can take an unstable or marginally stable system and make it more stable with feedback control. But, on the negative side, we can also make a system less stable and even unstable.
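A tiny scalar example (my own, not from the video) shows how feedback moves a pole, and with it, the stability:

    a = 1;                       % open-loop dynamics xdot = a*x: pole at +1, unstable
    k = 3;                       % feedback gain, control law u = -k*x
    open_loop_pole   = a         % +1: the state grows without bound
    closed_loop_pole = a - k     % -2: feedback produced new, stable dynamics
    % pick k = -1 instead and the pole moves to +2: feedback made it worse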

This is why a lot of control theory is focused on designing and, importantly, analyzing feedback controllers, because if you do it wrong, you can do more harm than good. And since feedback control exists in many different types of systems, the control community over the years has developed many different types of feedback controllers. There are linear controllers like PID and full state feedback that assume the general behavior of the system being controlled is linear in nature. If that’s not the case, there are nonlinear controllers like on-off controllers, sliding mode controllers, and gain scheduling. Now often, thinking in terms of linear versus nonlinear isn’t the best way to choose a controller. So, we define them in other ways as well. For example, there are robust controllers like mu synthesis and active disturbance rejection control, which focus on meeting requirements even in the face of uncertainty in the plant and environment. So, we can guarantee that they are robust to a certain amount of uncertainty.
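PID, the first controller in that list, is the industry workhorse, so here is a minimal discrete-time sketch of it on the same hypothetical speed model; the gains are placeholders, not tuned values.

    Kp = 1.5;  Ki = 0.8;  Kd = 0.05;      % placeholder gains, not tuned
    dt = 0.01;  tau = 3;  b = 15;         % the made-up speed model again
    v = 0;  r = 13.4;  integ = 0;  e_prev = r;
    for k = 1:3000
        e = r - v;                        % error between reference and state
        integ = integ + e*dt;             % integral term removes steady-state error
        deriv = (e - e_prev)/dt;          % derivative term damps the response
        e_prev = e;
        u = Kp*e + Ki*integ + Kd*deriv;   % the PID control input
        v = v + dt*(-v + b*u)/tau;        % plant update
    end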

There are adaptive controllers like extremum-seeking and model reference adaptive control, that adapt to changes in the system over time. There are optimal controllers like LQR where a cost function is created and the controller tries to balance performance and effort by minimizing the total cost.
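For a flavor of LQR, here is a sketch using the Control System Toolbox; the double-integrator model and the weights are illustrative choices on my part.

    A = [0 1; 0 0];  B = [0; 1];   % toy double integrator: position and velocity
    Q = diag([10 1]);              % cost on state error: performance
    R = 1;                         % cost on control input: effort
    K = lqr(A, B, Q, R);           % gain that minimizes the total cost
    % the optimal control law is then u = -K*x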

There are predictive controllers like model predictive control that use a model of the system inside the controller to simulate what the future state will be, and therefore what the optimal control inputs should be in order to make that future state match the reference.
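The receding-horizon idea behind MPC is easy to sketch with brute force; everything here, the model, the horizon, and the candidate inputs, is a made-up example, and real MPC solves this optimization far more efficiently.

    dt = 0.1;  tau = 3;  b = 15;  r = 13.4;  % the same hypothetical model
    v = 0;  N = 10;                          % current state, prediction horizon
    bestCost = inf;  bestU = 0;
    for u = 0:0.05:1                         % candidate (constant) pedal inputs
        vp = v;  cost = 0;
        for k = 1:N
            vp = vp + dt*(-vp + b*u)/tau;    % simulate the future with the model
            cost = cost + (r - vp)^2 + 0.1*u^2;  % track reference, penalize effort
        end
        if cost < bestCost, bestCost = cost; bestU = u; end
    end
    u_apply = bestU;    % apply the first move, then re-plan at the next step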

There are intelligent controllers like fuzzy controllers or reinforcement learning, which rely on data to learn the best controller.

And there are many others. The point here isn’t to list every control method, I just wanted to highlight the fact that feedback control isn’t just a single algorithm, but a family of algorithms. And choosing which controller to use and how to set it up, depends largely on what system you are controlling and what you want it to do.

So, what do you want your system to do? What state do you want the system to be in? What is the reference that you want it to follow? This might seem like a simple question if we’re balancing an inverted pendulum or designing a simple cruise controller for a car. The reference for the pendulum is vertical, and for the car it’s the speed that the driver sets. However, for many systems, understanding what the system should do takes some effort, and this is where planning comes in.

The control system can’t follow a reference if one doesn’t exist, and so planning is a very important aspect of designing a control system. With a self-driving car, planning has to figure out a path to the destination while avoiding obstacles and following the rules of the road. Plus, it has to come up with a plan that the car is physically able to follow, that is, one that doesn’t accelerate too fast or turn too quickly. And if there are passengers, then planning has to account for their comfort and safety. And only after the plan has been created can the controller generate the commands to follow it.

Two examples of graph-based planning methods are rapidly exploring random trees, RRT, and A*. Once again, there are too many different algorithms to name, but the important thing is that you understand you have to develop a plan that your controller will try to follow.
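To give a feel for the graph-based flavor, here is a bare-bones A* on a tiny occupancy grid; this is my own toy sketch, not a production planner.

    n = 6;  map = zeros(n);  map(2:5,3) = 1;        % grid with a wall (1 = obstacle)
    start = [1 1];  goal = [6 6];
    h = @(p) abs(p(1)-goal(1)) + abs(p(2)-goal(2)); % Manhattan distance heuristic
    g = inf(n);  g(1,1) = 0;                        % cost-so-far for each cell
    open = start;                                   % frontier, one cell per row
    moves = [1 0; -1 0; 0 1; 0 -1];
    while ~isempty(open)
        % pop the frontier cell with the lowest f = g + h
        f = arrayfun(@(i) g(open(i,1),open(i,2)) + h(open(i,:)), 1:size(open,1));
        [~, i] = min(f);  p = open(i,:);  open(i,:) = [];
        if isequal(p, goal), break; end             % cheapest path to goal found
        for m = 1:4
            q = p + moves(m,:);
            if any(q < 1) || any(q > n) || map(q(1),q(2)), continue; end
            if g(p(1),p(2)) + 1 < g(q(1),q(2))      % found a cheaper way to q
                g(q(1),q(2)) = g(p(1),p(2)) + 1;
                open(end+1,:) = q;                  % put q (back) on the frontier
            end
        end
    end
    % g(goal(1),goal(2)) now holds the length of the shortest path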

OK, so once you know what you want the system to do, and you have a feedback controller to do it, now you need to actually execute this plan. And as we know, for feedback controllers this requires knowledge of the state of the system. That is, after all, what we are feeding back. And the problem is that we don’t know the state unless we measure it, and measuring it with a sensor introduces noise. So, for our car example, we’re not feeding back the true speed of the car, we’re feeding back a noisy measurement of the speed, y, and now our controller reacts to that noise. In this way, noise in a feedback system actually affects the true state of the system, so this is one additional problem that we have to tackle with feedback control.

A second problem is that of observability. In order to feed back the state of the system, we have to be able to observe the state of the system. This requires sensors in enough places that every state that is fed back can be observed. Now, it’s important to note that we don’t have to measure every state directly, we just need to be able to observe every state. For example, if our car only has a speedometer, we can still observe acceleration by taking the derivative of the speed.
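Here is that speedometer example in MATLAB, along with the standard rank test that a model makes possible; the speed samples are invented for the sketch.

    dt = 0.1;
    v_meas = [0 1.9 4.1 5.8 8.2 9.9];   % speedometer samples (made up)
    a_est  = diff(v_meas)/dt;           % acceleration observed via the derivative

    % With a model, a rank test checks observability in general:
    A = [0 1; 0 0];  C = [1 0];         % states: [speed; acceleration], measure speed
    rank([C; C*A])                      % rank 2 = both states are observable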

So, there are two things here. We need to reduce measurement noise and manipulate the measurements in a way that allows us to accurately estimate the state of the system. State estimation is, therefore, another important area of control theory. And for this, we can use algorithms like the Kalman filter, particle filter, or even just a simple running average. Choosing an algorithm depends on which states you are directly measuring and how much noise, and what type of noise, is present in those measurements.
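As an illustration, here is a scalar Kalman-style filter smoothing a noisy speed measurement; the noise variances and the constant-speed model are assumptions made for the sketch.

    q = 0.01;  r = 0.5;                     % assumed process / measurement noise variances
    y_meas = 13.4 + sqrt(r)*randn(1,200);   % simulated noisy speedometer (true speed 13.4)
    xhat = 0;  P = 1;                       % initial estimate and its variance
    for k = 1:numel(y_meas)
        P = P + q;                          % predict: uncertainty grows
        K = P/(P + r);                      % Kalman gain: trust in model vs. sensor
        xhat = xhat + K*(y_meas(k) - xhat); % update with the noisy measurement
        P = (1 - K)*P;                      % uncertainty shrinks after the update
    end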

Now, the last major part of control theory is responsible for ensuring the system we just designed works, that it meets the requirements that we set for it. This comes down to analysis, simulation, and test. For this, we can plot data in different formats like with a Bode diagram, a Nichols chart, or a Nyquist diagram. We could check for stability and performance margins. We could simulate the system using MATLAB and Simulink. All of these tools can be used to ensure the system will function as intended.
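With the Control System Toolbox, that analysis step can look something like this; the transfer function is an arbitrary example, not the car model.

    L = tf(10, [1 4 3 0]);       % an example open-loop transfer function
    margin(L)                    % Bode plot annotated with gain and phase margins
    T = feedback(L, 1);          % close the loop with unity feedback
    step(T)                      % simulate the closed-loop step response
    isstable(T)                  % quick stability check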

And this diagram, I think, represents everything you need to know about control theory. You have to know about different control methods, both feedforward and feedback, depending on the system you’re controlling. You have to know about state estimation, so that you can take all of those noisy measurements and be able to feed back an estimate of the system state. You have to know about planning, so that you can create the reference that you want your controller to follow. You have to know how to analyze your system to ensure that it’s meeting requirements. And finally, and possibly most importantly, you have to know about building mathematical models of your system, because models are often used for every part we just covered. They are used for controller design, state estimation, planning, and analysis.

Now, I’m being a little disingenuous by saying that you need to know all of this, because nobody knows all of this stuff perfectly. But I think it’s important that you are aware of all of these different aspects of control theory because even if you are working on a large project, where different teams are responsible for each of these aspects of the design, you should still understand the big picture and how what you’re doing fits into it.

Alright, I always leave links below to other resources and references, and this video is no exception. And there are a bunch for this video since I mentioned so many different topics. And something I think is nice is that we have MATLAB Tech Talks for almost every topic I mentioned. We have feedforward, PID, gain scheduling, fuzzy logic, Kalman filters, particle filters, planning algorithms, system identification, and more. So, if there is an area of control theory that you want to learn more about, I hope you check out the links below.

And to make it easier to browse through all of them, I put together a journey at resourcium.org that organizes all of the references in this video. Again, the link to that is below as well.

This is where I’m leaving this video for now. If you don’t want to miss any future Tech Talk videos, don’t forget to subscribe to this channel. And if you want to check out my channel, Control System Lectures, I cover more control theory topics there as well. Thanks for watching, and I’ll see you next time.