Sensor Fusion

What's Wrong With Just Reading the Sensors?

To fly the drone, we’ll need to calculate its altitude and attitude- to keep it level, and at a controlled height.

 

Throughout this workshop, all variables you’ll need to use have already been declared. Use the functions to turn the data into an attitude estimate, then use the quaternion visualizer to observe the performance of the system.

 

Many sensors have flaws which prevent them from being reliable without support. To summarize:

 

Attitude:

  • The accelerometer can’t distinguish between gravity and acceleration- so it’s almost perfectly accurate until you start moving. It also can’t tell us anything about the yaw direction of the drone.

  • The gyroscope is almost perfectly accurate in the short term but will drift over time. It also can’t tell what angle the drone started at, so the drone must be perfectly level at take-off.

Altitude:

  • The LiDAR can produce odd measurements in direct sunlight, and has limited range.

  • The barometer can be disturbed by pressure changes that have nothing to do with altitude- wind, doors opening, or (worst of all) the drone’s own thrust when near the ground.

  • The accelerometer can also tell us the height by double-integrating acceleration, but we need to eliminate gravity first. On top of that, the drift which makes the gyro unreliable is present here too- except instead of increasing linearly with time, the position error grows quadratically. Even top-of-the-line IMUs costing 5-6 figures will accumulate velocity errors in the range of "supersonic" within an hour of operation.
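
To see why: a constant accelerometer bias b produces a velocity error of b·t and a position error of ½·b·t². As an illustrative (made-up) figure, a bias of just 0.01 m/s² grows into 36 m/s of velocity error and roughly 65 km of position error after an hour.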

As per usual, supporting code and tools are on GitHub.

Step 1: Applying Angular Velocities

The most important concepts to understand here are how we can use vectors and quaternions to represent different things:

Vectors:

  • A direction- this is pretty intuitive

  • The speed at which something is rotating

Quaternions:

  • The orientation of something in space
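
For intuition, a quaternion packs a rotation axis and an angle into four numbers using the standard axis-angle formula. A minimal sketch- fromAxisAngle is a hypothetical helper, and the w/x/y/z field names are an assumption about how the library lays out its quaternion type:

quaternion fromAxisAngle(vector axis, float theta)
{
  // standard axis-angle construction; assumes axis is unit length
  quaternion q;
  q.w = cos(theta / 2.0f);          // theta = 0 gives (1, 0, 0, 0): the "no rotation" quaternion
  q.x = axis.x * sin(theta / 2.0f);
  q.y = axis.y * sin(theta / 2.0f);
  q.z = axis.z * sin(theta / 2.0f);
  return q;
}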

To help you understand how a quaternion represents the orientation of the drone, the example sketch SensorFusion can be used with QuaternionVisualiser to show you, visualize even, a quaternion. Upload the sketch, and open QuaternionVisualiser (make sure you aren’t reading the serial output in the Arduino IDE or it won’t work). You should see three spheres at right angles to one another- these represent the X, Y and Z axes of the flight controller. Currently, it’s not doing very much.

 

To get things moving, set one of the components of angularVelocity to something other than zero. Upload the code again, and open the visualizer- the axes should now be spinning around a line which represents the angular velocity vector.
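
For example, a one-line change in the sketch (the x/y/z field names are an assumption- check how the vector type is declared):

angularVelocity.x = 1.0f; // spin about the X axis at 1 rad/s, roughly 57 degrees per second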

 

Experiment with various values in angularVelocity to get used to how this works. Back in my day, we considered this “fun”.

Step 2: Integration

We’ll start with the gyroscope for attitude estimation, since the gyro can do it all, albeit with limitations. The simplest way to do this is to just use applyW (W representing omega, angular velocity) to make the “attitude” quaternion spin at the same speed as the physical board. Instead of setting angularVelocity to random numbers, set it to the gyro data (there should be a comment showing how to do this). In a perfect world, we’d be done here- but there are some problems.

quaternion rotatedQuaternion = applyW(quaternion quaternionToRotate, vector angularVelocity, float timestep)

 

Rotates quaternionToRotate by the amount it would rotate in [timestep] time, given that it is rotating at [angularVelocity]. In this case, the input and output quaternion should be the same variable- so attitude updates each time the loop runs.
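
As a minimal sketch of what the loop body might look like- readGyro here is a stand-in for whatever gyro-read line the sketch’s comment shows:

void loop()
{
  angularVelocity = imu.readGyro();                       // stand-in: use the sketch's own gyro-read line
  attitude = applyW(attitude, angularVelocity, timestep); // integrate one step of rotation
}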

The issue with this is that as the IMU rotates, its XYZ axes no longer align with the world’s- for example, if the IMU is on its side, its XYZ axes no longer point in the directions they did when it was level.

 

To see this problem, rotate the air unit by twisting the USB-C cable (so that it turns about the axis of the cable) and you’ll see it rotates around the green sphere’s line- the green sphere being where the USB-C cable comes out. Now turn it so that the USB-C is pointing vertically, and rotate it around the USB-C cable again. Physically this is the same rotation, so the estimate should still spin about the green axis- but instead it will spin about the blue or red axis, because the raw gyro reading is being applied in the board’s original frame.

Step 3: Integration if it was Actually Good

Let’s make some improvements to this. We’ll use the calibration function to make the gyro more accurate, and we’ll use the last best estimate to rotate the gyro measurements into the world frame. Call the calibrateGyro method of the IMU class in the setup function, and call rotateVector on angularVelocity and attitude before we use applyW.

 

Once you’ve done this, run the visualizer again. Rotate the board around and see how well it tracks. Try moving the board very sharply - how does this affect the estimate?

bool calibrateGyro(float targetVariance, int calibrationSamples)

 

This function can be used to calibrate the gyro. It returns 1 if the target variance was met, or 0 if not. Variance is a measure of how noisy the data is- if it's very noisy, this suggests that the UAV is not stable and therefore the calibration was invalid. If you don't care about this, just set targetVariance to a very big number.

 

vector rotateVector(vector vin, quaternion qIn)

 

Returns vin rotated by qIn.
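
Putting Step 3 together, a minimal sketch- the calibration arguments are placeholder values, and readGyro is still a stand-in for the sketch’s own gyro-read line:

void setup()
{
  imu.calibrateGyro(1.0f, 500); // placeholder values: a loose variance target, 500 samples
}

void loop()
{
  angularVelocity = imu.readGyro();                        // stand-in
  vector worldW = rotateVector(angularVelocity, attitude); // body frame -> world frame
  attitude = applyW(attitude, worldW, timestep);
}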

The estimator should track the board much better now, but if it starts to drift over time there’s no way to correct it. What’s worse, if the angular velocity gets too high it will induce a permanent error- like when the board is moved violently and the rotation exceeds what the gyro can measure. This shouldn’t happen too much in practice, but it’s still good to have systems in place to account for it. Lastly, the board can’t correct for a non-level starting position.

Step 4: Finding Where Gravity Wants You to Go, and Not Doing That

Personally, this is my favourite thing drones do. Using the accelerometer, we can guess where gravity is- and therefore where “down” should be- by assuming it only measures gravity and not acceleration. We can compare this to where “down” is in our attitude estimate and measure the difference between them. If the drone isn’t accelerating, this difference tells us how big the attitude error is.

In order to smoothly reduce this difference, we need to rotate the attitude estimate in the direction which will align the measured and estimated gravity vector.

 

To find the axis to rotate around, we can use the cross product- represented by & here- on the two estimates. We’ll then normalize this vector so that its length is always 1. This gives us the axis we need to rotate around to align the two gravity estimates.

We still need to determine the speed at which to rotate- the easiest way to do this is using angleBetweenVectors(vector vin1, vector vin2), which tells us, in radians, the angle between the two vectors.

 

Since we know which direction to rotate in and how fast, we can combine them by multiplying the angle between the vectors by the error axis, which gives us a vector representing the angular velocity we should rotate attitude at.

Use applyW() to apply this angular velocity vector to attitude, and we should be able to start fixing drift.

 

The easiest way to see if your solution works is to start with the board at an angle and then open the viewer. If it’s working, you should see that the estimate is at the same angle as the physical board. If you’ve made a mistake, it will probably start to gyrate, or maybe go completely crazy.

 

This stage is fairly difficult compared to most. Use the print tools and the visualizer to see the errors, and how you're affecting them. Don't be afraid to ask committee for help if you get stuck!

First, set accelerationWorldFrame to rotateVector(acceleration, attitude).

 

Next, set errorAxis to accelerationWorldFrame & gravity.

 

Then normalize errorAxis.

 

Then set errorAxis to itself times angleBetweenVectors(accelerationWorldFrame, gravity).

 

Finally, use applyW to spin attitude by errorAxis. Use the same timestep as before.
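
Assembled into one loop body, a minimal sketch. The sensor-read lines are stand-ins, normalize is an assumed helper that returns a unit-length copy of its input, and the vector type is assumed to support * for scaling by a float- adjust to however the workshop library spells these:

void loop()
{
  // Step 3: integrate the gyro in the world frame
  angularVelocity = imu.readGyro();                                 // stand-in
  attitude = applyW(attitude, rotateVector(angularVelocity, attitude), timestep);

  // Step 4: nudge the estimate so estimated "down" matches measured "down"
  acceleration = imu.readAccel();                                   // stand-in
  vector accelerationWorldFrame = rotateVector(acceleration, attitude);
  vector errorAxis = accelerationWorldFrame & gravity;              // cross product: axis of the error
  errorAxis = normalize(errorAxis);                                 // assumed helper: unit length
  errorAxis = errorAxis * angleBetweenVectors(accelerationWorldFrame, gravity); // scale by error angle
  attitude = applyW(attitude, errorAxis, timestep);                 // rotate towards measured "down"
}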