Date: 7/3-2014 (round 1), 11/3-2014 (round 2)
Duration of activity: 10.15 - 16.30 (round 1), 9.30 - 13.00 (round 2)
Group members participating:
Benjamin, Christina, Levi
Goal for the lab session:
Complete all the exercises for this week’s lab session[1].
Plan for the activities: Go through the exercises one by one, completing them to the extent possible.
Results obtained during lab session:
Self-balancing robot with light sensor
We made the changes to the car that the lesson describes, using the images linked on the lesson website as inspiration.
  
There are two differences between the car in the images and ours. Since our car has a larger battery pack, we needed to extend the parts holding the car’s sides together. The other difference is that we added a “leg” to the back of the car, so that it wouldn’t tip all the way over.
With the car built, we loaded it with the given code from the lesson website and proceeded to try out the car’s new balancing skills. Following Hurbain’s[2] conditions for the balancing car’s optimal performance, we tested in both dark and bright areas.
Furthermore, it sometimes seemed that when we picked the NXT up high off the table, the measured offset could be wrong when we put it back down, making the device instantly tip to one side.
Choice of parameters on-the-fly
We extended the PCcarController program and customized it in order to send our own parameters. When we connect to the NXT, we first retrieve the current P, I, D, Scale and Offset values in the program, which gives us a baseline for modification. We can then set new values for the mentioned parameters, and send them to the NXT which will instantly react.
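The NXT side of this is a simple receive loop. Below is a minimal sketch of the idea; the class name, field names and message format are our own and not the actual program’s, which keeps the parameters in shared fields that the balancing loop reads on every iteration.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import lejos.nxt.LCD;
    import lejos.nxt.comm.BTConnection;
    import lejos.nxt.comm.Bluetooth;

    // Hypothetical receiver: waits for the PC, sends the current parameters
    // as a baseline, then applies new values as soon as they arrive.
    public class ParamReceiver extends Thread {
        public static volatile float p = 35, i = 50, d = 30, scale = 25, offset = 496;

        public void run() {
            LCD.drawString("Waiting for connection", 0, 0);
            BTConnection conn = Bluetooth.waitForConnection();
            LCD.drawString("Connected", 0, 1);
            try {
                // send the current values so the PC GUI has a baseline
                DataOutputStream out = conn.openDataOutputStream();
                out.writeFloat(p); out.writeFloat(i); out.writeFloat(d);
                out.writeFloat(scale); out.writeFloat(offset);
                out.flush();
                // then overwrite the parameters whenever the PC sends new ones
                DataInputStream in = conn.openDataInputStream();
                while (true) {
                    p = in.readFloat();
                    i = in.readFloat();
                    d = in.readFloat();
                    scale = in.readFloat();
                    offset = in.readFloat();
                }
            } catch (Exception e) {
                // connection closed; keep the last received values
            }
        }
    }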
On the NXT’s display we see “Waiting for connection”, and then “Connected” once the PC link is established.
One thing we learned from this: always check that there is an ActionListener attached to your newly created button. Always.
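For reference, the missing step looked roughly like this (a sketch; sendButton and sendParameters are hypothetical names, and the helper is assumed to write the five floats over the open Bluetooth stream):

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JButton;

    // ...inside the GUI setup of our extended PCcarController:
    JButton sendButton = new JButton("Send");
    // The step we forgot: without a listener the button silently does nothing.
    sendButton.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            sendParameters(); // hypothetical helper: writes P, I, D, Scale, Offset
        }
    });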
Setting up the Bluetooth connection with instant effect made our lives much easier as developers, since we no longer had to re-upload the whole program just to tweak small numbers. A video showing the car in action, while we try to find the best values, is shown here:
We found some values that seemed to work: P = 35, I = 50, D = 30, Scale = 25 and Offset = 496. We implemented a DataLogger on the balancing robot, collecting the measured light values.
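The logger idea in a minimal sketch (the actual DataLogger from the lesson code may differ; the file name, sensor port and sample rate here are assumptions):

    import java.io.DataOutputStream;
    import java.io.File;
    import java.io.FileOutputStream;
    import lejos.nxt.LightSensor;
    import lejos.nxt.SensorPort;

    // Stand-alone sketch: record (time, light) pairs to a file on the NXT,
    // to be uploaded to the PC and charted afterwards.
    public class LightLogger {
        public static void main(String[] args) throws Exception {
            LightSensor ls = new LightSensor(SensorPort.S1); // port is an assumption
            DataOutputStream log = new DataOutputStream(
                    new FileOutputStream(new File("light.log")));
            long start = System.currentTimeMillis();
            for (int n = 0; n < 1000; n++) {                 // ~10 s of samples
                log.writeInt((int) (System.currentTimeMillis() - start));
                log.writeInt(ls.readNormalizedValue());
                Thread.sleep(10);
            }
            log.close();
        }
    }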
The chart for the values is shown here:
The start and the very end of the measurements are affected by our hand when first holding and letting go of the robot, and then again when holding the robot just before stopping it.
From about the time of 3800 ms, the robot is balancing freely.
It is clear that the measured light values oscillate between ‘low’ and ‘high’, about 470 to 530, because the robot is trying to balance around 496. This matches the behaviour of the robot, as it balances by ‘leaning’ forwards, then backwards, and continues doing so to stay upright.
The robot fell backwards at the end and was then stopped. We suspect this happened at about 5800 ms, but we don’t know for sure. It may not have been until about 7000 ms, as the light measurements still oscillate between 5800 ms and roughly 7000 ms.
When experimenting with the robot and different sets of values, we also tried placing the light sensor about 5 mm lower on the robot, closer to the table, as we suspected that ambient light affected it a great deal and that mounting the sensor lower might minimize this effect.
A video that shows how the robot is balancing (with the values as seen below in the screenshot) is here:
Values for the program run on the robot
We found that changing one parameter also had an effect on the others, so to get the car to balance, all parameters had to be tuned relative to each other. This is why the offset is 542 (as pictured above) in this setup, as opposed to the previous assignment, where the offset value was 496. It seemed to us that the offset wasn’t a stationary value, as it had to be re-found for every trial run.
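This coupling makes sense given how the parameters enter the control law. As we understand the lesson’s balancing code, each loop iteration does roughly the following (a sketch with our own variable names; details such as the leaky integral may differ from the actual program):

    // One iteration of the balancing loop (sketch, not the actual lesson code)
    int error = ls.readNormalizedValue() - offset;  // offset sets the balance point
    intError = (intError + error) * 2 / 3;          // leaky integral term
    int derivError = error - prevError;
    prevError = error;
    int power = (int) (p * error + i * intError + d * derivError) / scale;
    // power is then clamped and applied to both motors

Since Scale divides the whole sum, changing it rescales P, I and D at once, and since Offset defines the error itself, every other parameter ends up being tuned relative to it.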
Self-balancing robots with color sensor
We tried replacing the light sensor with the color sensor, keeping everything else the same. In the code we read the color sensor’s normalized light value (cs.readNormalizedLightValue()) instead of the light sensor’s readNormalizedValue().
A video that shows the robot not balancing at all is here:
Later we realized that we probably should have used the color sensor’s getRawLightValue() instead, because the light sensor’s readNormalizedValue() returns a raw light value.
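In code, the suspected fix would have been a one-line change (a sketch; the sensor port is an assumption, and the method names are the ones we used):

    ColorSensor cs = new ColorSensor(SensorPort.S1); // port is an assumption

    // What we ran (the robot did not balance):
    int reading = cs.readNormalizedLightValue();

    // What we suspect would match the light sensor's raw-scale readings:
    // int reading = cs.getRawLightValue();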
Unfortunately, by the time we realized this and wanted to test it, we had already disassembled the robot to try other things.
We therefore will not pursue getting the robot to balance with the color sensor.
Balancing robot with light sensor - wheels connected
We built the car according to the schematics[3], but had no access to a third motor, so we only built the lower part of the car.
We then tested it with the light sensor and tried to find parameter values that would make it ‘stand’, but we were unable to find values good enough for this to happen. As such, we did not pursue this any further.
Self-balancing robots with gyro sensor
We did not have a gyro sensor available, so we could not carry out this exercise. However, we imagine that using a gyroscope instead of a light-based sensor would remove the dependence on the surface, making the robot able to balance on any colour of surface without calibrating between surfaces. It should also make it possible for the robot to balance on uneven surfaces, for example on a hill.
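To illustrate what we have in mind: a gyro such as the HiTechnic one reports angular velocity, so the tilt angle would have to be integrated over time and would then replace the light reading as the PID input. A minimal sketch, assuming leJOS’s GyroSensor class (the port, bias estimation and loop period are our assumptions):

    import lejos.nxt.SensorPort;
    import lejos.nxt.addon.GyroSensor;

    // Sketch: estimate the tilt angle by integrating angular velocity, then
    // feed the angle into the same PID loop instead of a light reading.
    public class GyroBalanceSketch {
        public static void main(String[] args) throws Exception {
            GyroSensor gyro = new GyroSensor(SensorPort.S2); // port is an assumption

            // Estimate the resting bias by averaging while the robot is held still
            double bias = 0;
            for (int n = 0; n < 200; n++) {
                bias += gyro.readValue();
                Thread.sleep(5);
            }
            bias /= 200;

            double angle = 0, dt = 0.010; // 10 ms loop period (assumption)
            while (true) {
                double rate = gyro.readValue() - bias; // deg/s once bias is removed
                angle += rate * dt;                    // integrate to tilt angle
                // error = angle; then the same P, I, D, Scale logic as before
                Thread.sleep(10);
            }
        }
    }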
Conclusion
We didn’t get to do the gyroscope exercise, because we didn’t have access to a gyro sensor.
We didn’t get the robot to balance perfectly, but by experimenting with the values of the different parameters we got it to almost balance.
References
[1] Lesson 5 - http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson5.dir/Lesson.html
[2] Philippe Hurbain, NXTway - http://www.philohome.com/nxtway/nxtway.htm
[3] NXT Segway building instructions - http://www.nxtprograms.com/NXT2/segway/steps.html
 