Tuesday, 18 March 2014

Week 7 - Lesson 6

Date:

14/3-2014 (round 1), 17/3-2014 (round 2)

Duration of activity: 9.30-13.00 (round 1), 9.30-17.00 (round 2)

Group members participating:

Benjamin, Christina, Levi

Goal for the lab session:

Complete all the exercises for this week's lab session [1].

Plan for the activities: Go through the exercises one by one, completing them to the extent possible.



Results obtained during lab session:


Vehicle 1
We tried different approaches to map the raw sound values to the motors. First we tried our own custom approach, where we converted the raw value to a percentage of the maximum possible value ((value/1023)*100). We then added a minPower constant to this percentage, so the car would drive a little even when there was no noise. A minimal sketch of this mapping is shown below.
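A minimal sketch of this mapping, with our own names (MIN_POWER is a hypothetical constant, and raw is assumed to be a sound reading in the range 0-1023; the actual program differs):

public class SoundToPower {
    static final int MIN_POWER = 20;   // hypothetical baseline so the car always moves a little

    // Map a raw sound reading (0-1023) to a motor power: percentage of the maximum plus the baseline.
    static int toPower(int raw) {
        int percentage = (raw * 100) / 1023;
        return Math.min(100, MIN_POWER + percentage);
    }
}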


A video that shows the car driving forward, and slowing down as it reaches a loud sound source (and eventually stopping) is seen below.

Furthermore we tried another mapping: instead of mapping directly to motor powers from -100 to 100, we checked whether the value was below or above 50, and the further it was from 50, the faster the car would drive forward or backward. So if there was no noise, it would calmly drive forward, but if there were loud noises it would drive backwards (like a little scared girl). A small sketch of this mapping follows.
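A similarly small sketch of the second mapping, with our own names, assuming pct is the sound level as a percentage (0-100):

public class SoundAroundMidpoint {
    // Quiet readings (below 50) give forward power, loud readings (above 50) give backward power,
    // scaled by the distance from the midpoint 50. The result lies roughly in -100..100.
    static int toSignedPower(int pct) {
        return (50 - pct) * 2;
    }
}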

We then switched to the Braitenberg-style approach from [2] and tried to use that code. It worked fine when mapping values between 0 and 100 to the motors. However, when we extended the range down to -100, so that the car should be able to go backwards, we noticed that the motors themselves made so much noise that the measured sound level never got low enough to make the car drive backward.


Vehicle 2
Light sensors
We mounted two light sensors on the front of the car to experiment with vehicles 2a and 2b, where vehicle 2a has an excitatory connection and 2b has an inhibitory connection.

With the excitatory connection the car should follow the light, according to the mantra ‘the more light, the more motor power’. The code for this can be found here [3].
The normalize function used in the program defines the mapping between the sensor range and the motor power; in our case, raw values of 0-1023 read from the sensor are mapped to motor powers from 0 to 100.

The way this is normalized is as follows, and the code can be found here [4].
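As a rough reconstruction of the idea only (our own names, not necessarily identical to the code in [4]), such a normalize function with a minPower floor could look like this:

public class Normalizer {
    static final int MIN_POWER = 20;   // hypothetical floor; the motors always run with at least minPower

    // Map a raw light reading (0-1023) to a motor power (0-100), never below MIN_POWER.
    static int normalize(int raw) {
        int power = (raw * 100) / 1023;
        return Math.max(MIN_POWER, power);
    }
}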


A video that shows the behavior of the excitatory connection can be seen below. It shows how the car turns towards the side where the light is.
A video that shows the behavior of the inhibitory connection can be seen below. It shows the car driving on a white surface, slowly turning left towards a box with a dark cavity. The car continues to drive deeper into the box as it gets darker further into the box. When the car reaches the bottom wall of the box it drives more slowly, but doesn’t stop driving. This is because the motors will always run with at least the ‘minPower’ value.



Moving light source

We didn’t have more than one vehicle to test the program ‘LightFollower’ on, but as the video (fig. 5 in the lab session notes) shows, vehicles with excitatory connections will follow the light. If the light is placed on another moving vehicle, the light-following vehicle will follow the light source, and therefore also the vehicle carrying the lamp.


Light condition samples
As we could not make sense of Dean's approach [2] (MAX_LIGHT and MIN_LIGHT will always end up as the same value according to his code, and if we then try to normalize using those two values with his previous code, we end up dividing by zero, which we can't have), we decided to make our own approach.
In our approach we take 100 samples from each sensor. In each sample set we find the minimum and maximum values, and we then use the smallest and the largest of these across the two sensors as our MIN_LIGHT and MAX_LIGHT respectively. That way our algorithm takes the lighting context into consideration.
This does, however, mean that MAX_LIGHT and MIN_LIGHT can sometimes end up very close to each other, in which case small variations in the sensors' raw values make a large difference in the power, making the car drive in odd directions. A sketch of the sampling is shown below.
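A minimal sketch of this sampling-based calibration, with our own names (not the actual program), using the readNormalizedValue() readings from the two light sensors:

import lejos.nxt.LightSensor;

public class LightCalibration {
    static int minLight = Integer.MAX_VALUE;
    static int maxLight = Integer.MIN_VALUE;

    // Sample both sensors 100 times and track the overall smallest and largest readings.
    static void calibrate(LightSensor left, LightSensor right) {
        for (int i = 0; i < 100; i++) {
            int l = left.readNormalizedValue();
            int r = right.readNormalizedValue();
            minLight = Math.min(minLight, Math.min(l, r));   // smallest sample seen -> MIN_LIGHT
            maxLight = Math.max(maxLight, Math.max(l, r));   // largest sample seen -> MAX_LIGHT
        }
    }

    // Normalize a later reading against the measured lighting context (0-100).
    static int normalize(int value) {
        return 100 * (value - minLight) / (maxLight - minLight);
    }
}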


We did not make any attempt to optimize this.
By making the min and max values change depending on the measured light, we make the robot take its lighting context into consideration. It may not be important for this project, but if we want more precise movement regardless of whether it is dark or bright, and when moving between darkness and brightness, this is a way to do it.

Below is a video in darkness, where we open the door to a bright room, and the robot has to take the new context into consideration. We do not know why the robot actually stops again after it has measured its new, brighter context, however we chose not to pursue this further.


Ultrasonic sensors
In the video [5] a car with two ultrasonic sensors mounted at the front, with some distance between them, drives around on a floor, avoiding objects as it comes closer to them. It seems to work by turning left when the sensor on the right senses an obstacle, and vice versa. At one point it drives into an object, as the car apparently doesn't turn enough to avoid hitting the object with the mounted sensor, and when it hits something it just keeps on driving. At another point it drives towards a wall without correcting itself by turning, and therefore drives into the wall.

One explanation for why it bumps into things and doesn't stop when it runs into something could be that the sensors don't register the distance to objects correctly at all times. Another could simply be that the robot takes both sensors into consideration when measuring distance, so that when one of the sensors does not detect any obstacle, the robot still drives forward. A possible improvement could be to change the distance between the sensors, placing them closer together, and see how this affects the behavior.

We have built the car from the video and implemented code following the mantra of the inhibitory connection: ‘less distance to the object from one sensor means more power to the motor’ (right sensor connected to the left motor). We started with approximately the same distance between the sensors as in the video. A picture of our construction is seen here:
The code for the program that the car runs using two ultrasonic sensors can be found here [6].
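As a rough sketch of the idea only (names and constants are ours, not necessarily the code in [6]), the crossed mapping could look like this, using the course Car helper class:

import lejos.nxt.Button;
import lejos.nxt.SensorPort;
import lejos.nxt.UltrasonicSensor;

public class UltrasonicAvoider {
    public static void main(String[] args) throws Exception {
        UltrasonicSensor leftUs = new UltrasonicSensor(SensorPort.S1);
        UltrasonicSensor rightUs = new UltrasonicSensor(SensorPort.S4);
        int minPower = 30;   // baseline so the car can still turn when both distances are small

        while (!Button.ESCAPE.isDown()) {
            int dl = leftUs.getDistance();   // distance in cm, 255 when nothing is detected
            int dr = rightUs.getDistance();
            // A closer obstacle seen by the right sensor gives more power to the left motor, and vice versa.
            int leftPower = minPower + (255 - dr) * 70 / 255;
            int rightPower = minPower + (255 - dl) * 70 / 255;
            Car.forward(leftPower, rightPower);
            Thread.sleep(50);
        }
        Car.stop();
    }
}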
A video that shows the behaviour of the car is shown below.
We tried placing the sensors more towards the middle/towards each other (so they were approximately 5 cm closer together) but that didn’t seem to affect the car much.
From the video it can be seen that the car stops in some places, where we suspect that the distances from both sensors to the obstacle reflecting the ultrasonic signal are so small that the motors don't run. However, if there is just a slight difference in the readings of the two sensors, adjusting the minPower variable so that the car has just enough power to turn around will make it move away from the obstacle and drive in another direction. This behaviour (with minPower raised from 30 to 50) can be seen in the video below:
At this stage we already feel that our program runs better on the car than the one in the video, so we will not improve on it further.

Vehicle 3
We mounted two ultrasonic sensors and two light sensors on the car, as the picture below shows.

In order to be able to compare the two very different value ranges, we calculated percentages rather than using the “raw” values. The reason we write “raw” in quotes is that we were unsuccessful in getting the raw data from the ultrasonic sensor, so we had to use the getDistance() method instead, which returns a distance from 0 to 255 cm. To get the percentage we simply divided the value by the maximum possible value (either 255 or 1023), so the output would always lie between 0 and 100.
Then we toyed a bit around with the values to see what would make the car avoid obstacles at best.
We tried applying the higher of the two values, from the light sensor and the ultrasonic sensor, to the motor. However, that turned out badly, as the car kept driving into things, as you can see in the video below:

Then we tried something a bit more complicated, where we inverted the light sensor values so they ran from 100 down to 0, and let the ultrasonic sensor values run from 0 up to 100. We then subtracted the values of the first pair of sensors from those of the second pair, and applied the results to the motors. This turned out to be more useful, since the car avoided most obstacles, as shown in the video below (a sketch of the idea follows the next paragraph):
It is worth noting, however, that most obstacles in our testing area were thin and glossy table legs, which can be difficult for the sensors to measure properly.
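A small sketch of the combination described above, with our own names and scaling (the actual program may differ):

public class CombinedMapping {
    // Combine a raw light reading (0-1023) and an ultrasonic distance (0-255 cm) into a motor power:
    // the light value is inverted to run from 100 down to 0, the distance is scaled from 0 up to 100,
    // and the light contribution is subtracted from the distance contribution.
    static int combine(int lightRaw, int distanceCm) {
        int invertedLight = 100 - (lightRaw * 100 / 1023);
        int distancePct = distanceCm * 100 / 255;
        return distancePct - invertedLight;
    }
    // One value per side, e.g. Car.forward(combine(leftLight, leftDist), combine(rightLight, rightDist));
}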

Conclusion

We have gone through all the exercises and experimented with different sensors and set-ups.

References

Tuesday, 11 March 2014

Week 6 - Lesson 5

Date: 7/3-2014 (round 1), 11/3-2014 (round 2)

Duration of activity: 10.15-16.30 (round 1), 9.30-13.00 (round 2)

Group members participating:

Benjamin, Christina, Levi

Goal for the lab session:

Complete all the exercises for this week's lab session [1].

Plan for the activities: Go through the exercises one by one, completing them to the extent possible.

Results obtained during lab session:



Self-balancing robot with light sensor


We made the changes to the car as the lesson states, using the images linked on the lesson website as inspiration.
There are two differences between the car in the images and ours. The first is that our car has a larger battery pack, so we needed to lengthen the parts holding the car's sides together.


The second is that we added a “leg” to the back of the car, so that it wouldn’t tip all the way over.

With the car built, we loaded it with the code given on the lesson website. We then proceeded to try the car's new balancing skills. Following Hurbain's [2] conditions for the balancing car's optimal performance, we tested in both dark and bright areas.

It was observed that the car performed the same in the dark room as it did in the bright one.
Furthermore, it seemed that sometimes when we picked the NXT up high from the table, the measured offset could be wrong when it was put back on the table, and the device would then instantly tip to one side.


Choice of parameters on-the-fly
We extended the PCcarController program and customized it to send our own parameters. When we connect to the NXT, we first retrieve the current P, I, D, Scale and Offset values from the program, which gives us a baseline for modification. We can then set new values for these parameters and send them to the NXT, which reacts instantly. A sketch of the PC-side sending code is shown after the screenshots below.
Screenshot: waiting for connection


Screenshot: connected
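As a rough sketch of the PC-side part only (class name and protocol details are our assumptions, not the actual PCcarController extension), sending a set of parameters over Bluetooth with the leJOS PC API could look like this:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import lejos.pc.comm.NXTConnector;

public class ParameterSender {
    public static void main(String[] args) throws Exception {
        NXTConnector conn = new NXTConnector();
        if (!conn.connectTo("btspp://")) {   // connect to the first NXT found over Bluetooth
            System.err.println("No NXT found");
            return;
        }
        DataInputStream in = new DataInputStream(conn.getInputStream());
        DataOutputStream out = new DataOutputStream(conn.getOutputStream());

        // Baseline: the NXT program is assumed to send its current P, I, D, Scale and Offset first.
        float p = in.readFloat(), i = in.readFloat(), d = in.readFloat();
        float scale = in.readFloat(), offset = in.readFloat();
        System.out.println("Current: " + p + " " + i + " " + d + " " + scale + " " + offset);

        // Send modified values; the NXT side is assumed to read five floats in the same order and apply them.
        out.writeFloat(35); out.writeFloat(50); out.writeFloat(30);
        out.writeFloat(25); out.writeFloat(496);
        out.flush();
        conn.close();
    }
}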


One thing we learned from this: always check that there is an ActionListener attached to your newly created button. Always.


Setting up the Bluetooth connection with instant effect made our lives as developers much easier, since we no longer have to re-upload the whole program when we're just tweaking small numbers. A video showing the car in action while we try to find the best values is shown here:

We found some values that seemed to work: P = 35, I = 50, D = 30, Scale = 25 and Offset = 496. We implemented a DataLogger on the balancing robot, collecting the measured light values.

The chart for the values is shown here:
The start and the very end of the measurements are affected by our hand when first holding and letting go of the robot, and then again when holding the robot just before stopping it.
From about the time of 3800 ms, the robot is balancing freely.
It is clear that the measured light values oscillate between ‘low’ and ‘high’, about 470 to 530, because the robot is trying to balance at around 496. This matches the behaviour of the robot, as it balances by ‘leaning’ forwards then backwards and continuing to do so to stay upright.
The robot fell backwards at the end and was then stopped. We suspect this happens at about 5800 ms, but we don't know for sure. Maybe it is not until about 7000 ms, as there is still an oscillation in the light measurements from 5800 ms to about 7000 ms.


When experimenting with the robot and different sets of values, we also tried to place the light sensor about 5 mm lower on the robot to get it closer to the table, as we suspected the ambient light to affect it a great deal and that we could maybe minimize the effect by doing this.
A video that shows how the robot is balancing (with the values as seen below in the screenshot) is here:


Values for the program run on the robot

We found that changing some parameters also had an effect on the others, so in order to get the car to balance all parameters had to be changed relative to each other. This is the reason that the offset is 542 (as pictured above) in this setup, as opposed to the previous assignment, where the offset value was 496. It seemed to us that the offset wasn't a stationary value, as it had to be different every time we did a trial run.



Self-balancing robots with color sensor


We tried replacing the light sensor with the color sensor, using the same setup otherwise. In the code we read the color sensor's normalized light value (cs.readNormalizedLightValue()) instead of the light sensor's readNormalizedValue().
A video that shows the robot not balancing at all is here:



Later we realized that we should perhaps have used the color sensor's getRawLightValue() method instead, because the light sensor's readNormalizedValue() returns a raw light value.
Unfortunately, by the time we realized this and wanted to test it, we had already disassembled the robot to try other things.
We therefore will not pursue getting the robot to balance with the color sensor.


Balancing robot with light sensor - wheels connected
We built the car according to the schematics [3], but had no access to a third motor, so we only built the lower part of the car.

We then tested it with the light sensor and tried to find variables that would make it ‘stand’ but were unable to find good enough values for this to happen. As such, we were unable to pursue this any further.


Self-balancing robots with gyro sensor
We did not have a gyro sensor available, so we could not go through with this exercise. However, we could imagine that using a gyroscope instead of a light-based sensor would remove the context-based measuring, making the robot able to balance on a surface of any color without having to calibrate between surfaces. Furthermore, we might also be able to make the robot balance on uneven surfaces, for example on a hill.




Conclusion


We didn’t get to do the exercise using a gyroscope, because we didn’t have access to a gyroscope.
We didn’t get the robot to balance perfectly, but tried experimenting with the values for the different parameters to make it almost balance.


References

[1] Lesson 5 - http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson5.dir/Lesson.html
[2] http://www.philohome.com/nxtway/nxtway.htm
[3] http://www.nxtprograms.com/NXT2/segway/steps.html

Tuesday, 4 March 2014

Week 5 - Lesson 4

Date: 28/2-2014 (round 1), 4/3-2014 (round 2)

Duration of activity: 10.15 - 16.30 (round 1), 10.30 - 13.00 (round 2)


Group members participating:

Benjamin, Christina, Levi


Goal for the lab session:

Complete all the exercises for this week's lab session [0].


Plan for the activities: Go through the exercises one by one, completing them to the extent possible.


Results obtained during lab session:

Black White Detection


We made a program that uses and tests the class BlackWhiteSensor.java. After the car had been calibrated, we placed it with the light sensor over different areas on a board with black, white, green and blue colors.


When placing the car around the board and measuring on the colors black, white and green, we got very different readings depending on the flatness of the paper board, and perhaps also on how dirty it was. The spans of readings for each of the colors are as follows:

Black: 34-40

Green: 44-50

White: 57-60



A video that shows the calibration and testing of the program is here:


Line Follower with Calibration
The program LineFollowerCal.java is an application of the BlackWhiteSensor class.
The behavior when running the calibrated LineFollower is shown in the video below:

ThreeColorSensor with Calibration


We have made a program ThreeColorSensor [1] that can detect three colors: black, green and white.
As shown in the black/white exercise, the three colors have threshold ranges at some distance from each other. Furthermore, we know that the green threshold range lies between black and white. The code utilizes this knowledge: instead of calibrating on all three colors, we only calibrate on green. For the green color we define a threshold range by measuring a value during calibration and then setting the top threshold to the measured value + 2 and the bottom threshold to the measured value - 2. When a reading is above the top threshold, the color is white, and when it is below the bottom threshold, the color is black. A sketch of this classification is shown below.
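Our own reconstruction of that idea (not the exact code of ThreeColorSensor [1]) could look like this:

public class GreenBandClassifier {
    static final int MARGIN = 2;
    int greenHigh, greenLow;

    // greenValue is the light reading measured on green during calibration.
    void calibrate(int greenValue) {
        greenHigh = greenValue + MARGIN;   // readings above this are classified as white
        greenLow = greenValue - MARGIN;    // readings below this are classified as black
    }

    String classify(int light) {
        if (light > greenHigh) return "white";
        if (light < greenLow) return "black";
        return "green";                    // within the narrow band around the calibrated value
    }
}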



The program is able to detect the color green

Line Follower that stops in a Goal Zone

We copied LineFollowerCal, made it use our custom ThreeColorSensor, and added an else if statement that checks whether a green value was measured. If so, the car drives forward with 0 speed and thus 'floats' to a stop (instead of braking instantly). This way there is a chance that the car will float forward onto a new color, in case it measured a value within the green threshold range on the edge between the black and the white areas. A sketch of this is shown below.
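A small sketch of that extra branch in the main loop, with our own method names (sensor is assumed to be our ThreeColorSensor and power the normal driving power):

// Sketch of the green-zone handling added to the copied line follower (our names, not the actual code).
if (sensor.black()) {
    Car.forward(power, 0);        // turn one way on black
} else if (sensor.white()) {
    Car.forward(0, power);        // turn the other way on white
} else if (sensor.green()) {
    Car.forward(0, 0);            // 'float' to a stop: power 0 lets the car roll out gently
    // Using Car.stop() here instead gives the immediate stop described further below.
}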



A video that shows the program running with it first being calibrated to the green color in the goal zone is shown here:




In the video it can be seen that the car stops within about 1 second after driving into the light green goal area. When the car stops, the light sensor is about 5 cm into the green zone. This is because the car makes the ‘float stop’ instead of an immediate stop, which makes it come to a halt a bit later.


A video that shows the car running with the same program, except that the car is stopped (with Car.stop()) when it reaches the green zone, is shown here:

It is clear that the car stops much faster when reaching the green zone area than with the float stop.
Another effect of making a full stop instead of a ‘float stop’ is that the car runs less smoothly on the black line on the white area, because on the border between black and white it sometimes gets a reading that the program interprets as a green area. When this happens the car makes a full stop, before driving a bit forward again once it gets a new reading of either black or white. This makes the driving more jagged and less smooth. A video that shows the car running jaggedly is here:

Sometimes, though, no new reading of black or white arrives before the car has stopped completely, causing it to stand still and not start driving again unless someone gently redirects it away from the edge.
A video that shows this is here:


We tried putting the car on another track with a black line and some green zones, but the green color on this track was different from the light green that the car had been calibrated with.
A video of this is here:
The video shows that the car doesn't stop when reaching a green area. As we suspected, it doesn't react to this shade of green because it was calibrated to another one.

The problem is certainly not the particular shade of green being calibrated against as such, as the program ran fine (and the car stopped nicely upon encountering a green area) when it was calibrated with the green color on the track it was going to run on. A video of this is here:

PID Line Follower


We implemented a PID regulator to make the car run more smoothly. The code was greatly inspired by the pseudocode for a PID controller described in this week's lesson literature. We used trial and error with different values for pGain, iGain and dGain in order to get a satisfying result.
Below is the code for the program:

// BlackWhiteSensor and Car are the helper classes used earlier in the course.
import lejos.nxt.Button;
import lejos.nxt.LCD;
import lejos.nxt.SensorPort;

public class LineFollowerPID {
    public static void main(String[] aArg) throws Exception {
        float pGain = 5.0f, iGain = 0.4f, dGain = 110.0f;   // gains found by trial and error
        float offset, integral = 0, derivative, error;
        int lastError = 0, light, tp = 100, power;          // tp is the base driving power

        BlackWhiteSensor sensor = new BlackWhiteSensor(SensorPort.S3);
        sensor.calibrate();
        offset = sensor.getThreshold();   // set point: the threshold between black and white

        LCD.clear();
        LCD.drawString("Light: ", 0, 2);
        while (!Button.ESCAPE.isDown()) {
            light = sensor.light();
            error = light - offset;               // proportional term
            integral = integral + error;          // integral term
            derivative = error - lastError;       // derivative term
            power = (int) (pGain * error + dGain * derivative + iGain * integral);
            Car.forward(tp + power, tp - power);  // steer by adding/subtracting the correction
            lastError = (int) error;
            LCD.drawInt(sensor.light(), 4, 10, 2);
            LCD.refresh();
            Thread.sleep(10);
        }
        Car.stop();
        LCD.clear();
        LCD.drawString("Program stopped", 0, 0);
        LCD.refresh();
    }
}


A video that shows how the car drives can be seen here.

Color Sensor

The three color sensor measures some values and then uses these to output which color it believes it is measuring.

Some pictures that show the sensor measuring true values are shown below.
 

The color sensor is quite sensitive to reflections from the paper. This means that small irregularities in the height of the paper that the car is driving on can make the sensor register black when it should register white, and vice versa, at some points on the paper. When many colour measurements in a row are wrong, the car starts taking off in the wrong direction and can't correct itself back to the line.
A video that shows this is here:
We thought that lowering the power with which the car runs might help, because the error wouldn't have the opportunity to accumulate as much before a correct color measurement arrived. This proved to be somewhat true: this way the car could actually follow the line for about 5 cm before taking off in the wrong direction and leaving the line.
The video shows that the car seems to start going in the wrong direction at approximately the same point on the paper. This may be due to a height change in the paper at this point.

We did try switching our color sensor for one from a different group, but instead of registering white as black, as described above, it often registered white as green, making the car come to a full stop. We haven't tried further to make the car follow a line with the three color sensor.

Conclusion
We have completed the exercises for the week and played around with the three color sensor.


References