I began week 6 by planning different ways I could make a robot follow a line. The most obvious was to have the light sensor measure colour, and steer in different directions based on which colour it was detecting (black or white). However, after some experimentation, I could not get this approach to work reliably. My second idea was to use the built-in ‘Reflected Light Intensity’ measurements the robot could make, so I began creating a program based on this idea, working by myself as usual (despite asking some of the others to help me numerous times).
The problem I encountered straight away was that the measurements varied wildly. The returned value on the white paper ranged between 40 when the sensor was in the robot’s shadow and 70 when the robot was in direct light. The value on the black line was consistently around 8. The value on the edge of the line (the important measurement) varied between 20 and 40, again depending on whether the sensor was in the robot’s shadow. The basic idea for the program was to make the robot turn one direction (say right) when the light intensity matched white, and the other direction (left) when the reading matched black. When the reading was that of the edge of the line, the robot would continue in a straight line. Having no idea how to implement this smoothly, I looked for similar programs on online forums. One idea (from this video) was to connect the readings directly to the speed of each wheel motor. However, this required some modification of the values, as the original values made the robot drive far too quickly to measure consistently. So, following the program demonstrated in the video, I added the required mathematical processes (inverting and halving the values to accommodate our robot’s motors, which due to the design are mounted backwards), and the robot now seems to follow the line. The first iteration of this code is pictured below.
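The wheel-speed mapping described above can be sketched in ordinary Python. The names and constants here are illustrative only; the real program was built from EV3 graphical blocks, and our exact speed values were found by trial and error:

```python
def motor_speeds(reflected, target=50, base_speed=25):
    """Map a calibrated reflected-light reading (0 = black, 100 = white)
    to left/right motor speeds for following the edge of the line.

    The difference between the reading and the edge value (target)
    steers the robot: brighter than target turns one way, darker turns
    the other. The values are halved to keep the robot slow enough to
    measure consistently, and inverted because our robot's motors are
    mounted backwards.
    """
    error = reflected - target      # positive on white, negative on black
    turn = error / 2                # halved steering correction
    left = -(base_speed + turn)     # inverted for the backwards motors
    right = -(base_speed - turn)
    return left, right
```

On the edge itself (`reflected == 50`) both wheels get the same speed and the robot drives straight; on pure white or pure black the speeds diverge and the robot curves back toward the edge.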
The custom block at the start is a calibration module which calibrates the light sensor’s values to be 0 on black and ~100 on white. Unfortunately, this needs to be done every time the program is run. The calibration program is pictured below.
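The calibration step amounts to a linear rescaling of the raw readings. A minimal sketch of the same idea in Python (the raw black and white values below are the rough figures measured earlier, not exact sensor constants):

```python
def make_calibrator(black_raw, white_raw):
    """Return a function that rescales raw sensor readings so that
    black_raw maps to 0 and white_raw maps to 100, clamped to [0, 100]."""
    span = white_raw - black_raw

    def calibrate(raw):
        scaled = (raw - black_raw) * 100 / span
        return max(0.0, min(100.0, scaled))

    return calibrate

# Example using the approximate readings from our testing:
calibrate = make_calibrator(black_raw=8, white_raw=70)
```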
After much deliberation over the state of the program, I made the executive decision to change the basis of the program. The current program’s reliance on the IR sensor made it unreliable; the IR sensor often returned a range of different values when measuring its proximity to an object, even when it hadn’t moved since its previous measurement. This meant that the robot fairly regularly failed to run, and during the execution of the program, simply drove into the wall (and due to the tall, top-heavy design of the robot, fell over).
This issue, combined with the fact that the robot had no way of making precise 90 degree turns, meant the current plan and robot would not work together. What’s more, we had jury-rigged the light sensor onto one of the robot’s legs. This meant parts of the robot’s tracks and cables often interfered with the reading, and the sensor itself was mounted off-centre. Because of all these problems, we made the decision to rebuild the robot into a more usable design, one that positioned the light sensor more fittingly. The new design is the EV3MEG design. It still allows us access to the IR sensor, but gives us a more stable and usable light sensor armature, and has movable arms.
In other news, no more progress has been made on the YouTube videos, and the group member who rarely ever shows up didn’t show all week.
At the start of Week 4, we began by allocating rough “roles” to group members. Without naming any names, two members were assigned to filming/editing the YouTube videos, two members were assigned to develop the idea for the game, and another member and I were to improve upon the current program (i.e. make it simpler, more efficient, etc.). Upon completion of these tasks, alongside the tasks set on Moodle, we would then reconvene in order to create the next part of the program. Or at least that was the plan.
I began by testing the program from the week before. I tested the colour sensor with a simple coloured piece of paper which the robot would drive over and then follow its designated instructions. The test can be seen in the second section of the following video:
The next part of the program was the maze navigation via the IR sensor. The general idea was that if the robot detected a wall, it would rotate to the right by 90 degrees (as demonstrated in the video above). If it detected another wall immediately after, it would then rotate 180 degrees, therefore going left from its original starting direction. This worked fine in concept; however, the EV3 software is extremely limiting, and there was no easy way to make the robot turn exactly 90 degrees. The robot could only turn based on wheel rotations, motor degrees, or for a set time period at a set speed. This meant that to force it to turn exactly 90 degrees, I would have to measure the exact number of wheel rotations, motor degrees, or seconds it took to turn 90 degrees. I did a few tests, modified the number a little, and eventually got a rough estimate. I simplified the program for the test, removing any extra functions so that we could ensure that the IR sensor’s program worked as intended. The test program can be seen below.
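The motor degrees needed for a 90 degree turn can also be estimated geometrically rather than by trial and error alone: in a spin turn, each wheel travels along an arc of the circle whose diameter is the distance between the wheels. A sketch in Python, with illustrative dimensions (not our robot's measured values):

```python
import math

def motor_degrees_for_turn(turn_deg, track_width_mm, wheel_diameter_mm):
    """Estimate the motor degrees each wheel must rotate (in opposite
    directions) to spin the robot in place by turn_deg degrees.

    Each wheel travels an arc of length pi * track_width * (turn/360);
    dividing by the wheel circumference and converting back to degrees
    gives the motor rotation required.
    """
    arc_mm = math.pi * track_width_mm * (turn_deg / 360)
    wheel_circumference_mm = math.pi * wheel_diameter_mm
    return arc_mm / wheel_circumference_mm * 360

# Hypothetical example: 120 mm between tracks, 56 mm wheels
degrees_needed = motor_degrees_for_turn(90, 120, 56)
```

In practice track slippage means the computed figure is only a starting point, which still needs the kind of test-and-adjust tuning described above.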
At the end of the week, I checked in to see how much the others had done. The team assigned to creating the YouTube videos (one of whom didn’t show up to either session this week) seemed more interested in adding music to the current videos and creating “quirky” names for the upcoming videos than in actually filming anything useful. The planning team had drawn up a plan for a maze, which was useful, although based on the size of the robot, the maze would be about 10ft x 20ft. I had to remind them we probably only had about a quarter of that at most.
The first task for week 3 was to produce a flow chart for our game, to aid us in programming the game within the software. We began by making a simple flowchart detailing all the processes the robot would need to go through in order to complete the game; however, we did not include individual programming paths for each process. Instead, I plan to split each process up into its own flowchart, and treat it as its own program which can be run separately from the main program. The hope is that this will make each process easier to program and potentially easier to implement, although it may cause complications when attempting to assemble the entire game as a whole program.
I began by creating a flowchart for a simple program which would utilize the robot’s colour sensor. The program would make the robot move while no colour was detected; upon detecting blue beneath the colour sensor, it would fire its weapon, and upon detecting red it would stop. I also wanted the program to be rerunnable with a press of the touch sensor, as the current system meant the program needed to be re-selected on the main brick display each time; the touch sensor provides a simpler method of input.
The image above depicts the first, simplified iteration of the process. It is designed to function as a single ‘module’, in the sense that it can be copied and pasted into other programs and should function the same as it does on its own. We uploaded the program to the robot and tested it. It worked as intended, with one small issue: while the robot was executing the function triggered by one colour (e.g. blue), the colour sensor could not detect other colours (e.g. red) and run their functions. This meant the robot simply skipped colours it had already passed over while executing the functions triggered by another colour. We recorded a video of the test, and it is to be uploaded to the YouTube channel.
In the second session, I expanded upon the program, adding an operation allowing the robot to count the shots it has fired; this will be used to track the remaining ammunition. I also added a function whereby pressing the touch sensor while the program is running stops the program, and the program also stops if it runs out of ammunition. This second case is temporary, as there is currently no designated path or area for the robot to take once it has run out of ammunition, but the path exists and can be built upon.
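The control flow of this second iteration can be summarised as a small loop over sensor events. This is plain Python standing in for the graphical blocks, with live sensor readings replaced by a hypothetical list of events, so the logic can be checked without the robot:

```python
def run_game(events, ammo=5):
    """Simulate the second-iteration program on a list of sensor events.

    'blue' fires the weapon (using one shot), 'red' or a touch-sensor
    'press' stops the program, and any other event means keep driving.
    The program also stops early once the ammunition runs out.
    Returns (shots_fired, reason).
    """
    shots = 0
    for event in events:
        if event in ('red', 'press'):
            return shots, 'stopped'
        if event == 'blue':
            shots += 1
            if shots == ammo:            # temporary: no path exists yet
                return shots, 'out_of_ammo'
    return shots, 'done'
```

The `ammo=5` default and the event names are illustrative; the real program reads the colour and touch sensors directly rather than a prepared list.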
Above is the second iteration of the program. Due to a team member’s absence, I have been unable to test this program as I have not had access to the robot.
We began by modifying the robot’s design to better suit our needs; we moved the colour sensor from the ‘shoulder’ area of the robot down to the leg. We did this to allow the robot to read colour prompts laid out on the floor of the maze. We also removed several aesthetic pieces in order to make the main brick of the robot more accessible. With the modifications to the robot complete, we were then able to begin learning how the software controls the robot, and start planning out our game around the modules available inside the software. However, we may be required to modify the robot again in the future.
I began learning about the Lego Mindstorms development environment by reading through the EV3 Help page development guides, which give detailed tutorials on how to use the basic modules to operate the robot’s sensors and display. Alongside these were modules for basic maths and logic, most of which I understood through what I had learnt in the 121COM and 124MS modules.
I had hoped that other members of my team would make an effort to also learn the program, even just to get a better understanding of the limitations of the software. These hopes seem to have gone unanswered, however; as of the beginning of week 3, none of the other team members seem to have learnt any of the program. This is evident in their design plans, as several elements of the design do not appear, at first glance, to be possible given the constraints of the Lego software, as opposed to an actual programming language.
After a brief introduction to the first ALL project: Lego Robot Game, we began by reading through the Assessment Specifications to get a good idea of the level of complexity our game required in order to achieve high marks.
The first Activity Brief detailed some of the requirements which would make the game complex. We discussed these, and formulated a basic plan for a game based around some of these requirements. The general basis of the game is that the robot utilizes an infrared sensor and a colour sensor to navigate its way around a maze. Colour sensor cues would indicate targets; the robot would fire its weapon at them, keep a record of its ammunition count, and return to a set point in order to collect more ammunition. With this in mind, we selected the EV3RSTORM Lego Mindstorms EV3 design on which to base our robot. Upon receiving the Mindstorms robot kit, we proceeded to assemble the robot following the EV3RSTORM design.