Project: Pi Based Cluster
Course: APT 4030 - Parallel Computing
In this project, we will build a cluster of identical Raspberry Pi nodes, networked together and running parallel-processing software that allows each node in the cluster to share data and computation.
Building a cluster computer powered by raspberry Pi that could be used to develop and run parallel and distributed programs. In doing so, the following goals could be achieved.
- Practical understanding of building parallel systems.
- Experimenting with different configurations to achieve better performance.
- Familiarity with the MPI (Message Passing Interface) API for parallel programming.
- Familiarity with Raspberry Pi microcomputers, useful for rapid hardware prototyping.
Hardware and software -
- Raspberry Pi Model B
- Storage - SD cards
- Ethernet cables
- Power supply
- Linux OS (Raspbian Wheezy) - there is an array of different possibilities
- MPI library - MPICH or OpenMPI
- Ethernet switch and router - one that we can use without interruption during the lab sessions
Ideally, the project will be implemented in stages, starting with configuring the first two nodes and then scaling up to add the remaining nodes.
There are many resources that we will use, including:
- Raspberry Pi Foundation: https://www.raspberrypi.org/
- Prof. Simon Cox, Making a Raspberry Pi supercomputer, University of Southampton: http://coen.boisestate.edu/ece/raspberry-pi/
Configuring the nodes: a step-by-step guide
First, configure one node fully. When this is done, it is easy to clone as many nodes as needed.
- Get the OS image: raspberrypi.org/download
After many false starts, we were content to just use Raspbian Wheezy 5.5.
- Write the image to the SD card
On Linux (assuming the card shows up as /dev/sdb):
$ sudo dd if=/media/yourMachine/Images/2015-05-05-raspbian-wheezy.img of=/dev/sdb bs=512 conv=noerror,sync
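Before running dd, double-check which device is actually the SD card; /dev/sdb above is only an example, and writing to the wrong disk destroys its contents. A quick check on the host machine:
$ lsblk
(The card is usually recognisable by its size, e.g. 8G or 16G. Unmount any of its partitions before writing, e.g. $ sudo umount /dev/sdb1.)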
- Boot the Pi
Great suspense if anything did not go as planned.
If there is an error with the card or anything else, nothing will show up on the screen; if the Pi is overwhelmed, it will take forever to boot. Raspbian is ideal because it has been tested and has a decent first-boot time.
- Configuration on first boot
These configurations can be done later with raspi-config, but are ideally done on the first boot:
- Expand the image to fill the card
- Change the password (we use: laxmi)
- Change the hostname (node1, or nodeX for the Xth node)
- Reboot
user: pi (has root privileges), password: laxmi
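If any of these steps were skipped at first boot, the same configuration menu can be reopened at any time (raspi-config ships with Raspbian):
$ sudo raspi-config
Its menu covers expanding the filesystem, changing the user password, and setting the hostname.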
Refresh the repo package lists and update the software:
$ sudo apt-get update && sudo apt-get upgrade -y
Choose your poison: I prefer having my emacs.
$ sudo apt-get install emacs
(:( you only get version 23; you will have to compile from source to use version 24)
- Refresh and update the software, if not done yet.
$ sudo apt-get update && sudo apt-get upgrade -y
- Get Fortran (strange, but we need it: MPICH builds its Fortran bindings by default)
$ sudo apt-get install gfortran
- Before getting Argonne MPICH
Resource: http://www.mpich.org/documentation/guides/
They have great resources, not just the MPICH library.
$ mkdir /home/pi/mpich3
$ cd ~/mpich3
- Get MPICH sources from Argonne
Resource: http://www.mpich.org/downloads - get the latest stable release
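For example, with wget on the Pi (the version number here is an assumption; substitute whatever the downloads page lists as the latest stable release):
$ wget http://www.mpich.org/static/downloads/3.1.4/mpich-3.1.4.tar.gz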
- Unpack them:
$ tar xfz mpichXXX.tar.gz
- Plan a clean place for the install
$ sudo mkdir /home/rpimpi/
$ sudo mkdir /home/rpimpi/mpich3-install
- Make a build directory, and go to it
$ mkdir /home/pi/mpich_build
$ cd /home/pi/mpich_build
- Configure the build
This will take a while; you can get the cards ready, or play, while the configuration takes place.
$ sudo /home/pi/mpich3/mpichXXX/configure -prefix=/home/rpimpi/mpich3-install
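As an aside: if you would rather not install gfortran at all, MPICH's configure can be told to skip the Fortran bindings (the flag below exists in MPICH 3.x; paths as above):
$ sudo /home/pi/mpich3/mpichXXX/configure -prefix=/home/rpimpi/mpich3-install --disable-fortran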
- Make
However long the last step took, this one takes even longer.
$ sudo make
- Install the files
It can take a bit of time, but nowhere near as long as the last two stages.
$ sudo make install
- Add the install location to your PATH
$ export PATH=$PATH:/home/rpimpi/mpich3-install/bin
- Note: to put this on the PATH permanently, you will need to edit .profile
and add the lines below:
# Add MPI to path (this is just a comment for later)
PATH="$PATH:/home/rpimpi/mpich3-install/bin"
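To pick up the edited .profile without logging out and back in, reload it in the current shell:
$ source ~/.profile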
- Verify that the install was successful
$ which mpicc
/home/rpimpi/mpich3-install/bin/mpicc
$ which mpiexec
/home/rpimpi/mpich3-install/bin/mpiexec
- Go to your home directory and set up a place for your first test
$ cd ~
$ mkdir mpi_first_test
$ cd mpi_first_test
- Now test MPI on a single node
$ mpiexec -f machinefile -n <number> hostname
where machinefile contains a list of IP addresses (in this case just one) for the machines
- How this is supposed to be done:
a. Get your IP address
$ ifconfig
b. Put it into a single file called machinefile
$ emacs machinefile
c. Add this line:
192.168.1.161 [or whatever the IP is ... ]
- Now test the machinefile
$ mpiexec -f machinefile -n 1 hostname
Output should be: node1 (the 'hostname')
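The same mechanism scales once more nodes are cloned and on the network: list one IP address per line in the machinefile and raise -n accordingly. A sketch (the addresses and hostnames below are assumptions for illustration):
$ cat machinefile
192.168.1.161
192.168.1.162
192.168.1.163
$ mpiexec -f machinefile -n 3 hostname
node1
node2
node3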
- A little C code using MPI on the Pi to calculate pi
Don't worry, we shall not write the C code ourselves; MPICH ships with some example codes we can run.
$ cd /home/pi/mpi_first_test
$ mpiexec -f machinefile -n 2 /home/pi/mpich3/examples/cpi
The output should be:
Process 0 of 2 is on raspberrypi
Process 1 of 2 is on raspberrypi
pi is approximately 3.1415926544231318, Error is 0.0000000008333387
This calls for a celebration! (Seriously!)
Order a bottle from Bourgogne and celebrate (and clone the node).
- Shut down
$ sudo poweroff
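Cloning the node is essentially the flashing step in reverse: with the Pi powered off, read the configured card back into an image on your Linux machine, then write that image to each fresh card. The device name and paths below are assumptions; verify the device with lsblk first, as before.
$ sudo dd if=/dev/sdb of=~/Images/node-master.img bs=512
$ sudo dd if=~/Images/node-master.img of=/dev/sdb bs=512 conv=noerror,sync
Boot each clone and give it a unique hostname (node2, node3, ...) with raspi-config.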