Quick Notes: getting started with the examples
==============================================
..:: What you need ::..

- a platform file describing the environment. You can go to
  the Platform Description Archive (http://pda.gforge.inria.fr/) to
  get an existing one or generate your own platform with the
  SIMULACRUM tool (see 'Download' section there).

- a hostfile. As in almost all MPI distributions, the hostfile
  lists the hosts onto which the processes will be mapped. At present,
  the format is one hostname per line. The hostnames must be present
  in the platform file.

  Note: the mapping of MPI processes (ranks) follows the order of the
  hostfile. Rank 0 is mapped to the first hostname in the hostfile,
  rank 1 to the second, etc. If n (as in -np n) is greater than the
  number l of lines in the hostfile, the mapping is done round-robin.
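  For instance, a hostfile for three processes could simply be (the
  hostnames here are placeholders; they must match hosts declared in
  your platform file):

    host1
    host2
    host3

  With -np 5 and this hostfile, ranks 3 and 4 wrap around to host1
  and host2 respectively.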


..:: Try the examples ::..
+
Go to:
# cd simgrid/examples/smpi
To compile an example:
# ../../src/smpi/smpicc bcast.c -o bcast
Then use 'smpirun' to run it:
# ../../src/smpi/smpirun -np 3 ./bcast
node 0 has value 17
node 2 has value 3
node 1 has value 3
To run it with a specific platform:
# ../../src/smpi/smpirun -np 3 -platform platform.xml -hostfile hostfile ./bcast

Note that by default, the examples use the installed version of
SimGrid, so please install it before playing with the examples, or
set the LD_LIBRARY_PATH environment variable to point to src/.libs.


What's implemented
==================

As a proof of concept, and due to lack of time, the implementation is
far from complete with respect to the MPI-1.2 specification. Here is
what is implemented so far. Please update this list if you can.

* MPI_Datatypes:
MPI_BYTE
MPI_CHAR
MPI_INT
MPI_FLOAT
MPI_DOUBLE
* MPI_Op:
MPI_LAND
MPI_SUM
MPI_PROD
MPI_MIN
MPI_MAX
* Primitives:
MPI_Init
MPI_Finalize
MPI_Abort
MPI_Comm_size
MPI_Comm_split
MPI_Comm_rank
MPI_Type_size
MPI_Barrier
MPI_Irecv
MPI_Recv
MPI_Isend
MPI_Send
MPI_Sendrecv
MPI_Bcast
MPI_Wait
MPI_Waitall
MPI_Waitany
MPI_Wtime
MPI_Reduce
MPI_Allreduce
MPI_Scatter
MPI_Alltoall