examples/simdag/dot/simulate_dot
teshsuite/simdag/platforms/graphicator
-#########################################
-## tutorial files
-doc/gtut-files/01-bones_client
-doc/gtut-files/01-bones_server
-doc/gtut-files/02-simple_client
-doc/gtut-files/02-simple_server
-doc/gtut-files/03-args_client
-doc/gtut-files/03-args_server
-doc/gtut-files/04-callback_client
-doc/gtut-files/04-callback_server
-doc/gtut-files/05-globals_client
-doc/gtut-files/05-globals_server
-doc/gtut-files/06-logs_client
-doc/gtut-files/06-logs_server
-doc/gtut-files/07-timers_client
-doc/gtut-files/07-timers_server
-doc/gtut-files/08-exceptions_client
-doc/gtut-files/08-exceptions_server
-doc/gtut-files/09-simpledata_client
-doc/gtut-files/09-simpledata_server
-doc/gtut-files/10-rpc_client
-doc/gtut-files/10-rpc_server
-doc/gtut-files/11-explicitwait_client
-doc/gtut-files/11-explicitwait_server
+#########################################
+## touched files to track the dependencies of java examples
+examples/java/async/java_async_compiled
+examples/java/bittorrent/java_bittorrent_compiled
+examples/java/chord/java_chord_compiled
+examples/java/cloud/java_cloud_compiled
+examples/java/commTime/java_commTime_compiled
+examples/java/io/java_io_compiled
+examples/java/kademlia/java_kademlia_compiled
+examples/java/master_slave_bypass/java_master_slave_bypass_compiled
+examples/java/master_slave_kill/java_master_slave_kill_compiled
+examples/java/masterslave/java_masterslave_compiled
+examples/java/migration/java_migration_compiled
+examples/java/mutualExclusion/java_mutualExclusion_compiled
+examples/java/pingPong/java_pingPong_compiled
+examples/java/priority/java_priority_compiled
+examples/java/startKillTime/java_startKillTime_compiled
+examples/java/suspend/java_suspend_compiled
+examples/java/tracing/java_tracing_compiled
SimGrid (3.10) NOT RELEASED; urgency=low
XBT:
- * Our own implementation of getline is renamed xbt_getline.
+ * Our own implementation of getline is renamed xbt_getline, and gets
+   used even if the OS provides a getline(). This should reduce the
+ configuration complexity by using the same code on all platforms.
Java:
 * Reintegrate Java into the main archive as desynchronizing these
* Bugfix: Task.setDataSize() only changed the C world, not the value
cached in the Java world
+ SMPI:
+ * Improvements of the SMPI replay tool:
+ - Most of the collective communications are now rooted in the same process as
+ in the original application.
+ - Traces now rely on the same MPI datatype as the application (MPI_BYTE was
+ used until now). Multiple datatypes can now be used in a trace.
+    - The replay tool now supports traces produced by either TAU or a
+      modified version of MPE.
+    - Bugfix: the compute part of the reduce action is now taken into account.
+
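For illustration, a replay trace is a plain text file of `<rank> <action> <parameters>` lines, one action per line. The fragment below is hypothetical and only sketches the general shape; see the action files under examples/smpi/replay for real inputs, as the exact action names and parameters may differ:

```
0 init
0 compute 1e6
0 bcast 5e4
0 finalize
```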
-- $date Da SimGrid team <simgrid-devel@lists.gforge.inria.fr>
SimGrid (3.9) stable; urgency=low
TRACING:
* Transfer the tracing files into the corresponding modules.
- -- Tue Jan 29 19:38:56 CET 2013 Da SimGrid team <simgrid-devel@lists.gforge.inria.fr>
+ -- Tue Feb 5 11:31:43 CET 2013 Da SimGrid team <simgrid-devel@lists.gforge.inria.fr>
SimGrid (3.8.1) stable; urgency=low
\ \ / / _ \ '__/ __| |/ _ \| '_ \ |_ \ (_) |
\ V / __/ | \__ \ | (_) | | | | ___) \__, |
\_/ \___|_| |___/_|\___/|_| |_| |____(_)/_/
- Jan 29 2013
+ Feb 5 2013
The "Grasgory" release. Major changes:
-
Welcome to the SimGrid project!
-Up-to-date documentation about installation and how to use SimGrid is available
-online at http://simgrid.gforge.inria.fr/
+SimGrid is a scientific instrument to study the behavior of
+large-scale distributed systems such as Grids, Clouds, HPC or P2P
+systems. It can be used to evaluate heuristics, prototype applications
+or even assess legacy MPI applications.
-The documentation is also included in the archive you downloaded: Check
-doc/html/index.html for more information.
+More documentation is included in this archive (doc/html/index.html)
+or online at http://simgrid.gforge.inria.fr/
In any case, you may want to subscribe to the user mailing list
(http://lists.gforge.inria.fr/mailman/listinfo/simgrid-user). There,
doing the same kind of research as you do, in an active and friendly
community.
-Thanks for downloading our software,
+Thanks for using our software. Please do great things with it and tell
+the world about it. Tell us, too, because we love to get positive
+feedback.
Cheers,
Da SimGrid Team.
ADD_TEST(tesh-simdag-mxn-3 ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/teshsuite --cd ${CMAKE_BINARY_DIR}/teshsuite ${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/network/mxn/test_intra_scatter.tesh)
ADD_TEST(tesh-simdag-par-1 ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/teshsuite --cd ${CMAKE_BINARY_DIR}/teshsuite ${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/partask/test_comp_only_seq.tesh)
ADD_TEST(tesh-simdag-par-2 ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/teshsuite --cd ${CMAKE_BINARY_DIR}/teshsuite ${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/partask/test_comp_only_par.tesh)
+ ADD_TEST(tesh-simdag-availability ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/teshsuite --cd ${CMAKE_BINARY_DIR}/teshsuite ${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/availability/availability_test.tesh)
# MSG examples
ADD_TEST(msg-file ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv bindir=${CMAKE_BINARY_DIR}/examples/msg/ --setenv srcdir=${CMAKE_HOME_DIRECTORY}/ --cd ${CMAKE_HOME_DIRECTORY}/examples/ ${CMAKE_HOME_DIRECTORY}/examples/msg/io/io.tesh)
ADD_TEST(graphicator ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY} --setenv bindir=${CMAKE_BINARY_DIR}/bin --cd ${CMAKE_HOME_DIRECTORY}/tools/graphicator graphicator.tesh)
ENDIF()
- # Java examples
- set(TESH_CLASSPATH "${CMAKE_BINARY_DIR}/examples/java/:${SIMGRID_JAR}")
- if(enable_java)
- ADD_TEST(java-async ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/async/async.tesh)
- ADD_TEST(java-bittorrent ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/bittorrent/bittorrent.tesh)
- ADD_TEST(java-bypass ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/master_slave_bypass/bypass.tesh)
- ADD_TEST(java-chord ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/chord/chord.tesh)
- ADD_TEST(java-cloud ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/cloud/cloud.tesh)
- ADD_TEST(java-commTime ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/commTime/commtime.tesh)
- ADD_TEST(java-kademlia ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/kademlia/kademlia.tesh)
- ADD_TEST(java-kill ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/master_slave_kill/kill.tesh)
- ADD_TEST(java-masterslave ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/masterslave/masterslave.tesh)
- ADD_TEST(java-migration ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/migration/migration.tesh)
- ADD_TEST(java-mutualExclusion ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/mutualExclusion/mutualexclusion.tesh)
- ADD_TEST(java-pingPong ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/pingPong/pingpong.tesh)
- ADD_TEST(java-priority ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/priority/priority.tesh)
- ADD_TEST(java-startKillTime ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/startKillTime/startKillTime.tesh)
- ADD_TEST(java-suspend ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/suspend/suspend.tesh)
- if(HAVE_TRACING)
- ADD_TEST(java-tracing ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/tracing/tracingPingPong.tesh)
- endif()
- endif()
-
# Lua examples
if(HAVE_LUA)
ADD_TEST(lua-duplicated-globals ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --cd ${CMAKE_HOME_DIRECTORY}/examples/lua/state_cloner duplicated_globals.tesh)
endif()
endif()
+ # Java examples
+ if(enable_java)
+ set(TESH_CLASSPATH "${CMAKE_BINARY_DIR}/examples/java/:${SIMGRID_JAR}")
+ ADD_TEST(java-async ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/async/async.tesh)
+ ADD_TEST(java-bittorrent ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/bittorrent/bittorrent.tesh)
+ ADD_TEST(java-bypass ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/master_slave_bypass/bypass.tesh)
+ ADD_TEST(java-chord ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/chord/chord.tesh)
+ ADD_TEST(java-cloud ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/cloud/cloud.tesh)
+ ADD_TEST(java-commTime ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/commTime/commtime.tesh)
+ ADD_TEST(java-kademlia ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/kademlia/kademlia.tesh)
+ ADD_TEST(java-kill ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/master_slave_kill/kill.tesh)
+ ADD_TEST(java-masterslave ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/masterslave/masterslave.tesh)
+ ADD_TEST(java-migration ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/migration/migration.tesh)
+ ADD_TEST(java-mutualExclusion ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/mutualExclusion/mutualexclusion.tesh)
+ ADD_TEST(java-pingPong ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/pingPong/pingpong.tesh)
+ ADD_TEST(java-priority ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/priority/priority.tesh)
+ ADD_TEST(java-startKillTime ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/startKillTime/startKillTime.tesh)
+ ADD_TEST(java-suspend ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/suspend/suspend.tesh)
+ if(HAVE_TRACING)
+ ADD_TEST(java-tracing ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/java --setenv classpath=${TESH_CLASSPATH} --cd ${CMAKE_BINARY_DIR}/examples/java ${CMAKE_HOME_DIRECTORY}/examples/java/tracing/tracingPingPong.tesh)
+ endif()
+ endif()
+
# examples/msg/mc
if(HAVE_MC)
ADD_TEST(mc-bugged1-thread ${CMAKE_BINARY_DIR}/bin/tesh ${TESH_OPTION} --cfg contexts/factory:thread --setenv bindir=${CMAKE_BINARY_DIR}/examples/msg/mc --cd ${CMAKE_HOME_DIRECTORY}/examples/msg/mc bugged1.tesh)
set_tests_properties(mc-centralized-raw PROPERTIES WILL_FAIL true)
endif()
endif()
- set_tests_properties(msg-masterslave-virtual-machines PROPERTIES WILL_FAIL true)
set_tests_properties(msg-bittorrent-thread-parallel PROPERTIES ENVIRONMENT SG_TEST_EXENV=true WILL_FAIL true)
if(CONTEXT_UCONTEXT)
set_tests_properties(msg-bittorrent-ucontext-parallel PROPERTIES ENVIRONMENT SG_TEST_EXENV=true WILL_FAIL true)
endforeach(fct ${diff_va})
#--------------------------------------------------------------------------------------------------
-### check for getline
-try_compile(COMPILE_RESULT_VAR
- ${CMAKE_BINARY_DIR}
- ${CMAKE_HOME_DIRECTORY}/buildtools/Cmake/test_prog/prog_getline.c
- )
-
-if(NOT COMPILE_RESULT_VAR)
- SET(need_getline "#define SIMGRID_NEED_GETLINE 1")
- SET(SIMGRID_NEED_GETLINE 1)
-else()
- SET(need_getline "")
- SET(SIMGRID_NEED_GETLINE 0)
-endif()
-
### check for a working snprintf
if(HAVE_SNPRINTF AND HAVE_VSNPRINTF OR WIN32)
if(WIN32)
configure_file(${CMAKE_HOME_DIRECTORY}/examples/smpi/replay/actions_barrier.txt ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_barrier.txt COPYONLY)
configure_file(${CMAKE_HOME_DIRECTORY}/examples/smpi/replay/actions_bcast.txt ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_bcast.txt COPYONLY)
configure_file(${CMAKE_HOME_DIRECTORY}/examples/smpi/replay/actions_with_isend.txt ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_with_isend.txt COPYONLY)
+ configure_file(${CMAKE_HOME_DIRECTORY}/examples/smpi/replay/actions_alltoall.txt ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_alltoall.txt COPYONLY)
+ configure_file(${CMAKE_HOME_DIRECTORY}/examples/smpi/replay/actions_alltoallv.txt ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_alltoallv.txt COPYONLY)
+ configure_file(${CMAKE_HOME_DIRECTORY}/examples/smpi/replay/actions_waitall.txt ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_waitall.txt COPYONLY)
configure_file(${CMAKE_HOME_DIRECTORY}/teshsuite/smpi/hostfile ${CMAKE_BINARY_DIR}/teshsuite/smpi/hostfile COPYONLY)
set(generated_files_to_clean
${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_barrier.txt
${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_bcast.txt
${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_with_isend.txt
+ ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_alltoall.txt
+ ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_alltoallv.txt
+ ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_waitall.txt
${CMAKE_BINARY_DIR}/teshsuite/smpi/hostfile
)
endif()
teshsuite/simdag/network/p2p/CMakeLists.txt
teshsuite/simdag/partask/CMakeLists.txt
teshsuite/simdag/platforms/CMakeLists.txt
+ teshsuite/simdag/availability/CMakeLists.txt
teshsuite/xbt/CMakeLists.txt
teshsuite/smpi/CMakeLists.txt
teshsuite/smpi/mpich-test/CMakeLists.txt
buildtools/Cmake/src/internal_config.h.in
buildtools/Cmake/src/simgrid.nsi.in
buildtools/Cmake/test_prog/prog_AC_CHECK_MCSC.c
- buildtools/Cmake/test_prog/prog_getline.c
buildtools/Cmake/test_prog/prog_gnu_dynlinker.c
buildtools/Cmake/test_prog/prog_gtnets.cpp
buildtools/Cmake/test_prog/prog_mutex_timedlock.c
add_subdirectory(${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/network/mxn)
add_subdirectory(${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/partask)
add_subdirectory(${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/platforms)
+add_subdirectory(${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/availability)
add_subdirectory(${CMAKE_HOME_DIRECTORY}/teshsuite/smpi)
add_subdirectory(${CMAKE_HOME_DIRECTORY}/teshsuite/smpi/mpich-test)
add_subdirectory(${CMAKE_HOME_DIRECTORY}/teshsuite/smpi/mpich-test/env)
message("")
message("SIZEOF_MAX ..................: ${SIZEOF_MAX}")
message("PTH_STACKGROWTH .............: ${PTH_STACKGROWTH}")
- message("need_getline ................: ${need_getline}")
message("need_asprintf ...............: ${simgrid_need_asprintf}")
message("need_vasprintf ..............: ${simgrid_need_vasprintf}")
message("PREFER_PORTABLE_SNPRINTF ....: ${PREFER_PORTABLE_SNPRINTF}")
sub var_subst {
my ($text, $name, $value) = @_;
if ($value) {
- $text =~ s/\${$name(?::=[^}]*)?}/$value/g;
+ $text =~ s/\${$name(?::[=-][^}]*)?}/$value/g;
$text =~ s/\$$name(\W|$)/$value$1/g;
}
else {
my ($line);
my ($path);
my ($dump) = 0;
-my ($srcdir);
-my ($bindir);
+my (%environ);
my ($tesh_file);
my ($config_var);
my ($name_test);
if ($dump) {
$line =~ s/^ //;
if ( $line =~ /^\s*ADD_TEST\(\S+\s+\S*\/tesh\s/ ) {
- $srcdir = "";
- $bindir = "";
+ undef %environ;
$config_var = "";
$path = "";
$nb_test++;
}
while ( $line =~ /--setenv\s+(\S+)\=(\S+)/g ) {
my ( $env_var, $value_var ) = ( $1, $2 );
- if ( $env_var =~ /srcdir/ ) {
- $srcdir = $value_var;
- }
- elsif ( $env_var =~ /bindir/ ) {
- $bindir = $value_var;
- }
+ $environ{$env_var} = $value_var;
}
if ( $line =~ /(\S+)\)$/ ) {
$tesh_file = $1;
if (0) {
print "test_name = $name_test\n";
- print "$config_var\n";
+ print "config_var = $config_var\n";
print "path = $path\n";
- print "srcdir=$srcdir\n";
- print "bindir=$bindir\n";
+ foreach my $key (keys %environ) {
+ print "$key = $environ{$key}\n";
+ }
print "tesh_file = $tesh_file\n";
print "\n\n";
}
}
if ( $l =~ /^\$ (.*)$/ ) {
my ($command) = $1;
- $command = var_subst($command, "srcdir", $srcdir);
- $command = var_subst($command, "bindir", $bindir);
+ foreach my $key (keys %environ) {
+ $command = var_subst($command, $key, $environ{$key});
+ }
+ # substitute remaining known variables, if any
+ $command = var_subst($command, "srcdir", "");
+ $command = var_subst($command, "bindir", "");
$command = var_subst($command, "EXEEXT", "");
$command = var_subst($command, "SG_TEST_EXENV", "");
$command = var_subst($command, "SG_TEST_ENV", "");
my @argv = ("valgrind");
my $count = 0;
-while (my $arg = shift) {
+while (defined(my $arg = shift)) {
print "arg($count)$arg\n";
if($arg eq "--cd"){
$arg = shift;
/* define for stack growth */
#cmakedefine PTH_STACKGROWTH @PTH_STACKGROWTH@
-/* enable the getline replacement */
-#cmakedefine SIMGRID_NEED_GETLINE @SIMGRID_NEED_GETLINE@
-
/* The maximal size of any scalar on this arch */
#cmakedefine SIZEOF_MAX @SIZEOF_MAX@
+++ /dev/null
-/* Copyright (c) 2010. The SimGrid Team.
- * All rights reserved. */
-
-/* This program is free software; you can redistribute it and/or modify it
- * under the terms of the license (GNU LGPL) which comes with this package. */
-
-#define _GNU_SOURCE
-#include <stdio.h>
-int main(void)
-{
- FILE *fp;
- char *line = NULL;
- size_t len = 0;
- getline(&line, &len, fp);
-}
export LD_LIBRARY_PATH=`pwd`/lib
export DYLD_LIBRARY_PATH=`pwd`/lib #for mac
-cd ../
-git clone git://scm.gforge.inria.fr/simgrid/simgrid-java.git simgrid-java --quiet
-cd simgrid-java
-export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:`pwd`/lib
-export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`pwd`/lib #for mac
-
-cmake .
-ctest -D ExperimentalStart
-ctest -D ExperimentalConfigure
-ctest -D ExperimentalBuild
-ctest -D ExperimentalTest
-ctest -D ExperimentalSubmit
-
cd ../
git clone git://scm.gforge.inria.fr/simgrid/simgrid-ruby.git simgrid-ruby --quiet
cd simgrid-ruby
ctest -D ExperimentalConfigure
ctest -D ExperimentalBuild
ctest -D ExperimentalTest
-ctest -D ExperimentalSubmit
\ No newline at end of file
+ctest -D ExperimentalSubmit
cd ./pipol/$PIPOL_HOST
export GIT_SSL_NO_VERIFY=1
-git clone https://gforge.inria.fr/git/simgrid/simgrid.git
+git clone git://scm.gforge.inria.fr/simgrid/simgrid.git
cd simgrid
perl ./buildtools/pipol/cmake.pl
#mem-check
cmake \
-Denable_lua=off \
+-Denable_tracing=on \
+-Denable_smpi=on \
-Denable_compile_optimizations=off \
-Denable_compile_warnings=on \
-Denable_lib_static=off \
-Denable_latency_bound_tracking=off \
-Denable_gtnets=off \
-Denable_jedule=off \
--Drelease=on \
+-Denable_mallocators=off \
-Denable_memcheck=on ./
ctest -D ExperimentalStart
ctest -D ExperimentalConfigure
cmake \
-Denable_coverage=on \
-Denable_model-checking=on \
+-Denable_java=on \
-Denable_lua=on \
-Denable_compile_optimizations=off .
ctest -D NightlyStart
export SIMGRID_ROOT=`pwd`
export LD_LIBRARY_PATH=`pwd`/lib
export DYLD_LIBRARY_PATH=`pwd`/lib #for mac
-cd ..
-
-git clone git://scm.gforge.inria.fr/simgrid/simgrid-java.git simgrid-java --quiet
-cd simgrid-java
-export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:`pwd`/lib
-export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`pwd`/lib #for mac
-
-cmake .
-ctest -D NightlyStart
-ctest -D NightlyConfigure
-ctest -D NightlyBuild
-ctest -D NightlyTest
-ctest -D NightlySubmit
cd ../
git clone git://scm.gforge.inria.fr/simgrid/simgrid-ruby.git simgrid-ruby --quiet
#!/bin/bash
sudo aptitude update
-sudo aptitude -y install make
+
+sudo aptitude -y install cmake
+sudo aptitude -y install default-jdk
+sudo aptitude -y install f2c
+sudo aptitude -y install g++
+sudo aptitude -y install gcc
sudo aptitude -y install git
sudo aptitude -y install git-core
-sudo aptitude -y install openjdk-6-jdk
-sudo aptitude -y install valgrind
-sudo aptitude -y install f2c
-sudo aptitude -y install gcc-4.6
-sudo aptitude -y install g++-4.6
sudo aptitude -y install graphviz-dev graphviz
sudo aptitude -y install liblua5.1-dev lua5.1
sudo aptitude -y install libpcre3-dev
-sudo aptitude -y install cmake
sudo aptitude -y install libunwind7-dev
+sudo aptitude -y install make
+sudo aptitude -y install valgrind
which_svn=`which svn` #svn necessary
which_gcc=`which gcc` #gcc gcc necessary
sudo apt-get update
-sudo apt-get -y -qq install gcc
-sudo apt-get -y -qq install g++
-sudo apt-get -y -qq install make
-sudo apt-get -y -qq install openjdk-6-jdk
-sudo apt-get -y -qq install liblua5.1-dev lua5.1
-sudo apt-get -y -qq install unzip
sudo apt-get -y -qq install cmake
-sudo apt-get -y -qq install wget
-sudo apt-get -y -qq install perl
-sudo apt-get -y -qq install graphviz-dev graphviz
-sudo apt-get -y -qq install libpcre3-dev
+sudo apt-get -y -qq install default-jdk
sudo apt-get -y -qq install f2c
-sudo apt-get -y -qq install valgrind
+sudo apt-get -y -qq install g++
+sudo apt-get -y -qq install gcc
sudo apt-get -y -qq install git-core
+sudo apt-get -y -qq install graphviz-dev graphviz
+sudo apt-get -y -qq install liblua5.1-dev lua5.1
+sudo apt-get -y -qq install libpcre3-dev
sudo apt-get -y -qq install libunwind7-dev
+sudo apt-get -y -qq install make
+sudo apt-get -y -qq install perl
+sudo apt-get -y -qq install unzip
+sudo apt-get -y -qq install valgrind
+sudo apt-get -y -qq install wget
if [ $PIPOL_IMAGE == "i386-linux-ubuntu-lucid.dd.gz" ]; then
wget http://mirror.ovh.net/ubuntu//pool/universe/libu/libunwind/libunwind7_0.99-0.3ubuntu1_i386.deb
# will result in a user-defined paragraph with heading "Side Effects:".
# You can put \n's in the value part of an alias to insert newlines.
-ALIASES =
+ALIASES = SimGridRelease="SimGrid-@release_version@"
# This tag can be used to specify a number of word-keyword mappings (TCL only).
# A mapping has the form "name=value". For example adding
\endverbatim
Then, you have to follow these steps:
-\li Add the following line to <project/directory>/buildtools/Cmake/MakeExeLib.cmake:
+\li Add the following line to <project/directory>/buildtools/Cmake/MakeExe.cmake:
\verbatim
add_subdirectory(${CMAKE_HOME_DIRECTORY}/<path_where_is_CMakeList.txt>)
\endverbatim
version numbers that were used.
 - The "make distcheck" target works (testing that every file needed
   to build and install is included in the archive)
- - The version number provided to download in the examples of
- doc/doxygen/install.doc is accurate (we should maybe generate this
- file to avoid issues, but some inaccuracies are less painful than
- editing the cmake files to make this happen, sorry).
+  - The download URL given in the examples of
+    doc/doxygen/install.doc is accurate. Note that updating the
+    version number is not enough, as it only changes the name given
+    to the downloaded file. The real identifier is the number right
+    before it, between slashes. This makes this part very difficult
+    to generate automatically.
- Tests
- All tests pass on a reasonable amount of platforms (typically,
everything on cdash)
Recompiling an official archive is not much more complex, actually.
SimGrid has very few dependencies and relies only on very standard
-tools. Recompiling the archive should be done in a few lines:
+tools. First, download the *@SimGridRelease.tar.gz* archive
+from [the download page](https://gforge.inria.fr/frs/?group_id=12).
+Then, recompiling the archive should be done in a few lines:
-@verbatim
-wget https://gforge.inria.fr/frs/download.php/28674/SimGrid-3.9.tar.gz
-tar xf SimGrid-3.9.tar.gz
-cd SimGrid-3.9
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.sh}
+tar xf @SimGridRelease.tar.gz
+cd @SimGridRelease
cmake -DCMAKE_INSTALL_PREFIX=/opt/simgrid .
make
make install
-@endverbatim
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-If you want to stay on the blending edge, you should get the latest
+If you want to stay on the bleeding edge, you should get the latest
git version, and recompile it as you would do for an official archive.
Depending on the files you change in the source tree, some extra
tools may be needed.
xbt_dynar_foreach(q, iter, comm) {
empty = 1;
if (MSG_comm_test(comm)) {
- MSG_comm_destroy(comm);
status = MSG_comm_get_status(comm);
+ MSG_comm_destroy(comm);
xbt_assert(status == MSG_OK, "process_pending_connections() failed");
xbt_dynar_cursor_rm(q, &iter);
empty = 0;
MSG_task_send(finalize, mailbox_buffer);
}
+ XBT_INFO("Wait a while before effective shutdown.");
+ MSG_process_sleep(2);
+
xbt_dynar_foreach(vms,i,vm) {
MSG_vm_shutdown(vm);
MSG_vm_destroy(vm);
> [ 1000.000000] (20:Slave 18@Jean_Yves) Slave listenning on 18
> [ 1000.000000] (21:Slave 19@Fafard) Slave listenning on 19
> [ 1000.000000] (22:Slave 0@Jacquelin) Slave listenning on 0
-> [ 1000.000000] (23:Slave 1@Intel) Slave listenning on 1
-> [ 1000.000000] (24:Slave 2@Provost) Slave listenning on 2
-> [ 1000.000000] (25:Slave 3@Fernand) Slave listenning on 3
-> [ 1000.000000] (26:Slave 4@Bescherelle) Slave listenning on 4
-> [ 1000.000000] (27:Slave 5@Ethernet) Slave listenning on 5
-> [ 1000.000000] (28:Slave 6@Kuenning) Slave listenning on 6
-> [ 1000.000000] (29:Slave 7@Dodge) Slave listenning on 7
-> [ 1000.000000] (30:Slave 8@Jean_Yves) Slave listenning on 8
-> [ 1000.000000] (31:Slave 9@Fafard) Slave listenning on 9
+> [ 1000.000000] (23:Slave 10@Jacquelin) Slave listenning on 10
+> [ 1000.000000] (24:Slave 1@Intel) Slave listenning on 1
+> [ 1000.000000] (25:Slave 11@Intel) Slave listenning on 11
+> [ 1000.000000] (26:Slave 2@Provost) Slave listenning on 2
+> [ 1000.000000] (27:Slave 12@Provost) Slave listenning on 12
+> [ 1000.000000] (28:Slave 3@Fernand) Slave listenning on 3
+> [ 1000.000000] (29:Slave 13@Fernand) Slave listenning on 13
+> [ 1000.000000] (30:Slave 4@Bescherelle) Slave listenning on 4
+> [ 1000.000000] (31:Slave 14@Bescherelle) Slave listenning on 14
+> [ 1000.000000] (32:Slave 5@Ethernet) Slave listenning on 5
+> [ 1000.000000] (33:Slave 15@Ethernet) Slave listenning on 15
+> [ 1000.000000] (34:Slave 6@Kuenning) Slave listenning on 6
+> [ 1000.000000] (35:Slave 16@Kuenning) Slave listenning on 16
+> [ 1000.000000] (36:Slave 7@Dodge) Slave listenning on 7
+> [ 1000.000000] (37:Slave 17@Dodge) Slave listenning on 17
+> [ 1000.000000] (38:Slave 8@Jean_Yves) Slave listenning on 8
+> [ 1000.000000] (39:Slave 18@Jean_Yves) Slave listenning on 18
+> [ 1000.000000] (40:Slave 9@Fafard) Slave listenning on 9
+> [ 1000.000000] (41:Slave 19@Fafard) Slave listenning on 19
> [ 1000.020275] (1:master@Jacquelin) Sending "Task_1" to "Slave_1"
> [ 1000.020275] (22:Slave 0@Jacquelin) Received "Task_0" from mailbox Slave_0
> [ 1000.093091] (22:Slave 0@Jacquelin) "Task_0" done
> [ 1023.866678] (1:master@Jacquelin) Sending "Task_2" to "Slave_2"
-> [ 1023.866678] (23:Slave 1@Intel) Received "Task_1" from mailbox Slave_1
-> [ 1023.939494] (23:Slave 1@Intel) "Task_1" done
+> [ 1023.866678] (24:Slave 1@Intel) Received "Task_1" from mailbox Slave_1
+> [ 1023.939494] (24:Slave 1@Intel) "Task_1" done
> [ 1048.674036] (1:master@Jacquelin) Sending "Task_3" to "Slave_3"
-> [ 1048.674036] (24:Slave 2@Provost) Received "Task_2" from mailbox Slave_2
-> [ 1048.746852] (24:Slave 2@Provost) "Task_2" done
+> [ 1048.674036] (26:Slave 2@Provost) Received "Task_2" from mailbox Slave_2
+> [ 1048.746852] (26:Slave 2@Provost) "Task_2" done
> [ 1056.325710] (1:master@Jacquelin) Sending "Task_4" to "Slave_4"
-> [ 1056.325710] (25:Slave 3@Fernand) Received "Task_3" from mailbox Slave_3
-> [ 1056.777157] (25:Slave 3@Fernand) "Task_3" done
+> [ 1056.325710] (28:Slave 3@Fernand) Received "Task_3" from mailbox Slave_3
+> [ 1056.777157] (28:Slave 3@Fernand) "Task_3" done
> [ 1064.574878] (1:master@Jacquelin) Sending "Task_5" to "Slave_5"
-> [ 1064.574878] (26:Slave 4@Bescherelle) Received "Task_4" from mailbox Slave_4
-> [ 1064.647694] (26:Slave 4@Bescherelle) "Task_4" done
+> [ 1064.574878] (30:Slave 4@Bescherelle) Received "Task_4" from mailbox Slave_4
+> [ 1064.647694] (30:Slave 4@Bescherelle) "Task_4" done
> [ 1073.010762] (1:master@Jacquelin) Sending "Task_6" to "Slave_6"
-> [ 1073.010762] (27:Slave 5@Ethernet) Received "Task_5" from mailbox Slave_5
-> [ 1073.112704] (27:Slave 5@Ethernet) "Task_5" done
+> [ 1073.010762] (32:Slave 5@Ethernet) Received "Task_5" from mailbox Slave_5
+> [ 1073.112704] (32:Slave 5@Ethernet) "Task_5" done
> [ 1081.730603] (1:master@Jacquelin) Sending "Task_7" to "Slave_7"
-> [ 1081.730603] (28:Slave 6@Kuenning) Received "Task_6" from mailbox Slave_6
-> [ 1081.847108] (28:Slave 6@Kuenning) "Task_6" done
+> [ 1081.730603] (34:Slave 6@Kuenning) Received "Task_6" from mailbox Slave_6
+> [ 1081.847108] (34:Slave 6@Kuenning) "Task_6" done
> [ 1126.150095] (1:master@Jacquelin) Sending "Task_8" to "Slave_8"
-> [ 1126.150095] (29:Slave 7@Dodge) Received "Task_7" from mailbox Slave_7
-> [ 1126.237474] (29:Slave 7@Dodge) "Task_7" done
+> [ 1126.150095] (36:Slave 7@Dodge) Received "Task_7" from mailbox Slave_7
+> [ 1126.237474] (36:Slave 7@Dodge) "Task_7" done
> [ 1169.839597] (1:master@Jacquelin) Sending "Task_9" to "Slave_9"
-> [ 1169.839597] (30:Slave 8@Jean_Yves) Received "Task_8" from mailbox Slave_8
-> [ 1169.941539] (30:Slave 8@Jean_Yves) "Task_8" done
+> [ 1169.839597] (38:Slave 8@Jean_Yves) Received "Task_8" from mailbox Slave_8
+> [ 1169.941539] (38:Slave 8@Jean_Yves) "Task_8" done
> [ 1176.014409] (1:master@Jacquelin) Sending "Task_10" to "Slave_10"
-> [ 1176.014409] (31:Slave 9@Fafard) Received "Task_9" from mailbox Slave_9
-> [ 1176.034684] (12:Slave 10@Jacquelin) Received "Task_10" from mailbox Slave_10
+> [ 1176.014409] (40:Slave 9@Fafard) Received "Task_9" from mailbox Slave_9
> [ 1176.034684] (1:master@Jacquelin) Sending "Task_11" to "Slave_11"
-> [ 1176.087225] (31:Slave 9@Fafard) "Task_9" done
-> [ 1176.107500] (12:Slave 10@Jacquelin) "Task_10" done
-> [ 1199.881087] (13:Slave 11@Intel) Received "Task_11" from mailbox Slave_11
+> [ 1176.034684] (23:Slave 10@Jacquelin) Received "Task_10" from mailbox Slave_10
+> [ 1176.087225] (40:Slave 9@Fafard) "Task_9" done
+> [ 1176.107500] (23:Slave 10@Jacquelin) "Task_10" done
> [ 1199.881087] (1:master@Jacquelin) Sending "Task_12" to "Slave_12"
-> [ 1199.953902] (13:Slave 11@Intel) "Task_11" done
-> [ 1224.688445] (14:Slave 12@Provost) Received "Task_12" from mailbox Slave_12
+> [ 1199.881087] (25:Slave 11@Intel) Received "Task_11" from mailbox Slave_11
+> [ 1199.953902] (25:Slave 11@Intel) "Task_11" done
> [ 1224.688445] (1:master@Jacquelin) Sending "Task_13" to "Slave_13"
-> [ 1224.761260] (14:Slave 12@Provost) "Task_12" done
-> [ 1232.340119] (15:Slave 13@Fernand) Received "Task_13" from mailbox Slave_13
+> [ 1224.688445] (27:Slave 12@Provost) Received "Task_12" from mailbox Slave_12
+> [ 1224.761260] (27:Slave 12@Provost) "Task_12" done
> [ 1232.340119] (1:master@Jacquelin) Sending "Task_14" to "Slave_14"
-> [ 1232.791566] (15:Slave 13@Fernand) "Task_13" done
-> [ 1240.589287] (16:Slave 14@Bescherelle) Received "Task_14" from mailbox Slave_14
+> [ 1232.340119] (29:Slave 13@Fernand) Received "Task_13" from mailbox Slave_13
+> [ 1232.791566] (29:Slave 13@Fernand) "Task_13" done
> [ 1240.589287] (1:master@Jacquelin) Sending "Task_15" to "Slave_15"
-> [ 1240.662103] (16:Slave 14@Bescherelle) "Task_14" done
-> [ 1249.025171] (17:Slave 15@Ethernet) Received "Task_15" from mailbox Slave_15
+> [ 1240.589287] (31:Slave 14@Bescherelle) Received "Task_14" from mailbox Slave_14
+> [ 1240.662103] (31:Slave 14@Bescherelle) "Task_14" done
> [ 1249.025171] (1:master@Jacquelin) Sending "Task_16" to "Slave_16"
-> [ 1249.127113] (17:Slave 15@Ethernet) "Task_15" done
-> [ 1257.745012] (18:Slave 16@Kuenning) Received "Task_16" from mailbox Slave_16
+> [ 1249.025171] (33:Slave 15@Ethernet) Received "Task_15" from mailbox Slave_15
+> [ 1249.127113] (33:Slave 15@Ethernet) "Task_15" done
> [ 1257.745012] (1:master@Jacquelin) Sending "Task_17" to "Slave_17"
-> [ 1257.861517] (18:Slave 16@Kuenning) "Task_16" done
-> [ 1302.164504] (19:Slave 17@Dodge) Received "Task_17" from mailbox Slave_17
+> [ 1257.745012] (35:Slave 16@Kuenning) Received "Task_16" from mailbox Slave_16
+> [ 1257.861517] (35:Slave 16@Kuenning) "Task_16" done
> [ 1302.164504] (1:master@Jacquelin) Sending "Task_18" to "Slave_18"
-> [ 1302.251883] (19:Slave 17@Dodge) "Task_17" done
+> [ 1302.164504] (37:Slave 17@Dodge) Received "Task_17" from mailbox Slave_17
+> [ 1302.251883] (37:Slave 17@Dodge) "Task_17" done
> [ 1345.854006] (1:master@Jacquelin) Sending "Task_19" to "Slave_19"
-> [ 1345.854006] (20:Slave 18@Jean_Yves) Received "Task_18" from mailbox Slave_18
-> [ 1345.955948] (20:Slave 18@Jean_Yves) "Task_18" done
+> [ 1345.854006] (39:Slave 18@Jean_Yves) Received "Task_18" from mailbox Slave_18
+> [ 1345.955948] (39:Slave 18@Jean_Yves) "Task_18" done
> [ 1352.028818] (1:master@Jacquelin) Migrate everyone to the second host.
> [ 1352.028818] (1:master@Jacquelin) Suspend everyone, move them to the third host, and resume them.
> [ 1352.028818] (1:master@Jacquelin) Let's shut down the simulation. 10 first processes will be shut down cleanly while the second half will forcefully get killed
-> [ 1352.028818] (21:Slave 19@Fafard) Received "Task_19" from mailbox Slave_19
+> [ 1352.028818] (41:Slave 19@Fafard) Received "Task_19" from mailbox Slave_19
> [ 1352.029013] (22:Slave 0@Provost) Received "finalize" from mailbox Slave_0
-> [ 1352.101633] (21:Slave 19@Provost) "Task_19" done
-> [ 1352.947711] (23:Slave 1@Provost) Received "finalize" from mailbox Slave_1
-> [ 1354.827365] (24:Slave 2@Provost) Received "finalize" from mailbox Slave_2
-> [ 1356.653021] (25:Slave 3@Provost) Received "finalize" from mailbox Slave_3
-> [ 1357.515808] (26:Slave 4@Provost) Received "finalize" from mailbox Slave_4
-> [ 1358.576004] (27:Slave 5@Provost) Received "finalize" from mailbox Slave_5
-> [ 1359.433313] (28:Slave 6@Provost) Received "finalize" from mailbox Slave_6
-> [ 1360.833461] (29:Slave 7@Provost) Received "finalize" from mailbox Slave_7
-> [ 1361.758549] (30:Slave 8@Provost) Received "finalize" from mailbox Slave_8
-> [ 1363.743206] (0:@) Simulation time 1363.74
-> [ 1363.743206] (1:master@Jacquelin) Goodbye now!
-> [ 1363.743206] (31:Slave 9@Provost) Received "finalize" from mailbox Slave_9
+> [ 1352.101633] (41:Slave 19@Provost) "Task_19" done
+> [ 1352.947711] (24:Slave 1@Provost) Received "finalize" from mailbox Slave_1
+> [ 1354.827365] (26:Slave 2@Provost) Received "finalize" from mailbox Slave_2
+> [ 1356.653021] (28:Slave 3@Provost) Received "finalize" from mailbox Slave_3
+> [ 1357.515808] (30:Slave 4@Provost) Received "finalize" from mailbox Slave_4
+> [ 1358.576004] (32:Slave 5@Provost) Received "finalize" from mailbox Slave_5
+> [ 1359.433313] (34:Slave 6@Provost) Received "finalize" from mailbox Slave_6
+> [ 1360.833461] (36:Slave 7@Provost) Received "finalize" from mailbox Slave_7
+> [ 1361.758549] (38:Slave 8@Provost) Received "finalize" from mailbox Slave_8
+> [ 1363.743206] (1:master@Jacquelin) Wait a while before effective shutdown.
+> [ 1363.743206] (40:Slave 9@Provost) Received "finalize" from mailbox Slave_9
+> [ 1365.743206] (0:@) Simulation time 1365.74
+> [ 1365.743206] (1:master@Jacquelin) Goodbye now!
#! ./tesh
$ ${bindir:=.}/io/file ${srcdir:=.}/examples/platforms/storage.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
-> [ 0.000000] (0:@) Configuration change: Set 'path' to '../examples/platforms/'
> [ 0.000000] (0:@) Number of host '4'
> [ 0.000000] (1:0@denise) Open file './doc/simgrid/examples/platforms/g5k.xml'
> [ 0.000000] (2:1@alice) Open file './doc/simgrid/examples/platforms/One_cluster_no_backbone.xml'
> [ 0.004786] (0:@) Simulation time 0.00478623
$ ${bindir:=.}/io/file_unlink ${srcdir:=.}/examples/platforms/storage.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
-> [ 0.000000] (0:@) Configuration change: Set 'path' to '../examples/platforms/'
> [ 0.000000] (0:@) Number of host '4'
> [ 0.000000] (1:0@denise) Open file './doc/simgrid/examples/platforms/g5k.xml'
> [ 0.000000] (1:0@denise) File stat ./doc/simgrid/examples/platforms/g5k.xml Size 17028.0
digraph G {
-end [size="10000000129.452715" performer="100" order="1"];
+end [size="10000000129.452715"];
0 [size="10000000129.452715" category="taskA" performer="1" order="1"];
1 [size="10000000131.133657" category="taskA" performer="0"];
2 [size="10000000121.12487" category="taskA" performer="1" order="1"];
SD_workstation_t *wsl = SD_task_get_workstation_list(task);
switch (kind) {
case SD_TASK_COMP_SEQ:
- fprintf(out, "[%f] %s compute %f # %s\n",
- SD_task_get_start_time(task),
- SD_workstation_get_name(wsl[0]), SD_task_get_amount(task),
- SD_task_get_name(task));
+ fprintf(out, "[%f->%f] %s compute %f flops # %s\n",
+ SD_task_get_start_time(task),
+ SD_task_get_finish_time(task),
+ SD_workstation_get_name(wsl[0]), SD_task_get_amount(task),
+ SD_task_get_name(task));
break;
case SD_TASK_COMM_E2E:
- fprintf(out, "[%f] %s send %s %f # %s\n",
- SD_task_get_start_time(task),
- SD_workstation_get_name(wsl[0]),
- SD_workstation_get_name(wsl[1]), SD_task_get_amount(task),
- SD_task_get_name(task));
- fprintf(out, "[%f] %s recv %s %f # %s\n",
- SD_task_get_finish_time(task),
- SD_workstation_get_name(wsl[1]),
- SD_workstation_get_name(wsl[0]), SD_task_get_amount(task),
- SD_task_get_name(task));
+ fprintf(out, "[%f -> %f] %s -> %s transfer of %.0f bytes # %s\n",
+ SD_task_get_start_time(task),
+ SD_task_get_finish_time(task),
+ SD_workstation_get_name(wsl[0]),
+ SD_workstation_get_name(wsl[1]), SD_task_get_amount(task),
+ SD_task_get_name(task));
break;
default:
xbt_die("Task %s is of unknown kind %d", SD_task_get_name(task),
#include <libgen.h>
XBT_LOG_NEW_DEFAULT_CATEGORY(test,
- "Logging specific to this SimDag example");
+ "Logging specific to this SimDag example");
int main(int argc, char **argv)
{
unsigned int cursor;
SD_task_t task;
- /* initialisation of SD */
+ /* initialization of SD */
SD_init(&argc, argv);
/* Check our arguments */
tracefilename =
bprintf("%.*s.trace",
- (int) (last == NULL ? strlen(argv[2]) : last - argv[2]),
- argv[2]);
+ (int) (last == NULL ? strlen(argv[2]) : last - argv[2]),
+ argv[2]);
} else {
tracefilename = xbt_strdup(argv[3]);
}
/* Display all the tasks */
XBT_INFO
- ("------------------- Display all tasks of the loaded DAG ---------------------------");
+ ("------------------- Display all tasks of the loaded DAG ---------------------------");
xbt_dynar_foreach(dot, cursor, task) {
SD_task_dump(task);
}
fclose(dotout);
XBT_INFO
- ("------------------- Run the schedule ---------------------------");
+ ("------------------- Run the schedule ---------------------------");
changed = SD_simulate(-1);
xbt_dynar_free_container(&changed);
XBT_INFO
- ("------------------- Produce the trace file---------------------------");
+ ("------------------- Produce the trace file---------------------------");
XBT_INFO("Producing the trace of the run into %s", basename(tracefilename));
FILE *out = fopen(tracefilename, "w");
xbt_assert(out, "Cannot write to %s", tracefilename);
SD_workstation_t *wsl = SD_task_get_workstation_list(task);
switch (kind) {
case SD_TASK_COMP_SEQ:
- fprintf(out, "[%f] %s compute %f # %s\n",
- SD_task_get_start_time(task),
- SD_workstation_get_name(wsl[0]), SD_task_get_amount(task),
- SD_task_get_name(task));
+ fprintf(out, "[%f->%f] %s compute %f flops # %s\n",
+ SD_task_get_start_time(task),
+ SD_task_get_finish_time(task),
+ SD_workstation_get_name(wsl[0]), SD_task_get_amount(task),
+ SD_task_get_name(task));
break;
case SD_TASK_COMM_E2E:
- fprintf(out, "[%f] %s send %s %f # %s\n",
- SD_task_get_start_time(task),
- SD_workstation_get_name(wsl[0]),
- SD_workstation_get_name(wsl[1]), SD_task_get_amount(task),
- SD_task_get_name(task));
- fprintf(out, "[%f] %s recv %s %f # %s\n",
- SD_task_get_finish_time(task),
- SD_workstation_get_name(wsl[1]),
- SD_workstation_get_name(wsl[0]), SD_task_get_amount(task),
- SD_task_get_name(task));
+ fprintf(out, "[%f -> %f] %s -> %s transfer of %.0f bytes # %s\n",
+ SD_task_get_start_time(task),
+ SD_task_get_finish_time(task),
+ SD_workstation_get_name(wsl[0]),
+ SD_workstation_get_name(wsl[1]), SD_task_get_amount(task),
+ SD_task_get_name(task));
break;
default:
xbt_die("Task %s is of unknown kind %d", SD_task_get_name(task),
- SD_task_get_kind(task));
+ SD_task_get_kind(task));
}
SD_task_destroy(task);
}
$ $SG_TEST_EXENV ./dot_test --log=no_loc ${srcdir:=.}/../2clusters.xml ${srcdir:=.}/dag.dot
> [0.000000] [surf_workstation/INFO] surf_workstation_model_init_ptask_L07
-> [0.000000] [sd_dotparse/WARNING] 'end' node is explicitly declared in the DOT file. Update it
-> [0.000000] [sd_dotparse/WARNING] 0->1 already exists
-> [0.000000] [sd_dotparse/WARNING] 1->2 already exists
-> [0.000000] [sd_dotparse/WARNING] 2->3 already exists
-> [0.000000] [sd_dotparse/WARNING] 4->5 already exists
-> [0.000000] [sd_dotparse/WARNING] 6->7 already exists
-> [0.000000] [sd_dotparse/WARNING] 7->end already exists
-> [0.000000] [sd_dotparse/WARNING] 7->8 already exists
-> [0.000000] [sd_dotparse/WARNING] 'root' node is explicitly declared in the DOT file. Update it
-> [0.000000] [sd_dotparse/WARNING] root->5 already exists
> [0.000000] [test/INFO] ------------------- Display all tasks of the loaded DAG ---------------------------
> [0.000000] [sd_task/INFO] Displaying task root
> [0.000000] [sd_task/INFO] - state: schedulable not runnable
> [0.000000] [sd_task/INFO] - amount: 0
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 0
> [0.000000] [sd_task/INFO] - post-dependencies:
-> [0.000000] [sd_task/INFO] 0
> [0.000000] [sd_task/INFO] root->5
+> [0.000000] [sd_task/INFO] 0
> [0.000000] [sd_task/INFO] Displaying task 0
> [0.000000] [sd_task/INFO] - state: not scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: sequential computation
> [0.000000] [sd_task/INFO] 8
> [0.000000] [sd_task/INFO] - post-dependencies:
> [0.000000] [sd_task/INFO] end
-> [0.000000] [sd_task/INFO] Displaying task 2->3
+> [0.000000] [sd_task/INFO] Displaying task 0->1
> [0.000000] [sd_task/INFO] - state: not scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: COMM_E2E
-> [0.000000] [sd_task/INFO] - amount: 10002
+> [0.000000] [sd_task/INFO] - tracing category: taskA
+> [0.000000] [sd_task/INFO] - amount: 10001
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
> [0.000000] [sd_task/INFO] - pre-dependencies:
-> [0.000000] [sd_task/INFO] 2
+> [0.000000] [sd_task/INFO] 0
> [0.000000] [sd_task/INFO] - post-dependencies:
-> [0.000000] [sd_task/INFO] 3
-> [0.000000] [sd_task/INFO] Displaying task 6->7
+> [0.000000] [sd_task/INFO] 1
+> [0.000000] [sd_task/INFO] Displaying task 1->2
> [0.000000] [sd_task/INFO] - state: not scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: COMM_E2E
-> [0.000000] [sd_task/INFO] - amount: 10005
+> [0.000000] [sd_task/INFO] - tracing category: taskA
+> [0.000000] [sd_task/INFO] - amount: 10004
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
> [0.000000] [sd_task/INFO] - pre-dependencies:
-> [0.000000] [sd_task/INFO] 6
+> [0.000000] [sd_task/INFO] 1
> [0.000000] [sd_task/INFO] - post-dependencies:
-> [0.000000] [sd_task/INFO] 7
-> [0.000000] [sd_task/INFO] Displaying task root->5
+> [0.000000] [sd_task/INFO] 2
+> [0.000000] [sd_task/INFO] Displaying task 2->3
> [0.000000] [sd_task/INFO] - state: not scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: COMM_E2E
-> [0.000000] [sd_task/INFO] - amount: 10014000
+> [0.000000] [sd_task/INFO] - tracing category: taskA
+> [0.000000] [sd_task/INFO] - amount: 10002
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
> [0.000000] [sd_task/INFO] - pre-dependencies:
-> [0.000000] [sd_task/INFO] root
+> [0.000000] [sd_task/INFO] 2
> [0.000000] [sd_task/INFO] - post-dependencies:
-> [0.000000] [sd_task/INFO] 5
-> [0.000000] [sd_task/INFO] Displaying task 1->2
+> [0.000000] [sd_task/INFO] 3
+> [0.000000] [sd_task/INFO] Displaying task 4->5
> [0.000000] [sd_task/INFO] - state: not scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: COMM_E2E
-> [0.000000] [sd_task/INFO] - amount: 10004
+> [0.000000] [sd_task/INFO] - tracing category: taskB
+> [0.000000] [sd_task/INFO] - amount: 10029
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
> [0.000000] [sd_task/INFO] - pre-dependencies:
-> [0.000000] [sd_task/INFO] 1
+> [0.000000] [sd_task/INFO] 4
> [0.000000] [sd_task/INFO] - post-dependencies:
-> [0.000000] [sd_task/INFO] 2
-> [0.000000] [sd_task/INFO] Displaying task 7->end
+> [0.000000] [sd_task/INFO] 5
+> [0.000000] [sd_task/INFO] Displaying task 6->7
> [0.000000] [sd_task/INFO] - state: not scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: COMM_E2E
-> [0.000000] [sd_task/INFO] - amount: 10014000
+> [0.000000] [sd_task/INFO] - tracing category: taskB
+> [0.000000] [sd_task/INFO] - amount: 10005
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
> [0.000000] [sd_task/INFO] - pre-dependencies:
-> [0.000000] [sd_task/INFO] 7
+> [0.000000] [sd_task/INFO] 6
> [0.000000] [sd_task/INFO] - post-dependencies:
-> [0.000000] [sd_task/INFO] end
-> [0.000000] [sd_task/INFO] Displaying task 0->1
+> [0.000000] [sd_task/INFO] 7
+> [0.000000] [sd_task/INFO] Displaying task 7->8
> [0.000000] [sd_task/INFO] - state: not scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: oi
-> [0.000000] [sd_task/INFO] - amount: 10001
+> [0.000000] [sd_task/INFO] - tracing category: taskB
+> [0.000000] [sd_task/INFO] - amount: 10000
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
> [0.000000] [sd_task/INFO] - pre-dependencies:
-> [0.000000] [sd_task/INFO] 0
+> [0.000000] [sd_task/INFO] 7
> [0.000000] [sd_task/INFO] - post-dependencies:
-> [0.000000] [sd_task/INFO] 1
-> [0.000000] [sd_task/INFO] Displaying task 4->5
+> [0.000000] [sd_task/INFO] 8
+> [0.000000] [sd_task/INFO] Displaying task 7->end
> [0.000000] [sd_task/INFO] - state: not scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: COMM_E2E
-> [0.000000] [sd_task/INFO] - amount: 10029
+> [0.000000] [sd_task/INFO] - tracing category: taskB
+> [0.000000] [sd_task/INFO] - amount: 10014000
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
> [0.000000] [sd_task/INFO] - pre-dependencies:
-> [0.000000] [sd_task/INFO] 4
+> [0.000000] [sd_task/INFO] 7
> [0.000000] [sd_task/INFO] - post-dependencies:
-> [0.000000] [sd_task/INFO] 5
-> [0.000000] [sd_task/INFO] Displaying task 7->8
+> [0.000000] [sd_task/INFO] end
+> [0.000000] [sd_task/INFO] Displaying task root->5
> [0.000000] [sd_task/INFO] - state: not scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
> [0.000000] [sd_task/INFO] - tracing category: COMM_E2E
-> [0.000000] [sd_task/INFO] - amount: 10000
+> [0.000000] [sd_task/INFO] - amount: 10014000
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
> [0.000000] [sd_task/INFO] - pre-dependencies:
-> [0.000000] [sd_task/INFO] 7
+> [0.000000] [sd_task/INFO] root
> [0.000000] [sd_task/INFO] - post-dependencies:
-> [0.000000] [sd_task/INFO] 8
+> [0.000000] [sd_task/INFO] 5
> [0.000000] [sd_task/INFO] Displaying task end
> [0.000000] [sd_task/INFO] - state: not scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: sequential computation
> [0.000000] [sd_task/INFO] 9
> [0.000000] [test/INFO] ------------------- Schedule tasks ---------------------------
> [0.000000] [test/INFO] ------------------- Run the schedule ---------------------------
-> [60.002281] [test/INFO] ------------------- Produce the trace file---------------------------
-> [60.002281] [test/INFO] Producing the trace of the run into dag.trace
+> [62.002281] [test/INFO] ------------------- Produce the trace file---------------------------
+> [62.002281] [test/INFO] Producing the trace of the run into dag.trace
$ cat ${srcdir:=.}/dag.trace
-> [0.000000] C2-05 compute 0.000000 # root
-> [0.000000] C2-06 compute 10000000129.452715 # 0
-> [2.000380] C2-07 compute 10000000131.133657 # 1
-> [4.000760] C2-08 compute 10000000121.124870 # 2
-> [6.001140] C2-09 compute 10000000230.608025 # 3
-> [8.001140] C1-00 compute 10000000004.994019 # 4
-> [18.001520] C1-01 compute 10000000046.016401 # 5
-> [28.001520] C1-02 compute 10000000091.598791 # 6
-> [38.001901] C1-03 compute 10000000040.679438 # 7
-> [48.002281] C1-04 compute 10000000250.490017 # 8
-> [58.002281] C2-05 compute 10000000079.267649 # 9
-> [6.000760] C2-08 send C2-09 10001.781645 # 2->3
-> [6.001140] C2-09 recv C2-08 10001.781645 # 2->3
-> [38.001521] C1-02 send C1-03 10004.920415 # 6->7
-> [38.001901] C1-03 recv C1-02 10004.920415 # 6->7
-> [0.000000] C2-05 send C1-01 10014000.000000 # root->5
-> [0.292217] C1-01 recv C2-05 10014000.000000 # root->5
-> [4.000380] C2-07 send C2-08 10004.164631 # 1->2
-> [4.000760] C2-08 recv C2-07 10004.164631 # 1->2
-> [48.001901] C1-03 send C2-05 10014000.000000 # 7->end
-> [48.294118] C2-05 recv C1-03 10014000.000000 # 7->end
-> [2.000000] C2-06 send C2-07 10001.389601 # 0->1
-> [2.000380] C2-07 recv C2-06 10001.389601 # 0->1
-> [18.001140] C1-00 send C1-01 10029.262823 # 4->5
-> [18.001520] C1-01 recv C1-00 10029.262823 # 4->5
-> [48.001901] C1-03 send C1-04 10000.234049 # 7->8
-> [48.002281] C1-04 recv C1-03 10000.234049 # 7->8
-> [60.002281] C2-05 compute 10000000129.452715 # end
+> [0.000000->0.000000] C2-05 compute 0.000000 flops # root
+> [0.000000->2.000000] C2-06 compute 10000000129.452715 flops # 0
+> [2.000380->4.000380] C2-07 compute 10000000131.133657 flops # 1
+> [4.000760->6.000760] C2-08 compute 10000000121.124870 flops # 2
+> [6.001140->8.001140] C2-09 compute 10000000230.608025 flops # 3
+> [8.001140->18.001140] C1-00 compute 10000000004.994019 flops # 4
+> [18.001520->28.001520] C1-01 compute 10000000046.016401 flops # 5
+> [28.001520->38.001521] C1-02 compute 10000000091.598791 flops # 6
+> [38.001901->48.001901] C1-03 compute 10000000040.679438 flops # 7
+> [48.002281->58.002281] C1-04 compute 10000000250.490017 flops # 8
+> [58.002281->60.002281] C2-05 compute 10000000079.267649 flops # 9
+> [2.000000 -> 2.000380] C2-06 -> C2-07 transfer of 10001 bytes # 0->1
+> [4.000380 -> 4.000760] C2-07 -> C2-08 transfer of 10004 bytes # 1->2
+> [6.000760 -> 6.001140] C2-08 -> C2-09 transfer of 10002 bytes # 2->3
+> [18.001140 -> 18.001520] C1-00 -> C1-01 transfer of 10029 bytes # 4->5
+> [38.001521 -> 38.001901] C1-02 -> C1-03 transfer of 10005 bytes # 6->7
+> [48.001901 -> 48.002281] C1-03 -> C1-04 transfer of 10000 bytes # 7->8
+> [48.001901 -> 48.294118] C1-03 -> C2-05 transfer of 10014000 bytes # 7->end
+> [0.000000 -> 0.292217] C2-05 -> C1-01 transfer of 10014000 bytes # root->5
+> [60.002281->62.002281] C2-05 compute 10000000129.452715 flops # end
$ rm -f dag.trace
! expect signal SIGABRT
$ $SG_TEST_EXENV ./simulate_dot --log=no_loc "--log=sd_dotparse.thres:verbose" ${srcdir:=.}/../2clusters.xml ${srcdir:=.}/dag_with_bad_schedule.dot
> [0.000000] [surf_workstation/INFO] surf_workstation_model_init_ptask_L07
-> [0.000000] [sd_dotparse/WARNING] 'end' node is explicitly declared in the DOT file. Update it
-> [0.000000] [sd_dotparse/VERBOSE] The schedule is ignored, there are not enough computers
-> [0.000000] [sd_dotparse/WARNING] 0->1 already exists
-> [0.000000] [sd_dotparse/WARNING] is not an integer
+> [0.000000] [sd_dotparse/VERBOSE] The schedule is ignored, the task end is not correctly scheduled
> [0.000000] [sd_dotparse/VERBOSE] The schedule is ignored, the task 1 is not correctly scheduled
-> [0.000000] [sd_dotparse/WARNING] 1->2 already exists
> [0.000000] [sd_dotparse/VERBOSE] The task 0 starts on the computer 1 at the position : 1 like the task 2
-> [0.000000] [sd_dotparse/WARNING] 2->3 already exists
-> [0.000000] [sd_dotparse/WARNING] is not an integer
> [0.000000] [sd_dotparse/VERBOSE] The schedule is ignored, the task 3 is not correctly scheduled
-> [0.000000] [sd_dotparse/WARNING] 4->5 already exists
-> [0.000000] [sd_dotparse/WARNING] 6->7 already exists
-> [0.000000] [sd_dotparse/WARNING] 7->end already exists
-> [0.000000] [sd_dotparse/WARNING] 7->8 already exists
-> [0.000000] [sd_dotparse/WARNING] 'root' node is explicitly declared in the DOT file. Update it
-> [0.000000] [sd_dotparse/WARNING] root->5 already exists
-> [0.000000] [sd_dotparse/WARNING] is not an integer
-> [0.000000] [sd_dotparse/WARNING] is not an integer
> [0.000000] [sd_dotparse/VERBOSE] The schedule is ignored, the task root is not correctly scheduled
> [0.000000] [sd_dotparse/WARNING] The scheduling is ignored
> [0.000000] [xbt/CRITICAL] The dot file with the provided scheduling is wrong, more information with the option : --log=sd_dotparse.thres:verbose
$ $SG_TEST_EXENV ./simulate_dot --log=no_loc ${srcdir:=.}/../2clusters.xml ${srcdir:=.}/dag_with_good_schedule.dot
> [0.000000] [surf_workstation/INFO] surf_workstation_model_init_ptask_L07
-> [0.000000] [sd_dotparse/WARNING] 'end' node is explicitly declared in the DOT file. Update it
-> [0.000000] [sd_dotparse/WARNING] 'root' node is explicitly declared in the DOT file. Update it
-> [0.000000] [sd_dotparse/WARNING] 0->2 already exists
-> [0.000000] [sd_dotparse/WARNING] 1->2 already exists
-> [0.000000] [sd_dotparse/WARNING] 2->3 already exists
-> [0.000000] [sd_dotparse/WARNING] root->5 already exists
-> [0.000000] [sd_dotparse/WARNING] 4->5 already exists
-> [0.000000] [sd_dotparse/WARNING] 6->7 already exists
-> [0.000000] [sd_dotparse/WARNING] 7->end already exists
-> [0.000000] [sd_dotparse/WARNING] 7->8 already exists
> [0.000000] [test/INFO] ------------------- Display all tasks of the loaded DAG ---------------------------
> [0.000000] [sd_task/INFO] Displaying task root
> [0.000000] [sd_task/INFO] - state: runnable
> [0.000000] [sd_task/INFO] 8
> [0.000000] [sd_task/INFO] - post-dependencies:
> [0.000000] [sd_task/INFO] end
-> [0.000000] [sd_task/INFO] Displaying task 2->3
-> [0.000000] [sd_task/INFO] - state: scheduled not runnable
-> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: COMM_E2E
-> [0.000000] [sd_task/INFO] - amount: 10002
-> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
-> [0.000000] [sd_task/INFO] - pre-dependencies:
-> [0.000000] [sd_task/INFO] 2
-> [0.000000] [sd_task/INFO] - post-dependencies:
-> [0.000000] [sd_task/INFO] 3
-> [0.000000] [sd_task/INFO] Displaying task 6->7
-> [0.000000] [sd_task/INFO] - state: scheduled not runnable
-> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: COMM_E2E
-> [0.000000] [sd_task/INFO] - amount: 10005
-> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
-> [0.000000] [sd_task/INFO] - pre-dependencies:
-> [0.000000] [sd_task/INFO] 6
-> [0.000000] [sd_task/INFO] - post-dependencies:
-> [0.000000] [sd_task/INFO] 7
> [0.000000] [sd_task/INFO] Displaying task root->5
> [0.000000] [sd_task/INFO] - state: scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
> [0.000000] [sd_task/INFO] root
> [0.000000] [sd_task/INFO] - post-dependencies:
> [0.000000] [sd_task/INFO] 5
-> [0.000000] [sd_task/INFO] Displaying task 1->2
+> [0.000000] [sd_task/INFO] Displaying task 0->2
> [0.000000] [sd_task/INFO] - state: scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: COMM_E2E
-> [0.000000] [sd_task/INFO] - amount: 10004
+> [0.000000] [sd_task/INFO] - tracing category: taskA
+> [0.000000] [sd_task/INFO] - amount: 10001
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
> [0.000000] [sd_task/INFO] - pre-dependencies:
-> [0.000000] [sd_task/INFO] 1
+> [0.000000] [sd_task/INFO] 0
> [0.000000] [sd_task/INFO] - post-dependencies:
> [0.000000] [sd_task/INFO] 2
-> [0.000000] [sd_task/INFO] Displaying task 7->end
+> [0.000000] [sd_task/INFO] Displaying task 1->2
> [0.000000] [sd_task/INFO] - state: scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: COMM_E2E
-> [0.000000] [sd_task/INFO] - amount: 10014000
+> [0.000000] [sd_task/INFO] - tracing category: taskA
+> [0.000000] [sd_task/INFO] - amount: 10004
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
> [0.000000] [sd_task/INFO] - pre-dependencies:
-> [0.000000] [sd_task/INFO] 7
+> [0.000000] [sd_task/INFO] 1
> [0.000000] [sd_task/INFO] - post-dependencies:
-> [0.000000] [sd_task/INFO] end
-> [0.000000] [sd_task/INFO] Displaying task 0->2
+> [0.000000] [sd_task/INFO] 2
+> [0.000000] [sd_task/INFO] Displaying task 2->3
> [0.000000] [sd_task/INFO] - state: scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: oi
-> [0.000000] [sd_task/INFO] - amount: 10001
+> [0.000000] [sd_task/INFO] - tracing category: taskA
+> [0.000000] [sd_task/INFO] - amount: 10002
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
> [0.000000] [sd_task/INFO] - pre-dependencies:
-> [0.000000] [sd_task/INFO] 0
-> [0.000000] [sd_task/INFO] - post-dependencies:
> [0.000000] [sd_task/INFO] 2
+> [0.000000] [sd_task/INFO] - post-dependencies:
+> [0.000000] [sd_task/INFO] 3
> [0.000000] [sd_task/INFO] Displaying task 4->5
> [0.000000] [sd_task/INFO] - state: scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: COMM_E2E
+> [0.000000] [sd_task/INFO] - tracing category: taskB
> [0.000000] [sd_task/INFO] - amount: 10029
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
> [0.000000] [sd_task/INFO] - pre-dependencies:
> [0.000000] [sd_task/INFO] 4
> [0.000000] [sd_task/INFO] - post-dependencies:
> [0.000000] [sd_task/INFO] 5
+> [0.000000] [sd_task/INFO] Displaying task 6->7
+> [0.000000] [sd_task/INFO] - state: scheduled not runnable
+> [0.000000] [sd_task/INFO] - kind: end-to-end communication
+> [0.000000] [sd_task/INFO] - tracing category: taskB
+> [0.000000] [sd_task/INFO] - amount: 10005
+> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
+> [0.000000] [sd_task/INFO] - pre-dependencies:
+> [0.000000] [sd_task/INFO] 6
+> [0.000000] [sd_task/INFO] - post-dependencies:
+> [0.000000] [sd_task/INFO] 7
> [0.000000] [sd_task/INFO] Displaying task 7->8
> [0.000000] [sd_task/INFO] - state: scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: end-to-end communication
-> [0.000000] [sd_task/INFO] - tracing category: COMM_E2E
+> [0.000000] [sd_task/INFO] - tracing category: taskB
> [0.000000] [sd_task/INFO] - amount: 10000
> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
> [0.000000] [sd_task/INFO] - pre-dependencies:
> [0.000000] [sd_task/INFO] 7
> [0.000000] [sd_task/INFO] - post-dependencies:
> [0.000000] [sd_task/INFO] 8
+> [0.000000] [sd_task/INFO] Displaying task 7->end
+> [0.000000] [sd_task/INFO] - state: scheduled not runnable
+> [0.000000] [sd_task/INFO] - kind: end-to-end communication
+> [0.000000] [sd_task/INFO] - tracing category: taskB
+> [0.000000] [sd_task/INFO] - amount: 10014000
+> [0.000000] [sd_task/INFO] - Dependencies to satisfy: 1
+> [0.000000] [sd_task/INFO] - pre-dependencies:
+> [0.000000] [sd_task/INFO] 7
+> [0.000000] [sd_task/INFO] - post-dependencies:
+> [0.000000] [sd_task/INFO] end
> [0.000000] [sd_task/INFO] Displaying task end
> [0.000000] [sd_task/INFO] - state: scheduled not runnable
> [0.000000] [sd_task/INFO] - kind: sequential computation
> [0.000000] [sd_task/INFO] 9
> [0.000000] [sd_task/INFO] 7
> [0.000000] [test/INFO] ------------------- Run the schedule ---------------------------
-> [18.000866] [test/INFO] ------------------- Produce the trace file---------------------------
-> [18.000866] [test/INFO] Producing the trace of the run into dag_with_good_schedule.trace
+> [20.000866] [test/INFO] ------------------- Produce the trace file---------------------------
+> [20.000866] [test/INFO] Producing the trace of the run into dag_with_good_schedule.trace
$ cat ${srcdir:=.}/dag_with_good_schedule.trace
-> [0.000000] C2-05 compute 0.000000 # root
-> [0.000000] C2-06 compute 10000000129.452715 # 0
-> [0.000000] C2-05 compute 10000000131.133657 # 1
-> [2.000380] C2-06 compute 10000000121.124870 # 2
-> [4.000415] C2-06 compute 10000000230.608025 # 3
-> [6.000415] C2-05 compute 10000000004.994019 # 4
-> [8.000450] C2-05 compute 10000000046.016401 # 5
-> [10.000450] C2-05 compute 10000000091.598791 # 6
-> [12.000485] C2-05 compute 10000000040.679438 # 7
-> [14.000865] C2-06 compute 10000000250.490017 # 8
-> [16.000866] C2-06 compute 10000000079.267649 # 9
-> [4.000380] C2-06 send C2-06 10001.781645 # 2->3
-> [4.000415] C2-06 recv C2-06 10001.781645 # 2->3
-> [12.000450] C2-05 send C2-05 10004.920415 # 6->7
-> [12.000485] C2-05 recv C2-05 10004.920415 # 6->7
-> [0.000000] C2-05 send C2-05 10014000.000000 # root->5
-> [0.020123] C2-05 recv C2-05 10014000.000000 # root->5
-> [2.000000] C2-05 send C2-06 10004.164631 # 1->2
-> [2.000380] C2-06 recv C2-05 10004.164631 # 1->2
-> [14.000485] C2-05 send C2-05 10014000.000000 # 7->end
-> [14.020609] C2-05 recv C2-05 10014000.000000 # 7->end
-> [2.000000] C2-06 send C2-06 10001.389601 # 0->2
-> [2.000035] C2-06 recv C2-06 10001.389601 # 0->2
-> [8.000415] C2-05 send C2-05 10029.262823 # 4->5
-> [8.000450] C2-05 recv C2-05 10029.262823 # 4->5
-> [14.000485] C2-05 send C2-06 10000.234049 # 7->8
-> [14.000865] C2-06 recv C2-05 10000.234049 # 7->8
-> [18.000866] C2-05 compute 10000000129.452715 # end
-
+> [0.000000->0.000000] C2-05 compute 0.000000 flops # root
+> [0.000000->2.000000] C2-06 compute 10000000129.452715 flops # 0
+> [0.000000->2.000000] C2-05 compute 10000000131.133657 flops # 1
+> [2.000380->4.000380] C2-06 compute 10000000121.124870 flops # 2
+> [4.000415->6.000415] C2-06 compute 10000000230.608025 flops # 3
+> [6.000415->8.000415] C2-05 compute 10000000004.994019 flops # 4
+> [8.000450->10.000450] C2-05 compute 10000000046.016401 flops # 5
+> [10.000450->12.000450] C2-05 compute 10000000091.598791 flops # 6
+> [12.000485->14.000485] C2-05 compute 10000000040.679438 flops # 7
+> [14.000865->16.000866] C2-06 compute 10000000250.490017 flops # 8
+> [16.000866->18.000866] C2-06 compute 10000000079.267649 flops # 9
+> [0.000000 -> 0.020123] C2-05 -> C2-05 transfer of 10014000 bytes # root->5
+> [2.000000 -> 2.000035] C2-06 -> C2-06 transfer of 10001 bytes # 0->2
+> [2.000000 -> 2.000380] C2-05 -> C2-06 transfer of 10004 bytes # 1->2
+> [4.000380 -> 4.000415] C2-06 -> C2-06 transfer of 10002 bytes # 2->3
+> [8.000415 -> 8.000450] C2-05 -> C2-05 transfer of 10029 bytes # 4->5
+> [12.000450 -> 12.000485] C2-05 -> C2-05 transfer of 10005 bytes # 6->7
+> [14.000485 -> 14.000865] C2-05 -> C2-06 transfer of 10000 bytes # 7->8
+> [14.000485 -> 14.020609] C2-05 -> C2-05 transfer of 10014000 bytes # 7->end
+> [18.000866->20.000866] C2-05 compute 10000000129.452715 flops # end
$ rm -f ${srcdir:=.}/dag_with_good_schedule.trace
! expect signal SIGABRT
$ $SG_TEST_EXENV ./dot_test --log=no_loc ${srcdir:=.}/../2clusters.xml ${srcdir:=.}/dag_with_cycle.dot
> [0.000000] [surf_workstation/INFO] surf_workstation_model_init_ptask_L07
-> [0.000000] [sd_dotparse/WARNING] 'end' node is explicitly declared in the DOT file. Update it
-> [0.000000] [sd_dotparse/WARNING] 0->2 already exists
-> [0.000000] [sd_dotparse/WARNING] 1->2 already exists
-> [0.000000] [sd_dotparse/WARNING] 2->3 already exists
-> [0.000000] [sd_dotparse/WARNING] 4->5 already exists
-> [0.000000] [sd_dotparse/WARNING] 6->7 already exists
-> [0.000000] [sd_dotparse/WARNING] 7->end already exists
-> [0.000000] [sd_dotparse/WARNING] 7->8 already exists
-> [0.000000] [sd_dotparse/WARNING] 'root' node is explicitly declared in the DOT file. Update it
-> [0.000000] [sd_dotparse/WARNING] root->5 already exists
> [0.000000] [sd_daxparse/WARNING] the task root is not marked
> [0.000000] [sd_daxparse/WARNING] the task 0 is in a cycle
> [0.000000] [sd_daxparse/WARNING] the task 1 is in a cycle
${CMAKE_CURRENT_SOURCE_DIR}/replay/actions_allReduce.txt
${CMAKE_CURRENT_SOURCE_DIR}/replay/actions_barrier.txt
${CMAKE_CURRENT_SOURCE_DIR}/replay/actions_with_isend.txt
+ ${CMAKE_CURRENT_SOURCE_DIR}/replay/actions_alltoall.txt
+ ${CMAKE_CURRENT_SOURCE_DIR}/replay/actions_alltoallv.txt
+ ${CMAKE_CURRENT_SOURCE_DIR}/replay/actions_waitall.txt
PARENT_SCOPE
)
#include "Summa.h"
#include "2.5D_MM.h"
#include <stdlib.h>
+#include <stdio.h>
#include "xbt/log.h"
#define CHECK_25D 1
#include "Matrix_init.h"
#include "Summa.h"
#include "xbt/log.h"
+#include <stdio.h>
+
XBT_LOG_NEW_DEFAULT_CATEGORY(MM_Summa,
"Messages specific for this msg example");
--- /dev/null
+0 init
+1 init
+2 init
+
+0 allToAll 500 500
+1 allToAll 500 500
+2 allToAll 500 500
+
+
+0 finalize
+1 finalize
+2 finalize
--- /dev/null
+0 init
+1 init
+2 init
+
+0 allToAllV 100 1 40 30 1 20 150 1000 1 80 100 1 20 110
+1 allToAllV 1000 80 1 40 1 100 160 1000 40 1 30 10 70 140
+2 allToAllV 1000 100 30 1 1 120 150 1000 30 40 1 1 50 60
+
+
+0 finalize
+1 finalize
+2 finalize
--- /dev/null
+0 init 1
+1 init 1
+2 init 1
+
+0 bcast 5e8 1 0
+1 bcast 5e8 1 0
+2 bcast 5e8 1 0
+
+0 compute 5e8
+1 compute 2e8
+2 compute 5e8
+
+0 bcast 5e8 0 3
+1 bcast 5e8 0 3
+2 bcast 5e8 0 3
+
+0 compute 5e8
+1 compute 2e8
+2 compute 5e8
+
+0 reduce 5e8 5e8 0 4
+1 reduce 5e8 5e8 0 4
+2 reduce 5e8 5e8 0 4
+
+0 finalize
+1 finalize
+2 finalize
--- /dev/null
+0 init
+1 init
+2 init
+
+0 Irecv 1 2000
+1 Isend 0 2000
+2 Irecv 1 3000
+
+0 Irecv 2 3000
+1 Isend 2 3000
+2 Isend 0 3000
+
+0 waitAll
+1 waitAll
+2 waitAll
+
+0 finalize
+1 finalize
+2 finalize
> [Tremblay:0:(0) 152.576600] [smpi_replay/VERBOSE] 0 bcast 5e8 73.739750
> [Jupiter:1:(0) 155.197969] [smpi_replay/VERBOSE] 1 compute 2e8 2.621369
> [Tremblay:0:(0) 157.673699] [smpi_replay/VERBOSE] 0 compute 5e8 5.097100
-> [Fafard:2:(0) 222.850234] [smpi_replay/VERBOSE] 2 reduce 5e8 5e8 72.283426
-> [Jupiter:1:(0) 231.413449] [smpi_replay/VERBOSE] 1 reduce 5e8 5e8 76.215480
-> [Tremblay:0:(0) 231.413449] [smpi_replay/VERBOSE] 0 reduce 5e8 5e8 73.739750
-> [Tremblay:0:(0) 231.413449] [smpi_replay/INFO] Simulation time 231.413
+> [Fafard:2:(0) 229.403658] [smpi_replay/VERBOSE] 2 reduce 5e8 5e8 78.836850
+> [Tremblay:0:(0) 236.510549] [smpi_replay/VERBOSE] 0 reduce 5e8 5e8 78.836850
+> [Jupiter:1:(0) 237.966873] [smpi_replay/VERBOSE] 1 reduce 5e8 5e8 82.768904
+> [Jupiter:1:(0) 237.966873] [smpi_replay/INFO] Simulation time 237.967
$ rm -f replay/one_trace
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/smpi/computing' to '1'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'smpi/cpu_threshold' to '1'
> [0.000000] [surf_config/INFO] Switching workstation model to compound since you changed the network and/or cpu model(s)
-> [Tremblay:0:(0) 231.413449] [smpi_replay/INFO] Simulation time 231.413
+> [Jupiter:1:(0) 237.966873] [smpi_replay/INFO] Simulation time 237.967
$ rm -f replay/one_trace
> 12 155.197969 2 2 6
> 13 157.673699 2 1
> 12 157.673699 2 1 6
-> 13 222.850234 2 3
-> 12 222.850234 2 3 4
-> 13 222.850234 2 3
-> 7 222.850234 1 3
-> 13 231.413449 2 2
-> 12 231.413449 2 2 4
-> 13 231.413449 2 2
-> 7 231.413449 1 2
-> 13 231.413449 2 1
-> 12 231.413449 2 1 4
-> 13 231.413449 2 1
-> 7 231.413449 1 1
+> 13 229.403658 2 3
+> 12 229.403658 2 3 4
+> 13 229.403658 2 3
+> 7 229.403658 1 3
+> 13 236.510549 2 1
+> 12 236.510549 2 1 4
+> 13 236.510549 2 1
+> 7 236.510549 1 1
+> 13 237.966873 2 2
+> 12 237.966873 2 2 4
+> 13 237.966873 2 2
+> 7 237.966873 1 2
$ rm -f ./simgrid.trace
> [Jupiter:1:(0) 160.586347] [smpi_replay/INFO] Simulation time 160.586
$ rm -f replay/one_trace
+
+p Test of AllToAll replay with SMPI (one trace for all processes)
+
+< replay/actions_alltoall.txt
+$ mkfile replay/one_trace
+
+$ ../../bin/smpirun -ext smpi_replay --log=replay.thresh:critical --log=smpi_replay.thresh:verbose --log=no_loc --cfg=smpi/cpu_threshold:1 -np 3 -platform ${srcdir:=.}/replay/replay_platform.xml -hostfile ${srcdir:=.}/hostfile ./smpi_replay replay/one_trace
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'maxmin/precision' to '1e-9'
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'network/model' to 'SMPI'
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'network/TCP_gamma' to '4194304'
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'smpi/cpu_threshold' to '1'
+> [0.000000] [surf_config/INFO] Switching workstation model to compound since you changed the network and/or cpu model(s)
+> [Tremblay:0:(0) 0.004041] [smpi_replay/VERBOSE] 0 allToAll 500 500 0.004041
+> [Jupiter:1:(0) 0.006920] [smpi_replay/VERBOSE] 1 allToAll 500 500 0.006920
+> [Fafard:2:(0) 0.006920] [smpi_replay/VERBOSE] 2 allToAll 500 500 0.006920
+> [Fafard:2:(0) 0.006920] [smpi_replay/INFO] Simulation time 0.00692004
+
+$ rm -f replay/one_trace
+
+p Test of AllToAllv replay with SMPI (one trace for all processes)
+
+< replay/actions_alltoallv.txt
+$ mkfile replay/one_trace
+
+$ ../../bin/smpirun -ext smpi_replay --log=replay.thresh:critical --log=smpi_replay.thresh:verbose --log=no_loc --cfg=smpi/cpu_threshold:1 -np 3 -platform ${srcdir:=.}/replay/replay_platform.xml -hostfile ${srcdir:=.}/hostfile ./smpi_replay replay/one_trace
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'maxmin/precision' to '1e-9'
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'network/model' to 'SMPI'
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'network/TCP_gamma' to '4194304'
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'smpi/cpu_threshold' to '1'
+> [0.000000] [surf_config/INFO] Switching workstation model to compound since you changed the network and/or cpu model(s)
+> [Tremblay:0:(0) 0.003999] [smpi_replay/VERBOSE] 0 allToAllV 100 1 40 30 1 20 150 1000 1 80 100 1 20 110 0.003999
+> [Jupiter:1:(0) 0.006934] [smpi_replay/VERBOSE] 1 allToAllV 1000 80 1 40 1 100 160 1000 40 1 30 10 70 140 0.006934
+> [Fafard:2:(0) 0.006936] [smpi_replay/VERBOSE] 2 allToAllV 1000 100 30 1 1 120 150 1000 30 40 1 1 50 60 0.006936
+> [Fafard:2:(0) 0.006936] [smpi_replay/INFO] Simulation time 0.00693554
+
+$ rm -f replay/one_trace
+
+p Test of waitall replay with SMPI (one trace for all processes)
+
+< replay/actions_waitall.txt
+$ mkfile replay/one_trace
+
+$ ../../bin/smpirun -ext smpi_replay --log=replay.thresh:critical --log=smpi_replay.thresh:verbose --log=no_loc --cfg=smpi/cpu_threshold:1 -np 3 -platform ${srcdir:=.}/replay/replay_platform.xml -hostfile ${srcdir:=.}/hostfile ./smpi_replay replay/one_trace
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'maxmin/precision' to '1e-9'
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'network/model' to 'SMPI'
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'network/TCP_gamma' to '4194304'
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'smpi/cpu_threshold' to '1'
+> [0.000000] [surf_config/INFO] Switching workstation model to compound since you changed the network and/or cpu model(s)
+> [Tremblay:0:(0) 0.000000] [smpi_replay/VERBOSE] 0 Irecv 1 2000 0.000000
+> [Jupiter:1:(0) 0.000000] [smpi_replay/VERBOSE] 1 Isend 0 2000 0.000000
+> [Fafard:2:(0) 0.000000] [smpi_replay/VERBOSE] 2 Irecv 1 3000 0.000000
+> [Tremblay:0:(0) 0.000000] [smpi_replay/VERBOSE] 0 Irecv 2 3000 0.000000
+> [Jupiter:1:(0) 0.000000] [smpi_replay/VERBOSE] 1 Isend 2 3000 0.000000
+> [Jupiter:1:(0) 0.000000] [smpi_replay/VERBOSE] 1 waitAll 0.000000
+> [Fafard:2:(0) 0.000000] [smpi_replay/VERBOSE] 2 Isend 0 3000 0.000000
+> [Tremblay:0:(0) 0.003787] [smpi_replay/VERBOSE] 0 waitAll 0.003787
+> [Fafard:2:(0) 0.006220] [smpi_replay/VERBOSE] 2 waitAll 0.006220
+> [Fafard:2:(0) 0.006220] [smpi_replay/INFO] Simulation time 0.00622039
+
+$ rm -f replay/one_trace
} s_msg_host_priv_t, *msg_host_priv_t;
static inline msg_host_priv_t MSG_host_priv(msg_host_t host){
- return xbt_lib_get_level(host, MSG_HOST_LEVEL);
+ return (msg_host_priv_t )xbt_lib_get_level(host, MSG_HOST_LEVEL);
}
} e_msg_vm_state_t;
typedef struct msg_vm {
- const char *name;
+ char *name;
s_xbt_swag_hookup_t all_vms_hookup;
s_xbt_swag_hookup_t host_vms_hookup;
xbt_dynar_t processes;
#cmakedefine HAVE_MMAP @HAVE_MMAP@
/* Get the config */
-#undef SIMGRID_NEED_GETLINE
#undef SIMGRID_NEED_ASPRINTF
#undef SIMGRID_NEED_VASPRINTF
-@need_getline@
@simgrid_need_asprintf@
@simgrid_need_vasprintf@
-#include <stdio.h> /* FILE, getline if it exists */
-#include <stdlib.h> /* size_t, ssize_t */
-XBT_PUBLIC(ssize_t) xbt_getline(char **lineptr, size_t * n, FILE * stream);
-
#include <stdarg.h>
/* snprintf related functions */
} s_xbt_dynar_t;
static XBT_INLINE void
-_xbt_dynar_cursor_first(const xbt_dynar_t dynar,
+_xbt_dynar_cursor_first(const xbt_dynar_t dynar _XBT_GNUC_UNUSED,
unsigned int *const cursor)
{
/* iterating over a NULL dynar is a no-op (but we don't want to have uninitialized counters) */
* @brief String manipulation functions
*
* This module defines several string related functions. We redefine some quite classical
- * functions on the platforms were they are not nativaly defined (such as getline() or
+ * functions on the platforms where they are not natively defined (such as xbt_getline() or
* asprintf()), while some other are a bit more exotic.
* @{
*/
+/* Our own implementation of getline, mainly useful on platforms that do not provide it natively */
+#include <stdio.h> /* FILE */
+#include <stdlib.h> /* size_t, ssize_t */
+XBT_PUBLIC(ssize_t) xbt_getline(char **lineptr, size_t * n, FILE * stream);
+
/* Trim related functions */
XBT_PUBLIC(void) xbt_str_rtrim(char *s, const char *char_list);
XBT_PUBLIC(void) xbt_str_ltrim(char *s, const char *char_list);
jxbt_throw_native(env,bprintf("comm is null"));
return JNI_FALSE;
}
- xbt_ex_t e;
- TRY {
- if (MSG_comm_test(comm)) {
- msg_error_t status = MSG_comm_get_status(comm);
- if (status == MSG_OK) {
- jcomm_bind_task(env,jcomm);
- return JNI_TRUE;
- }
- else {
- //send the correct exception
- jmsg_throw_status(env,status);
- return JNI_FALSE;
- }
- }
- else {
- return JNI_FALSE;
+
+ if (MSG_comm_test(comm)) {
+ msg_error_t status = MSG_comm_get_status(comm);
+ if (status == MSG_OK) {
+ jcomm_bind_task(env,jcomm);
+ return JNI_TRUE;
+ } else {
+ //send the correct exception
+ jmsg_throw_status(env,status);
}
}
- CATCH(e) {
- xbt_ex_free(e);
- }
-
return JNI_FALSE;
}
JNIEXPORT void JNICALL
val_t PJ_value_get_or_new (const char *name, const char *color, type_t father)
{
+ val_t ret = 0;
xbt_ex_t e;
TRY {
- return PJ_value_get(name, father);
+ ret = PJ_value_get(name, father);
}
CATCH(e) {
xbt_ex_free(e);
- return PJ_value_new(name, color, father);
+ ret = PJ_value_new(name, color, father);
}
- THROW_IMPOSSIBLE;
+ return ret;
}
val_t PJ_value_get (const char *name, type_t father)
int raw_mem_set = (mmalloc_get_current_heap() == raw_heap);
MC_SET_RAW_MEM;
-
- if(mc_heap_comparison_ignore == NULL)
- mc_heap_comparison_ignore = xbt_dynar_new(sizeof(mc_heap_ignore_region_t), heap_ignore_region_free_voidp);
mc_heap_ignore_region_t region = NULL;
region = xbt_new0(s_mc_heap_ignore_region_t, 1);
region->address = address;
region->size = size;
-
+
region->block = ((char*)address - (char*)((xbt_mheap_t)std_heap)->heapbase) / BLOCKSIZE + 1;
if(((xbt_mheap_t)std_heap)->heapinfo[region->block].type == 0){
region->fragment = ((uintptr_t) (ADDR2UINT (address) % (BLOCKSIZE))) >> ((xbt_mheap_t)std_heap)->heapinfo[region->block].type;
((xbt_mheap_t)std_heap)->heapinfo[region->block].busy_frag.ignore[region->fragment] = 1;
}
+
+ if(mc_heap_comparison_ignore == NULL){
+ mc_heap_comparison_ignore = xbt_dynar_new(sizeof(mc_heap_ignore_region_t), heap_ignore_region_free_voidp);
+ xbt_dynar_push(mc_heap_comparison_ignore, ®ion);
+ if(!raw_mem_set)
+ MC_UNSET_RAW_MEM;
+ return;
+ }
unsigned int cursor = 0;
mc_heap_ignore_region_t current_region;
- xbt_dynar_foreach(mc_heap_comparison_ignore, cursor, current_region){
+ int start = 0;
+ int end = xbt_dynar_length(mc_heap_comparison_ignore) - 1;
+
+ while(start <= end){
+ cursor = (start + end) / 2;
+ current_region = (mc_heap_ignore_region_t)xbt_dynar_get_as(mc_heap_comparison_ignore, cursor, mc_heap_ignore_region_t);
+ if(current_region->address == address){
+ heap_ignore_region_free(region);
+ if(!raw_mem_set)
+ MC_UNSET_RAW_MEM;
+ return;
+ }
+ if(current_region->address < address)
+ start = cursor + 1;
if(current_region->address > address)
- break;
+ end = cursor - 1;
}
- xbt_dynar_insert_at(mc_heap_comparison_ignore, cursor, ®ion);
+ if(current_region->address < address)
+ xbt_dynar_insert_at(mc_heap_comparison_ignore, cursor + 1, ®ion);
+ else
+ xbt_dynar_insert_at(mc_heap_comparison_ignore, cursor, ®ion);
MC_UNSET_RAW_MEM;
msg_host_t host)
{
xbt_ex_t e;
- msg_error_t ret;
+ msg_error_t ret = MSG_OK;
XBT_DEBUG
("MSG_task_receive_ext: Trying to receive a message on mailbox '%s'",
alias);
#endif
}
+
/**
* \ingroup msg_VMs
* \brief Reboot the VM, restarting all the processes in it.
*/
void MSG_vm_reboot(msg_vm_t vm)
{
- xbt_dynar_t new_processes = xbt_dynar_new(sizeof(msg_process_t),NULL);
-
+ xbt_dynar_t process_list = xbt_dynar_new(sizeof(msg_process_t), NULL);
msg_process_t process;
unsigned int cpt;
- xbt_dynar_foreach(vm->processes,cpt,process) {
- msg_process_t new_process = MSG_process_restart(process);
- xbt_dynar_push_as(new_processes,msg_process_t,new_process);
-
+ xbt_dynar_foreach(vm->processes, cpt, process) {
+ xbt_dynar_push_as(process_list, msg_process_t, process);
}
- xbt_dynar_foreach(new_processes, cpt, process) {
- MSG_vm_bind(vm,process);
+ xbt_dynar_foreach(process_list, cpt, process) {
+ msg_process_t new_process = MSG_process_restart(process);
+ MSG_vm_bind(vm, new_process);
}
- xbt_dynar_free(&new_processes);
+ xbt_dynar_free(&process_list);
}
+
/** @brief Destroy a msg_vm_t.
* @ingroup msg_VMs
*/
TRACE_msg_vm_end(vm);
#endif
-
+ xbt_free(vm->name);
xbt_dynar_free(&vm->processes);
xbt_free(vm);
}
}
//set task category
+ if (task->category)
+ xbt_free(task->category);
task->category = xbt_strdup (category);
XBT_DEBUG("SD task %p(%s), category %s", task, task->name, task->category);
}
parallel
} seq_par_t;
-void dot_add_task(Agnode_t * dag_node);
-void dot_add_parallel_task(Agnode_t * dag_node);
-void dot_add_input_dependencies(SD_task_t current_job, Agedge_t * edge,
- seq_par_t seq_or_par);
-void dot_add_output_dependencies(SD_task_t current_job, Agedge_t * edge,
- seq_par_t seq_or_par);
-xbt_dynar_t SD_dotload_generic(const char * filename);
-
-static double dot_parse_double(const char *string) {
- if (string == NULL)
- return -1;
- double value = -1;
- char *err;
-
- errno = 0;
- value = strtod(string,&err);
- if(errno) {
- XBT_WARN("Failed to convert string to double: %s\n",strerror(errno));
- return -1;
- }
- return value;
-}
-
-
-static int dot_parse_int(const char *string) {
- if (string == NULL)
- return -10;
- int ret = 0;
- int value = -1;
-
- ret = sscanf(string, "%d", &value);
- if (ret != 1)
- XBT_WARN("%s is not an integer", string);
- return value;
-}
+xbt_dynar_t SD_dotload_generic(const char * filename, seq_par_t seq_or_par);
static xbt_dynar_t result;
static xbt_dict_t jobs;
-static xbt_dict_t files;
static xbt_dict_t computers;
-static SD_task_t root_task, end_task;
static Agraph_t *dag_dot;
static bool schedule = true;
-static void dump_res() {
- unsigned int cursor;
- SD_task_t task;
- xbt_dynar_foreach(result, cursor, task) {
- XBT_INFO("Task %d", cursor);
- SD_task_dump(task);
- }
-}
-
-
static void dot_task_free(void *task) {
SD_task_t t = task;
SD_task_destroy(t);
 * if they aren't given, they default to zero.
*/
xbt_dynar_t SD_dotload(const char *filename) {
- SD_dotload_generic(filename);
+ computers = xbt_dict_new_homogeneous(NULL);
+ schedule = false;
+ SD_dotload_generic(filename, sequential);
xbt_dynar_t computer = NULL;
xbt_dict_cursor_t dict_cursor;
char *computer_name;
}
xbt_dynar_t SD_dotload_with_sched(const char *filename) {
- SD_dotload_generic(filename);
+ computers = xbt_dict_new_homogeneous(NULL);
+ SD_dotload_generic(filename, sequential);
- if(schedule == true){
+ if(schedule){
xbt_dynar_t computer = NULL;
xbt_dict_cursor_t dict_cursor;
char *computer_name;
const SD_workstation_t *workstations = SD_workstation_get_list ();
xbt_dict_foreach(computers,dict_cursor,computer_name,computer){
- int count_computer = dot_parse_int(computer_name);
+ int count_computer = atoi(computer_name);
unsigned int count=0;
SD_task_t task;
SD_task_t task_previous = NULL;
}
xbt_dynar_t SD_PTG_dotload(const char * filename) {
- xbt_assert(filename, "Unable to use a null file descriptor\n");
- FILE *in_file = fopen(filename, "r");
- dag_dot = agread(in_file, NIL(Agdisc_t *));
-
- result = xbt_dynar_new(sizeof(SD_task_t), dot_task_p_free);
- files = xbt_dict_new_homogeneous(&dot_task_free);
- jobs = xbt_dict_new_homogeneous(NULL);
- computers = xbt_dict_new_homogeneous(NULL);
- root_task = SD_task_create_comp_par_amdahl("root", NULL, 0., 0.);
- /* by design the root task is always SCHEDULABLE */
- __SD_task_set_state(root_task, SD_SCHEDULABLE);
-
- xbt_dict_set(jobs, "root", root_task, NULL);
- xbt_dynar_push(result, &root_task);
- end_task = SD_task_create_comp_par_amdahl("end", NULL, 0., 0.);
- xbt_dict_set(jobs, "end", end_task, NULL);
-
- Agnode_t *dag_node = NULL;
- for (dag_node = agfstnode(dag_dot); dag_node; dag_node = agnxtnode(dag_dot,
- dag_node)) {
- dot_add_parallel_task(dag_node);
- }
- agclose(dag_dot);
- xbt_dict_free(&jobs);
-
- /* And now, post-process the files.
- * We want a file task per pair of computation tasks exchanging the file.
- * Duplicate on need
- * Files not produced in the system are said to be produced by root task
- * (top of DAG).
- * Files not consumed in the system are said to be consumed by end task
- * (bottom of DAG).
- */
- xbt_dict_cursor_t cursor;
- SD_task_t file;
- char *name;
- xbt_dict_foreach(files, cursor, name, file) {
- unsigned int cpt1, cpt2;
- SD_task_t newfile = NULL;
- SD_dependency_t depbefore, depafter;
- if (xbt_dynar_is_empty(file->tasks_before)) {
- xbt_dynar_foreach(file->tasks_after, cpt2, depafter) {
- SD_task_t newfile =
- SD_task_create_comm_par_mxn_1d_block(file->name, NULL, file->amount);
- SD_task_dependency_add(NULL, NULL, root_task, newfile);
- SD_task_dependency_add(NULL, NULL, newfile, depafter->dst);
- xbt_dynar_push(result, &newfile);
- }
- } else if (xbt_dynar_is_empty(file->tasks_after)) {
- xbt_dynar_foreach(file->tasks_before, cpt2, depbefore) {
- SD_task_t newfile =
- SD_task_create_comm_par_mxn_1d_block(file->name, NULL,
- file->amount);
- SD_task_dependency_add(NULL, NULL, depbefore->src, newfile);
- SD_task_dependency_add(NULL, NULL, newfile, end_task);
- xbt_dynar_push(result, &newfile);
- }
- } else {
- xbt_dynar_foreach(file->tasks_before, cpt1, depbefore) {
- xbt_dynar_foreach(file->tasks_after, cpt2, depafter) {
- if (depbefore->src == depafter->dst) {
- XBT_WARN
- ("File %s is produced and consumed by task %s. This loop dependency will prevent the execution of the task.",
- file->name, depbefore->src->name);
- }
- newfile =
- SD_task_create_comm_par_mxn_1d_block(file->name, NULL,
- file->amount);
- SD_task_dependency_add(NULL, NULL, depbefore->src, newfile);
- SD_task_dependency_add(NULL, NULL, newfile, depafter->dst);
- xbt_dynar_push(result, &newfile);
- }
- }
- }
- }
-
- /* Push end task last */
- xbt_dynar_push(result, &end_task);
-
- /* Free previous copy of the files */
- xbt_dict_free(&files);
- xbt_dict_free(&computers);
- fclose(in_file);
+ xbt_dynar_t result = SD_dotload_generic(filename, parallel);
if (!acyclic_graph_detail(result)) {
XBT_ERROR("The DOT described in %s is not a DAG. It contains a cycle.",
basename((char*)filename));
return result;
}
+#ifdef HAVE_CGRAPH_H
+static int edge_compare(const void *a, const void *b)
+{
+ unsigned va = AGSEQ(*(Agedge_t **)a);
+ unsigned vb = AGSEQ(*(Agedge_t **)b);
+ return va == vb ? 0 : (va < vb ? -1 : 1);
+}
+#endif
-xbt_dynar_t SD_dotload_generic(const char * filename) {
+xbt_dynar_t SD_dotload_generic(const char * filename, seq_par_t seq_or_par){
xbt_assert(filename, "Unable to use a null file descriptor\n");
- FILE *in_file = fopen(filename, "r");
- dag_dot = agread(in_file, NIL(Agdisc_t *));
-
+ unsigned int i;
result = xbt_dynar_new(sizeof(SD_task_t), dot_task_p_free);
- files = xbt_dict_new_homogeneous(NULL);
jobs = xbt_dict_new_homogeneous(NULL);
- computers = xbt_dict_new_homogeneous(NULL);
- root_task = SD_task_create_comp_seq("root", NULL, 0);
- /* by design the root task is always SCHEDULABLE */
- __SD_task_set_state(root_task, SD_SCHEDULABLE);
-
- xbt_dict_set(jobs, "root", root_task, NULL);
- xbt_dynar_push(result, &root_task);
- end_task = SD_task_create_comp_seq("end", NULL, 0);
- xbt_dict_set(jobs, "end", end_task, NULL);
-
- Agnode_t *dag_node = NULL;
- for (dag_node = agfstnode(dag_dot); dag_node; dag_node = agnxtnode(dag_dot, dag_node)) {
- dot_add_task(dag_node);
- }
- agclose(dag_dot);
- xbt_dict_free(&jobs);
-
- /* And now, post-process the files.
- * We want a file task per pair of computation tasks exchanging the file.
- * Duplicate on need
- * Files not produced in the system are said to be produced by root task
- * (top of DAG).
- * Files not consumed in the system are said to be consumed by end task
- * (bottom of DAG).
+ FILE *in_file = fopen(filename, "r");
+ if (in_file == NULL)
+ xbt_die("Failed to open file: %s", filename);
+ dag_dot = agread(in_file, NIL(Agdisc_t *));
+ SD_task_t root, end, task;
+ /*
+ * Create all the nodes
*/
- xbt_dict_cursor_t cursor;
- SD_task_t file;
- char *name;
- xbt_dict_foreach(files, cursor, name, file) {
- XBT_DEBUG("Considering file '%s' stored in the dictionary",
- file->name);
- if (xbt_dynar_is_empty(file->tasks_before)) {
- XBT_DEBUG("file '%s' has no source. Add dependency from 'root'",
- file->name);
- SD_task_dependency_add(NULL, NULL, root_task, file);
- } else if (xbt_dynar_is_empty(file->tasks_after)) {
- XBT_DEBUG("file '%s' has no destination. Add dependency to 'end'",
- file->name);
- SD_task_dependency_add(NULL, NULL, file, end_task);
- }
- xbt_dynar_push(result, &file);
- }
+ Agnode_t *node = NULL;
+ for (node = agfstnode(dag_dot); node; node = agnxtnode(dag_dot, node)) {
- /* Push end task last */
- xbt_dynar_push(result, &end_task);
+ char *name = agnameof(node);
+ double amount = atof(agget(node, (char *) "size"));
+ double alpha;
- xbt_dict_free(&files);
- fclose(in_file);
- if (!acyclic_graph_detail(result)) {
- XBT_ERROR("The DOT described in %s is not a DAG. It contains a cycle.",
- basename((char*)filename));
- xbt_dynar_free(&result);
- /* (result == NULL) here */
- }
- return result;
-}
+ if (seq_or_par == sequential){
+ XBT_DEBUG("See <job id=%s amount =%.0f>", name, amount);
+ } else {
+      char *char_alpha = agget(node, (char *) "alpha");
+      if (char_alpha && strcmp(char_alpha, "")){
+        alpha = atof(char_alpha);
+ if (alpha == -1.){
+ XBT_DEBUG("negative alpha value provided. Set to 0.");
+ alpha = 0.0 ;
+ }
+ } else {
+ XBT_DEBUG("no alpha value provided. Set to 0");
+ alpha = 0.0 ;
+ }
-/* dot_add_parallel_task create a sd_task of SD_TASK_COMP_PAR_AMDHAL type and
- * all transfers required for this task. The execution time of the task is
- * given by the attribute size. The unit of size is the Flop.*/
-void dot_add_parallel_task(Agnode_t * dag_node) {
- char *name = agnameof(dag_node);
- SD_task_t current_job;
- double amount = dot_parse_double(agget(dag_node, (char *) "size"));
- double alpha = dot_parse_double(agget(dag_node, (char *) "alpha"));
-
- if (alpha == -1.)
- alpha = 0.0;
-
- XBT_DEBUG("See <job id=%s amount=%s %.0f alpha=%.2f>", name,
- agget(dag_node, (char *) "size"), amount, alpha);
- if (!strcmp(name, "root")){
- XBT_WARN("'root' node is explicitly declared in the DOT file. Update it");
- root_task->amount = amount;
- root_task->alpha = alpha;
-#ifdef HAVE_TRACING
- TRACE_sd_dotloader (root_task, agget (dag_node, (char*)"category"));
-#endif
- }
+ XBT_DEBUG("See <job id=%s amount =%.0f alpha = %.3f>",
+ name, amount, alpha);
+ }
- if (!strcmp(name, "end")){
- XBT_WARN("'end' node is explicitly declared in the DOT file. Update it");
- end_task->amount = amount;
- end_task->alpha = alpha;
+ if (!(task = xbt_dict_get_or_null(jobs, name))) {
+ if (seq_or_par == sequential){
+ task = SD_task_create_comp_seq(name, NULL , amount);
+ } else {
+ task = SD_task_create_comp_par_amdahl(name, NULL , amount, alpha);
+ }
#ifdef HAVE_TRACING
- TRACE_sd_dotloader (end_task, agget (dag_node, (char*)"category"));
+ TRACE_sd_dotloader (task, agget (node, (char*)"category"));
#endif
- }
+ xbt_dict_set(jobs, name, task, NULL);
+ if (!strcmp(name, "root")){
+ /* by design the root task is always SCHEDULABLE */
+ __SD_task_set_state(task, SD_SCHEDULABLE);
+ /* Put it at the beginning of the dynar */
+ xbt_dynar_insert_at(result, 0, &task);
+ } else {
+ if (!strcmp(name, "end")){
+ XBT_DEBUG("Declaration of the 'end' node, don't store it yet.");
+ end = task;
+ /* Should be inserted later in the dynar */
+ } else {
+ xbt_dynar_push(result, &task);
+ }
+ }
- current_job = xbt_dict_get_or_null(jobs, name);
- if (current_job == NULL) {
- current_job =
- SD_task_create_comp_par_amdahl(name, NULL , amount, alpha);
-#ifdef HAVE_TRACING
- TRACE_sd_dotloader (current_job, agget (dag_node, (char*)"category"));
-#endif
- xbt_dict_set(jobs, name, current_job, NULL);
- xbt_dynar_push(result, ¤t_job);
+ if((seq_or_par == sequential) &&
+ (schedule ||
+ XBT_LOG_ISENABLED(sd_dotparse, xbt_log_priority_verbose))){
+      /* retrieve the scheduling information for this task, but only if
+       * everything is consistent */
+ int performer, order;
+ char *char_performer = agget(node, (char *) "performer");
+ char *char_order = agget(node, (char *) "order");
+      /* performer is the computer which executes the task */
+ performer =
+ ((!char_performer || !strcmp(char_performer,"")) ? -1:atoi(char_performer));
+      /* order gives the rank of the task on its computer */
+ order = ((!char_order || !strcmp(char_order, ""))? -1:atoi(char_order));
+
+      XBT_DEBUG ("Task '%s' is scheduled on workstation '%d' at position '%d'",
+ task->name, performer, order);
+ xbt_dynar_t computer = NULL;
+ if(performer != -1 && order != -1){
+ /* required parameters are given */
+ computer = xbt_dict_get_or_null(computers, char_performer);
+ if(computer == NULL){
+ computer = xbt_dynar_new(sizeof(SD_task_t), NULL);
+ xbt_dict_set(computers, char_performer, computer, NULL);
+ }
+ if(performer < xbt_lib_length(host_lib)){
+ /* the wanted computer is available */
+ SD_task_t *task_test = NULL;
+ if(order < computer->used)
+ task_test = xbt_dynar_get_ptr(computer,order);
+ if(task_test != NULL && *task_test != NULL && *task_test != task){
+ /* the user gives the same order to several tasks */
+ schedule = false;
+          XBT_VERB("Task %s is already scheduled on computer %s at position %s, as requested for task %s",
+                   (*task_test)->name, char_performer, char_order,
+                   task->name);
+ }else{
+ /* the parameter seems to be ok */
+ xbt_dynar_set_as(computer, order, SD_task_t, task);
+ }
+ }else{
+          /* the platform does not have enough hosts to schedule the DAG
+           * as the user requested */
+ schedule = false;
+ XBT_VERB("The schedule is ignored, there are not enough computers");
+ }
+ }
+ else {
+ /* one of required parameters is not given */
+ schedule = false;
+ XBT_VERB("The schedule is ignored, the task %s is not correctly scheduled",
+ task->name);
+ }
+ }
+ } else {
+ XBT_WARN("Task '%s' is defined more than once", name);
+ }
}
- Agedge_t *e;
- int count = 0;
- for (e = agfstin(dag_dot, dag_node); e; e = agnxtin(dag_dot, e)) {
- dot_add_input_dependencies(current_job, e, parallel);
- count++;
- }
- if (count == 0 && current_job != root_task) {
- SD_task_dependency_add(NULL, NULL, root_task, current_job);
- }
- count = 0;
- for (e = agfstout(dag_dot, dag_node); e; e = agnxtout(dag_dot, e)) {
- dot_add_output_dependencies(current_job, e, parallel);
- count++;
- }
- if (count == 0 && current_job != end_task) {
- SD_task_dependency_add(NULL, NULL, current_job, end_task);
+ /*
+ * Check if 'root' and 'end' nodes have been explicitly declared.
+ * If not, create them.
+ */
+ if (!(root = xbt_dict_get_or_null(jobs, "root"))){
+ if (seq_or_par == sequential)
+ root = SD_task_create_comp_seq("root", NULL, 0);
+ else
+ root = SD_task_create_comp_par_amdahl("root", NULL, 0, 0);
+ /* by design the root task is always SCHEDULABLE */
+ __SD_task_set_state(root, SD_SCHEDULABLE);
+ /* Put it at the beginning of the dynar */
+ xbt_dynar_insert_at(result, 0, &root);
}
-}
-
-/* dot_add_task create a sd_task and all transfers required for this
- * task. The execution time of the task is given by the attribute size.
- * The unit of size is the Flop.*/
-void dot_add_task(Agnode_t * dag_node) {
- char *name = agnameof(dag_node);
- SD_task_t current_job;
- double runtime = dot_parse_double(agget(dag_node, (char *) "size"));
- XBT_DEBUG("See <job id=%s runtime=%s %.0f>", name,
- agget(dag_node, (char *) "size"), runtime);
-
- if (!strcmp(name, "root")){
- XBT_WARN("'root' node is explicitly declared in the DOT file. Update it");
- root_task->amount = runtime;
-#ifdef HAVE_TRACING
- TRACE_sd_dotloader (root_task, agget (dag_node, (char*)"category"));
-#endif
+ if (!(end = xbt_dict_get_or_null(jobs, "end"))){
+ if (seq_or_par == sequential)
+ end = SD_task_create_comp_seq("end", NULL, 0);
+ else
+ end = SD_task_create_comp_par_amdahl("end", NULL, 0, 0);
+ /* Should be inserted later in the dynar */
}
- if (!strcmp(name, "end")){
- XBT_WARN("'end' node is explicitly declared in the DOT file. Update it");
- end_task->amount = runtime;
-#ifdef HAVE_TRACING
- TRACE_sd_dotloader (end_task, agget (dag_node, (char*)"category"));
+ /*
+ * Create edges
+ */
+ xbt_dynar_t edges = xbt_dynar_new(sizeof(Agedge_t*), NULL);
+ for (node = agfstnode(dag_dot); node; node = agnxtnode(dag_dot, node)) {
+ unsigned cursor;
+ Agedge_t * edge;
+ xbt_dynar_reset(edges);
+ for (edge = agfstout(dag_dot, node); edge; edge = agnxtout(dag_dot, edge))
+ xbt_dynar_push_as(edges, Agedge_t *, edge);
+#ifdef HAVE_CGRAPH_H
+ /* Hack: circumvent a bug in libcgraph, where the edges are not always given
+ * back in creation order. We sort them again, according to their sequence
+ * id. The problem appears to be solved (i.e.: I did not test it) in
+ * graphviz' mercurial repository by the following changeset:
+ * changeset: 8431:d5f1fb7e8103
+ * user: Emden Gansner <erg@research.att.com>
+ * date: Tue Oct 11 12:38:58 2011 -0400
+ * summary: Make sure edges are stored in node creation order
+ * It should be fixed in graphviz 2.30 and above.
+ */
+ xbt_dynar_sort(edges, edge_compare);
#endif
- }
-
- current_job = xbt_dict_get_or_null(jobs, name);
- if (!current_job) {
- current_job =
- SD_task_create_comp_seq(name, NULL , runtime);
+ xbt_dynar_foreach(edges, cursor, edge) {
+ SD_task_t src, dst;
+ char *src_name=agnameof(agtail(edge));
+ char *dst_name=agnameof(aghead(edge));
+ double size = atof(agget(edge, (char *) "size"));
+
+ src = xbt_dict_get_or_null(jobs, src_name);
+ dst = xbt_dict_get_or_null(jobs, dst_name);
+
+ if (size > 0) {
+ char *name =
+ xbt_malloc((strlen(src_name)+strlen(dst_name)+6)*sizeof(char));
+ sprintf(name, "%s->%s", src_name, dst_name);
+ XBT_DEBUG("See <transfer id=%s amount = %.0f>", name, size);
+ if (!(task = xbt_dict_get_or_null(jobs, name))) {
+ if (seq_or_par == sequential)
+ task = SD_task_create_comm_e2e(name, NULL , size);
+ else
+ task = SD_task_create_comm_par_mxn_1d_block(name, NULL , size);
#ifdef HAVE_TRACING
- TRACE_sd_dotloader (current_job, agget (dag_node, (char*)"category"));
+ TRACE_sd_dotloader (task, agget (node, (char*)"category"));
#endif
- xbt_dict_set(jobs, name, current_job, NULL);
- xbt_dynar_push(result, &current_job);
- }
- Agedge_t *e;
- int count = 0;
-
- for (e = agfstin(dag_dot, dag_node); e; e = agnxtin(dag_dot, e)) {
- dot_add_input_dependencies(current_job, e, sequential);
- count++;
- }
- if (count == 0 && current_job != root_task) {
- SD_task_dependency_add(NULL, NULL, root_task, current_job);
- }
- count = 0;
- for (e = agfstout(dag_dot, dag_node); e; e = agnxtout(dag_dot, e)) {
- dot_add_output_dependencies(current_job, e, sequential);
- count++;
- }
- if (count == 0 && current_job != end_task) {
- SD_task_dependency_add(NULL, NULL, current_job, end_task);
- }
-
- if(schedule || XBT_LOG_ISENABLED(sd_dotparse, xbt_log_priority_verbose)){
- /* try to take the information to schedule the task only if all is
- * right*/
- /* performer is the computer which execute the task */
- unsigned long performer = -1;
- char * char_performer = agget(dag_node, (char *) "performer");
- if (char_performer != NULL)
- performer = (long) dot_parse_int(char_performer);
-
- /* order is giving the task order on one computer */
- unsigned long order = -1;
- char * char_order = agget(dag_node, (char *) "order");
- if (char_order != NULL)
- order = (long) dot_parse_int(char_order);
- xbt_dynar_t computer = NULL;
- if(performer != -1 && order != -1){
- /* required parameters are given */
- computer = xbt_dict_get_or_null(computers, char_performer);
- if(computer == NULL){
- computer = xbt_dynar_new(sizeof(SD_task_t), NULL);
- xbt_dict_set(computers, char_performer, computer, NULL);
- }
- if(performer < xbt_lib_length(host_lib)){
- /* the wanted computer is available */
- SD_task_t *task_test = NULL;
- if(order < computer->used)
- task_test = xbt_dynar_get_ptr(computer,order);
- if(task_test != NULL && *task_test != NULL && *task_test != current_job){
- /* the user gives the same order to several tasks */
- schedule = false;
- XBT_VERB("The task %s starts on the computer %s at the position : %s like the task %s",
- (*task_test)->name, char_performer, char_order,
- current_job->name);
- }else{
- /* the parameter seems to be ok */
- xbt_dynar_set_as(computer, order, SD_task_t, current_job);
+ SD_task_dependency_add(NULL, NULL, src, task);
+ SD_task_dependency_add(NULL, NULL, task, dst);
+ xbt_dict_set(jobs, name, task, NULL);
+ xbt_dynar_push(result, &task);
+ } else {
+ XBT_WARN("Task '%s' is defined more than once", name);
}
- }else{
- /* the platform has not enough processors to schedule the DAG like
- * the user wants*/
- schedule = false;
- XBT_VERB("The schedule is ignored, there are not enough computers");
+ xbt_free(name);
+ } else {
+ SD_task_dependency_add(NULL, NULL, src, dst);
}
}
- else {
- /* one of required parameters is not given */
- schedule = false;
- XBT_VERB("The schedule is ignored, the task %s is not correctly scheduled",
- current_job->name);
- }
}
-}
+ xbt_dynar_free(&edges);
-/* dot_add_output_dependencies create the dependencies between a task
- * and a transfers. This is given by the edges in the dot file.
- * The amount of data transfers is given by the attribute size on the
- * edge. */
-void dot_add_input_dependencies(SD_task_t current_job, Agedge_t * edge,
- seq_par_t seq_or_par) {
- SD_task_t file = NULL;
- char *name_tail=agnameof(agtail(edge));
- char *name_head=agnameof(aghead(edge));
- char *name = xbt_malloc((strlen(name_head)+strlen(name_tail)+6)*sizeof(char));
- sprintf(name, "%s->%s", name_tail, name_head);
- double size = dot_parse_double(agget(edge, (char *) "size"));
- XBT_DEBUG("add input -- edge: %s, size : %e, get size : %s",
- name, size, agget(edge, (char *) "size"));
-
- if (size > 0) {
- file = xbt_dict_get_or_null(files, name);
- if (file == NULL) {
- if (seq_or_par == sequential){
- file = SD_task_create_comm_e2e(name, NULL, size);
- } else {
- file = SD_task_create_comm_par_mxn_1d_block(name, NULL, size);
- }
-#ifdef HAVE_TRACING
- TRACE_sd_dotloader (file, agget (edge, (char*)"category"));
-#endif
- XBT_DEBUG("add input -- adding %s to the dict as new file", name);
- xbt_dict_set(files, name, file, NULL);
- } else {
- XBT_WARN("%s already exists", name);
- if (SD_task_get_amount(file) != size) {
- XBT_WARN("Ignoring file %s size redefinition from %.0f to %.0f",
- name, SD_task_get_amount(file), size);
- }
- }
- SD_task_dependency_add(NULL, NULL, file, current_job);
- } else {
- file = xbt_dict_get_or_null(jobs, name_tail);
- if (file != NULL) {
- SD_task_dependency_add(NULL, NULL, file, current_job);
+ /* all compute and transfer tasks have been created, put the "end" node at
+ * the end of the dynar
+ */
+ XBT_DEBUG("All tasks have been created, put %s at the end of the dynar",
+ end->name);
+ xbt_dynar_push(result, &end);
+
+ /* Connect entry tasks to 'root', and exit tasks to 'end'*/
+
+ xbt_dynar_foreach (result, i, task){
+ if (task == root || task == end)
+ continue;
+ if (xbt_dynar_is_empty(task->tasks_before)) {
+ XBT_DEBUG("file '%s' has no source. Add dependency from 'root'",
+ task->name);
+ SD_task_dependency_add(NULL, NULL, root, task);
+ } else if (xbt_dynar_is_empty(task->tasks_after)) {
+ XBT_DEBUG("file '%s' has no destination. Add dependency to 'end'",
+ task->name);
+ SD_task_dependency_add(NULL, NULL, task, end);
}
}
- free(name);
-}
-/* dot_add_output_dependencies create the dependencies between a
- * transfers and a task. This is given by the edges in the dot file.
- * The amount of data transfers is given by the attribute size on the
- * edge. */
-void dot_add_output_dependencies(SD_task_t current_job, Agedge_t * edge,
- seq_par_t seq_or_par) {
- SD_task_t file;
- char *name_tail=agnameof(agtail(edge));
- char *name_head=agnameof(aghead(edge));
- char *name = xbt_malloc((strlen(name_head)+strlen(name_tail)+6)*sizeof(char));
- sprintf(name, "%s->%s", name_tail, name_head);
- double size = dot_parse_double(agget(edge, (char *) "size"));
- XBT_DEBUG("add_output -- edge: %s, size : %e, get size : %s",
- name, size, agget(edge, (char *) "size"));
-
- if (size > 0) {
- file = xbt_dict_get_or_null(files, name);
- if (file == NULL) {
- if (seq_or_par == sequential){
- file = SD_task_create_comm_e2e(name, NULL, size);
- } else {
- file = SD_task_create_comm_par_mxn_1d_block(name, NULL, size);
- }
-#ifdef HAVE_TRACING
- TRACE_sd_dotloader (file, agget (edge, (char*)"category"));
-#endif
- XBT_DEBUG("add output -- adding %s to the dict as new file", name);
- xbt_dict_set(files, name, file, NULL);
- } else {
- XBT_WARN("%s already exists", name);
- if (SD_task_get_amount(file) != size) {
- XBT_WARN("Ignoring file %s size redefinition from %.0f to %.0f",
- name, SD_task_get_amount(file), size);
- }
- }
- SD_task_dependency_add(NULL, NULL, current_job, file);
- if (xbt_dynar_length(file->tasks_before) > 1) {
- XBT_WARN("File %s created at more than one location...", file->name);
- }
- } else {
- file = xbt_dict_get_or_null(jobs, name_head);
- if (file != NULL) {
- SD_task_dependency_add(NULL, NULL, current_job, file);
- }
+ agclose(dag_dot);
+ xbt_dict_free(&jobs);
+ fclose(in_file);
+
+ if (!acyclic_graph_detail(result)) {
+ XBT_ERROR("The DOT described in %s is not a DAG. It contains a cycle.",
+ basename((char*)filename));
+ xbt_dynar_free(&result);
+ /* (result == NULL) here */
}
- free(name);
+ return result;
}
SD_task_t dst)
{
xbt_dynar_t dynar;
- int length;
+ unsigned long length;
int found = 0;
- int i;
+ unsigned long i;
SD_dependency_t dependency;
dynar = src->tasks_after;
for (i = 0; i < length && !found; i++) {
xbt_dynar_get_cpy(dynar, i, &dependency);
found = (dependency->dst == dst);
- XBT_DEBUG("Dependency %d: dependency->dst = %s", i,
+ XBT_DEBUG("Dependency %lu: dependency->dst = %s", i,
SD_task_get_name(dependency->dst));
}
{
xbt_dynar_t dynar;
- int length;
+ unsigned long length;
int found = 0;
- int i;
+ unsigned long i;
SD_dependency_t dependency;
/* remove the dependency from src->tasks_after */
{
xbt_dynar_t dynar;
- int length;
+ unsigned long length;
int found = 0;
- int i;
+ unsigned long i;
SD_dependency_t dependency;
dynar = src->tasks_after;
switch (process->waiting_action->type) {
- case SIMIX_ACTION_EXECUTE:
- case SIMIX_ACTION_PARALLEL_EXECUTE:
- SIMIX_host_execution_destroy(process->waiting_action);
- break;
-
- case SIMIX_ACTION_COMMUNICATE:
- xbt_fifo_remove(process->comms, process->waiting_action);
- SIMIX_comm_cancel(process->waiting_action);
- break;
-
- case SIMIX_ACTION_SLEEP:
- SIMIX_process_sleep_destroy(process->waiting_action);
- break;
-
- case SIMIX_ACTION_SYNCHRO:
- SIMIX_synchro_stop_waiting(process, &process->simcall);
- SIMIX_synchro_destroy(process->waiting_action);
- break;
-
- case SIMIX_ACTION_IO:
- SIMIX_io_destroy(process->waiting_action);
- break;
-
- /* **************************************/
- /* TUTORIAL: New API */
- case SIMIX_ACTION_NEW_API:
- SIMIX_new_api_destroy(process->waiting_action);
- break;
- /* **************************************/
+ case SIMIX_ACTION_EXECUTE:
+ case SIMIX_ACTION_PARALLEL_EXECUTE:
+ SIMIX_host_execution_destroy(process->waiting_action);
+ break;
+
+ case SIMIX_ACTION_COMMUNICATE:
+ xbt_fifo_remove(process->comms, process->waiting_action);
+ SIMIX_comm_cancel(process->waiting_action);
+ SIMIX_comm_destroy(process->waiting_action);
+ break;
+
+ case SIMIX_ACTION_SLEEP:
+ SIMIX_process_sleep_destroy(process->waiting_action);
+ break;
+
+ case SIMIX_ACTION_SYNCHRO:
+ SIMIX_synchro_stop_waiting(process, &process->simcall);
+ SIMIX_synchro_destroy(process->waiting_action);
+ break;
+
+ case SIMIX_ACTION_IO:
+ SIMIX_io_destroy(process->waiting_action);
+ break;
+
+ /* **************************************/
+ /* TUTORIAL: New API */
+ case SIMIX_ACTION_NEW_API:
+ SIMIX_new_api_destroy(process->waiting_action);
+ break;
+ /* **************************************/
}
}
case SURF_ACTION_FAILED:
simcall->issuer->context->iwannadie = 1;
//SMX_EXCEPTION(simcall->issuer, host_error, 0, "Host failed");
+ state = SIMIX_SRC_HOST_FAILURE;
break;
case SURF_ACTION_DONE:
{
int index = MPI_UNDEFINED;
- if (rank < group->size) {
+ if (0 <= rank && rank < group->size) {
index = group->rank_to_index_map[rank];
}
return index;
#include <xbt.h>
#include <xbt/replay.h>
-#define MPI_DTYPE MPI_BYTE
-
XBT_LOG_NEW_DEFAULT_SUBCATEGORY(smpi_replay,smpi,"Trace Replay with SMPI");
int communicator_size = 0;
static int active_processes = 0;
xbt_dynar_t *reqq;
+MPI_Datatype MPI_DEFAULT_TYPE, MPI_CURRENT_TYPE;
+
static void log_timed_action (const char *const *action, double clock){
if (XBT_LOG_ISENABLED(smpi_replay, xbt_log_priority_verbose)){
char *name = xbt_str_join_array(action, " ");
}
}
-
typedef struct {
xbt_dynar_t isends; /* of MPI_Request */
xbt_dynar_t irecvs; /* of MPI_Request */
return value;
}
+static MPI_Datatype decode_datatype(const char *const action)
+{
+// Declared datatypes
+
+ switch(atoi(action))
+ {
+ case 0:
+ MPI_CURRENT_TYPE=MPI_DOUBLE;
+ break;
+ case 1:
+ MPI_CURRENT_TYPE=MPI_INT;
+ break;
+ case 2:
+ MPI_CURRENT_TYPE=MPI_CHAR;
+ break;
+ case 3:
+ MPI_CURRENT_TYPE=MPI_SHORT;
+ break;
+ case 4:
+ MPI_CURRENT_TYPE=MPI_LONG;
+ break;
+ case 5:
+ MPI_CURRENT_TYPE=MPI_FLOAT;
+ break;
+ case 6:
+ MPI_CURRENT_TYPE=MPI_BYTE;
+ break;
+ default:
+ MPI_CURRENT_TYPE=MPI_DEFAULT_TYPE;
+
+ }
+ return MPI_CURRENT_TYPE;
+}
+
static void action_init(const char *const *action)
{
int i;
smpi_replay_globals_t globals = xbt_new(s_smpi_replay_globals_t, 1);
globals->isends = xbt_dynar_new(sizeof(MPI_Request),NULL);
globals->irecvs = xbt_dynar_new(sizeof(MPI_Request),NULL);
-
+
+ if(action[2]) MPI_DEFAULT_TYPE= MPI_DOUBLE; // default MPE datatype
+ else MPI_DEFAULT_TYPE= MPI_BYTE; // default TAU datatype
smpi_process_set_user_data((void*) globals);
for(i=0;i<active_processes;i++){
reqq[i]=xbt_dynar_new(sizeof(MPI_Request),NULL);
}
-
-
}
static void action_finalize(const char *const *action)
{
smpi_replay_globals_t globals =
(smpi_replay_globals_t) smpi_process_get_user_data();
-
if (globals){
XBT_DEBUG("There are %lu isends and %lu irecvs in the dynars",
xbt_dynar_length(globals->isends),xbt_dynar_length(globals->irecvs));
int to = atoi(action[2]);
double size=parse_double(action[3]);
double clock = smpi_process_simulated_elapsed();
+
+ if(action[4]) {
+ MPI_CURRENT_TYPE=decode_datatype(action[4]);
+ } else {
+ MPI_CURRENT_TYPE= MPI_DEFAULT_TYPE;
+ }
+
#ifdef HAVE_TRACING
int rank = smpi_comm_rank(MPI_COMM_WORLD);
TRACE_smpi_computing_out(rank);
TRACE_smpi_send(rank, rank, dst_traced);
#endif
- smpi_mpi_send(NULL, size, MPI_DTYPE, to , 0, MPI_COMM_WORLD);
+ smpi_mpi_send(NULL, size, MPI_CURRENT_TYPE, to , 0, MPI_COMM_WORLD);
log_timed_action (action, clock);
double size=parse_double(action[3]);
double clock = smpi_process_simulated_elapsed();
MPI_Request request;
+
+ if(action[4]) MPI_CURRENT_TYPE=decode_datatype(action[4]);
+ else MPI_CURRENT_TYPE= MPI_DEFAULT_TYPE;
+
smpi_replay_globals_t globals =
(smpi_replay_globals_t) smpi_process_get_user_data();
#ifdef HAVE_TRACING
TRACE_smpi_send(rank, rank, dst_traced);
#endif
- request = smpi_mpi_isend(NULL, size, MPI_DTYPE, to, 0,MPI_COMM_WORLD);
+ request = smpi_mpi_isend(NULL, size, MPI_CURRENT_TYPE, to, 0,MPI_COMM_WORLD);
#ifdef HAVE_TRACING
TRACE_smpi_ptp_out(rank, rank, dst_traced, __FUNCTION__);
double size=parse_double(action[3]);
double clock = smpi_process_simulated_elapsed();
MPI_Status status;
+
+ if(action[4]) MPI_CURRENT_TYPE=decode_datatype(action[4]);
+ else MPI_CURRENT_TYPE= MPI_DEFAULT_TYPE;
+
#ifdef HAVE_TRACING
int rank = smpi_comm_rank(MPI_COMM_WORLD);
int src_traced = smpi_group_rank(smpi_comm_group(MPI_COMM_WORLD), from);
TRACE_smpi_ptp_in(rank, src_traced, rank, __FUNCTION__);
#endif
- smpi_mpi_recv(NULL, size, MPI_DTYPE, from, 0, MPI_COMM_WORLD, &status);
+ smpi_mpi_recv(NULL, size, MPI_CURRENT_TYPE, from, 0, MPI_COMM_WORLD, &status);
#ifdef HAVE_TRACING
TRACE_smpi_ptp_out(rank, src_traced, rank, __FUNCTION__);
double size=parse_double(action[3]);
double clock = smpi_process_simulated_elapsed();
MPI_Request request;
+
smpi_replay_globals_t globals =
(smpi_replay_globals_t) smpi_process_get_user_data();
+
+ if(action[4]) MPI_CURRENT_TYPE=decode_datatype(action[4]);
+ else MPI_CURRENT_TYPE= MPI_DEFAULT_TYPE;
#ifdef HAVE_TRACING
int rank = smpi_comm_rank(MPI_COMM_WORLD);
TRACE_smpi_ptp_in(rank, src_traced, rank, __FUNCTION__);
#endif
- request = smpi_mpi_irecv(NULL, size, MPI_DTYPE, from, 0, MPI_COMM_WORLD);
+ request = smpi_mpi_irecv(NULL, size, MPI_CURRENT_TYPE, from, 0, MPI_COMM_WORLD);
#ifdef HAVE_TRACING
TRACE_smpi_ptp_out(rank, src_traced, rank, __FUNCTION__);
log_timed_action (action, clock);
}
+
static void action_bcast(const char *const *action)
{
double size = parse_double(action[2]);
double clock = smpi_process_simulated_elapsed();
+ int root=0;
+ /*
+ * Initialize MPI_CURRENT_TYPE in order to reduce
+ * the number of checks
+ */
+ MPI_CURRENT_TYPE= MPI_DEFAULT_TYPE;
+
+ if(action[3]) {
+ root= atoi(action[3]);
+ if(action[4]) {
+ MPI_CURRENT_TYPE=decode_datatype(action[4]);
+ }
+ }
+
#ifdef HAVE_TRACING
int rank = smpi_comm_rank(MPI_COMM_WORLD);
TRACE_smpi_computing_out(rank);
TRACE_smpi_collective_in(rank, root_traced, __FUNCTION__);
#endif
- smpi_mpi_bcast(NULL, size, MPI_DTYPE, 0, MPI_COMM_WORLD);
+ smpi_mpi_bcast(NULL, size, MPI_CURRENT_TYPE, root, MPI_COMM_WORLD);
#ifdef HAVE_TRACING
TRACE_smpi_collective_out(rank, root_traced, __FUNCTION__);
TRACE_smpi_computing_in(rank);
static void action_reduce(const char *const *action)
{
- double size = parse_double(action[2]);
+ double comm_size = parse_double(action[2]);
+ double comp_size = parse_double(action[3]);
double clock = smpi_process_simulated_elapsed();
+ int root=0;
+ MPI_CURRENT_TYPE= MPI_DEFAULT_TYPE;
+
+ if(action[4]) {
+ root= atoi(action[4]);
+ if(action[5]) {
+ MPI_CURRENT_TYPE=decode_datatype(action[5]);
+ }
+ }
+
#ifdef HAVE_TRACING
int rank = smpi_comm_rank(MPI_COMM_WORLD);
TRACE_smpi_computing_out(rank);
int root_traced = smpi_group_rank(smpi_comm_group(MPI_COMM_WORLD), 0);
TRACE_smpi_collective_in(rank, root_traced, __FUNCTION__);
#endif
- smpi_mpi_reduce(NULL, NULL, size, MPI_DTYPE, MPI_OP_NULL, 0, MPI_COMM_WORLD);
+ smpi_mpi_reduce(NULL, NULL, comm_size, MPI_CURRENT_TYPE, MPI_OP_NULL, root, MPI_COMM_WORLD);
+ smpi_execute_flops(comp_size);
#ifdef HAVE_TRACING
TRACE_smpi_collective_out(rank, root_traced, __FUNCTION__);
TRACE_smpi_computing_in(rank);
static void action_allReduce(const char *const *action) {
double comm_size = parse_double(action[2]);
double comp_size = parse_double(action[3]);
+
+ if(action[4]) MPI_CURRENT_TYPE=decode_datatype(action[4]);
+ else MPI_CURRENT_TYPE= MPI_DEFAULT_TYPE;
+
double clock = smpi_process_simulated_elapsed();
#ifdef HAVE_TRACING
int rank = smpi_comm_rank(MPI_COMM_WORLD);
TRACE_smpi_computing_out(rank);
TRACE_smpi_collective_in(rank, -1, __FUNCTION__);
#endif
- smpi_mpi_reduce(NULL, NULL, comm_size, MPI_DTYPE, MPI_OP_NULL, 0, MPI_COMM_WORLD);
+ smpi_mpi_reduce(NULL, NULL, comm_size, MPI_CURRENT_TYPE, MPI_OP_NULL, 0, MPI_COMM_WORLD);
smpi_execute_flops(comp_size);
- smpi_mpi_bcast(NULL, comm_size, MPI_DTYPE, 0, MPI_COMM_WORLD);
+ smpi_mpi_bcast(NULL, comm_size, MPI_CURRENT_TYPE, 0, MPI_COMM_WORLD);
#ifdef HAVE_TRACING
TRACE_smpi_collective_out(rank, -1, __FUNCTION__);
TRACE_smpi_computing_in(rank);
static void action_allToAll(const char *const *action) {
double clock = smpi_process_simulated_elapsed();
int comm_size = smpi_comm_size(MPI_COMM_WORLD);
- int send_size = atoi(action[2]);
- int recv_size = atoi(action[3]);
+ int send_size = parse_double(action[2]);
+ int recv_size = parse_double(action[3]);
void *send = xbt_new0(int, send_size*comm_size);
void *recv = xbt_new0(int, send_size*comm_size);
-
+
+ if(action[4]) MPI_CURRENT_TYPE=decode_datatype(action[4]);
+ else MPI_CURRENT_TYPE= MPI_DEFAULT_TYPE;
#ifdef HAVE_TRACING
int rank = smpi_process_index();
#endif
if (send_size < 200 && comm_size > 12) {
- smpi_coll_tuned_alltoall_bruck(send, send_size, MPI_DTYPE,
- recv, recv_size, MPI_DTYPE,
+ smpi_coll_tuned_alltoall_bruck(send, send_size, MPI_CURRENT_TYPE,
+ recv, recv_size, MPI_CURRENT_TYPE,
MPI_COMM_WORLD);
- } else if (send_size < 3000) {
-
- smpi_coll_tuned_alltoall_basic_linear(send, send_size, MPI_DTYPE,
- recv, recv_size, MPI_DTYPE,
+ } else if (send_size < 3000) {
+ smpi_coll_tuned_alltoall_basic_linear(send, send_size, MPI_CURRENT_TYPE,
+ recv, recv_size, MPI_CURRENT_TYPE,
MPI_COMM_WORLD);
} else {
- smpi_coll_tuned_alltoall_pairwise(send, send_size, MPI_DTYPE,
- recv, recv_size, MPI_DTYPE,
+ smpi_coll_tuned_alltoall_pairwise(send, send_size, MPI_CURRENT_TYPE,
+ recv, recv_size, MPI_CURRENT_TYPE,
MPI_COMM_WORLD);
}
int *recvcounts = xbt_new0(int, comm_size);
int *senddisps = xbt_new0(int, comm_size);
int *recvdisps = xbt_new0(int, comm_size);
- MPI_Datatype sendtype,recvtype;
- send_buf_size=atoi(action[2]);
- recv_buf_size=atoi(action[3+2*comm_size]);
+ send_buf_size=parse_double(action[2]);
+ recv_buf_size=parse_double(action[3+2*comm_size]);
int *sendbuf = xbt_new0(int, send_buf_size);
int *recvbuf = xbt_new0(int, recv_buf_size);
- sendtype=MPI_DTYPE;
- recvtype=MPI_DTYPE;
-
+ if(action[4+4*comm_size]) MPI_CURRENT_TYPE=decode_datatype(action[4+4*comm_size]);
+ else MPI_CURRENT_TYPE= MPI_DEFAULT_TYPE;
+
for(i=0;i<comm_size;i++) {
sendcounts[i] = atoi(action[i+3]);
senddisps[i] = atoi(action[i+3+comm_size]);
TRACE_smpi_computing_out(rank);
TRACE_smpi_collective_in(rank, -1, __FUNCTION__);
#endif
- smpi_coll_basic_alltoallv(sendbuf, sendcounts, senddisps, sendtype,
- recvbuf, recvcounts, recvdisps, recvtype,
+ smpi_coll_basic_alltoallv(sendbuf, sendcounts, senddisps, MPI_CURRENT_TYPE,
+ recvbuf, recvcounts, recvdisps, MPI_CURRENT_TYPE,
MPI_COMM_WORLD);
#ifdef HAVE_TRACING
TRACE_smpi_collective_out(rank, -1, __FUNCTION__);
if ((min == -1.0) || (next_event_date > NOW + min)) break;
- XBT_DEBUG("Updating models");
+ XBT_DEBUG("Updating models (min = %g, NOW = %g, next_event_date = %g)",min, NOW, next_event_date);
while ((event =
tmgr_history_get_next_event_leq(history, next_event_date,
&value,
resource->model->name, min);
resource->model->model_private->update_resource_state(resource,
event, value,
- NOW + min);
+ next_event_date);
}
} while (1);
* sees it and react accordingly. This would kill that need for surf to call simix.
*
*/
+
+static void remove_watched_host(void *key)
+{
+ xbt_dict_remove(watched_hosts_lib, *(char**)key);
+}
+
void surf_watched_hosts(void)
{
char *key;
void *host;
xbt_dict_cursor_t cursor;
+ xbt_dynar_t hosts = xbt_dynar_new(sizeof(char*), NULL);
XBT_DEBUG("Check for host SURF_RESOURCE_ON on watched_hosts_lib");
xbt_dict_foreach(watched_hosts_lib,cursor,key,host)
if(SIMIX_host_get_state(host) == SURF_RESOURCE_ON){
XBT_INFO("Restart processes on host: %s",SIMIX_host_get_name(host));
SIMIX_host_autorestart(host);
- xbt_dict_remove(watched_hosts_lib,key);
+ xbt_dynar_push_as(hosts, char*, key);
}
else
XBT_DEBUG("See SURF_RESOURCE_OFF on host: %s",key);
}
+ xbt_dynar_map(hosts, remove_watched_host);
+ xbt_dynar_free(&hosts);
}
trace_event->idx++;
} else if (event->delta > 0) { /* Last element, checking for periodicity */
xbt_heap_push(h->heap, trace_event, event_date + event->delta);
- trace_event->idx = 0;
+ trace_event->idx = 1; /* not 0 as the first event is a placeholder to handle when events really start */
} else { /* We don't need this trace_event anymore */
trace_event->free_me = 1;
}
link_L07_t nw_link = id;
if (nw_link->type == SURF_WORKSTATION_RESOURCE_LINK) {
- XBT_DEBUG("Updating link %s (%p) with value=%f",
- surf_resource_name(nw_link), nw_link, value);
+ XBT_DEBUG("Updating link %s (%p) with value=%f for date=%g",
+ surf_resource_name(nw_link), nw_link, value, date);
if (event_type == nw_link->bw_event) {
nw_link->bw_current = value;
lmm_update_constraint_bound(ptask_maxmin_system, nw_link->constraint,
/* automaton - representation of büchi automaton */
-/* Copyright (c) 2011. The SimGrid Team.
- * All rights reserved. */
+/* Copyright (c) 2011-2013. The SimGrid Team. All rights reserved. */
/* This program is free software; you can redistribute it and/or modify it
* under the terms of the license (GNU LGPL) which comes with this package. */
#include "xbt/automaton.h"
+#include <stdio.h> /* printf */
xbt_automaton_t xbt_automaton_new(){
xbt_automaton_t automaton = NULL;
}
*(val++) = '\0';
- if (strncmp(name, "contexts/", strlen("contexts/")))
+ if (strncmp(name, "contexts/", strlen("contexts/")) && strncmp(name, "path", strlen("path")))
XBT_INFO("Configuration change: Set '%s' to '%s'", name, val);
TRY {
#include "xbt/misc.h"
#include "simgrid_config.h" /*HAVE_MMAP _XBT_WIN32 */
#include "internal_config.h" /* MMALLOC_WANT_OVERRIDE_LEGACY */
-#include "time.h" /* to seed the random generator */
#include "xbt/sysdep.h"
#include "xbt/log.h"
*/
static void xbt_preinit(void) _XBT_GNUC_CONSTRUCTOR(200);
static void xbt_postexit(void);
-static unsigned int seed = 2147483647;
#ifdef _XBT_WIN32
# undef _XBT_NEED_INIT_PRAGMA
#endif
-static void xbt_preinit(void)
-{
+static void xbt_preinit(void) {
+ unsigned int seed = 2147483647;
+
#ifdef MMALLOC_WANT_OVERRIDE_LEGACY
mmalloc_preinit();
#endif
xbt_os_thread_mod_preinit();
xbt_fifo_preinit();
xbt_dict_preinit();
- atexit(xbt_postexit);
+
+ srand(seed);
+ srand48(seed);
+
+ atexit(xbt_postexit);
}
static void xbt_postexit(void)
for (i=0;i<*argc;i++) {
xbt_dynar_push(xbt_cmdline,&(argv[i]));
}
-
- srand(seed);
- srand48(seed);
xbt_log_init(argc, argv);
}
/** @brief Get a single line from the stream (reimplementation of the GNU getline)
*
- * This is a redefinition of the GNU getline function, used on platforms where
- * it does not exists.
+ * This is a reimplementation of the GNU getline function, so that our code doesn't depend on the GNU libc.
*
* xbt_getline() reads an entire line from stream, storing the address of the
* buffer containing the text into *buf. The buffer is null-terminated and
* In either case, on a successful call, *buf and *n will be updated to reflect
* the buffer address and allocated size respectively.
*/
-ssize_t xbt_getline(char **buf, size_t * n, FILE * stream)
+ssize_t xbt_getline(char **buf, size_t *n, FILE *stream)
{
-#if !defined(SIMGRID_NEED_GETLINE)
- return getline(buf, n, stream);
-#else
- size_t i;
+ ssize_t i;
int ch;
+ ch = getc(stream);
+ if (ferror(stream) || feof(stream))
+ return -1;
+
if (!*buf) {
- *buf = xbt_malloc(512);
*n = 512;
+ *buf = xbt_malloc(*n);
}
- if (feof(stream))
- return (ssize_t) - 1;
-
- for (i = 0; (ch = fgetc(stream)) != EOF; i++) {
-
- if (i >= (*n) + 1)
+ i = 0;
+ do {
+ if (i == *n)
*buf = xbt_realloc(*buf, *n += 512);
-
- (*buf)[i] = ch;
-
- if ((*buf)[i] == '\n') {
- i++;
- (*buf)[i] = '\0';
- break;
- }
- }
+ (*buf)[i++] = ch;
+ } while (ch != '\n' && (ch = getc(stream)) != EOF);
if (i == *n)
*buf = xbt_realloc(*buf, *n += 1);
-
(*buf)[i] = '\0';
- return (ssize_t) i;
-#endif
+ return i;
}
/*
--- /dev/null
+cmake_minimum_required(VERSION 2.6)
+
+set(EXECUTABLE_OUTPUT_PATH "${CMAKE_CURRENT_BINARY_DIR}")
+
+add_executable(availability_test availability_test.c)
+
+### Add definitions for compile
+if(NOT WIN32)
+ target_link_libraries(availability_test simgrid m)
+else()
+ target_link_libraries(availability_test simgrid)
+endif()
+
+set(tesh_files
+ ${tesh_files}
+ ${CMAKE_CURRENT_SOURCE_DIR}/availability_test.tesh
+ PARENT_SCOPE
+ )
+set(xml_files
+ ${xml_files}
+ ${CMAKE_CURRENT_SOURCE_DIR}/simulacrum_7_hosts.xml
+ PARENT_SCOPE
+ )
+set(teshsuite_src
+ ${teshsuite_src}
+ ${CMAKE_CURRENT_SOURCE_DIR}/availability_test.c
+ PARENT_SCOPE
+ )
+set(bin_files
+ ${bin_files}
+ PARENT_SCOPE
+ )
+set(txt_files
+ ${txt_files}
+ ${CMAKE_CURRENT_SOURCE_DIR}/linkBandwidth7.bw
+ PARENT_SCOPE
+ )
--- /dev/null
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stddef.h>
+#include <unistd.h>
+#include <simdag/simdag.h>
+#include <xbt/log.h>
+#include <xbt/ex.h>
+#include <signal.h>
+
+
+typedef struct {
+ FILE *daxFile;
+ FILE *envFile;
+} XMLfiles;
+
+
+static void usage(char *name)
+{
+ fprintf(stdout, "Error on parameters.\n");
+ fprintf(stdout, "usage: %s <XML environment file> <DAX file>\n", name);
+}
+
+static void checkParameters(int argc, char *argv[])
+{
+ if (argc != 3) {
+ int i;
+ printf("====%d===\n",argc);
+ for(i=0; i<argc; i++) {
+ printf("\t%s\n",argv[i]);
+ }
+ usage(argv[0]);
+ exit(-1);
+ }
+
+ /* Check that files exist */
+ XMLfiles xmlFiles;
+ if ((xmlFiles.envFile = fopen(argv[1], "r")) == NULL) {
+ fprintf(stderr, "Error while opening XML file %s.\n", argv[1]);
+ exit(-1);
+ }
+ fclose(xmlFiles.envFile);
+
+ if ((xmlFiles.daxFile = fopen(argv[2], "r")) == NULL) {
+ fprintf(stderr, "Error while opening DAX file %s.\n", argv[2]);
+ exit(-1);
+ }
+ fclose(xmlFiles.daxFile);
+}
+
+static int name_compare_hosts(const void *n1, const void *n2)
+{
+ char name1[80], name2[80];
+ strcpy(name1, SD_workstation_get_name(*((SD_workstation_t *) n1)));
+ strcpy(name2, SD_workstation_get_name(*((SD_workstation_t *) n2)));
+
+ return strcmp(name1, name2);
+}
+
+static void scheduleDAX(xbt_dynar_t dax)
+{
+ unsigned int cursor;
+ SD_task_t task;
+
+ const SD_workstation_t *ws_list = SD_workstation_get_list();
+ int totalHosts = SD_workstation_get_number();
+ qsort((void *) ws_list, totalHosts, sizeof(SD_workstation_t),
+ name_compare_hosts);
+
+ int count = SD_workstation_get_number();
+ //fprintf(stdout, "No. workstations: %d, %d\n", count, (dax != NULL));
+
+ xbt_dynar_foreach(dax, cursor, task) {
+ if (SD_task_get_kind(task) == SD_TASK_COMP_SEQ) {
+ if (!strcmp(SD_task_get_name(task), "end")
+ || !strcmp(SD_task_get_name(task), "root")) {
+ fprintf(stdout, "Scheduling %s to node: %s\n", SD_task_get_name(task),
+ SD_workstation_get_name(ws_list[0]));
+ SD_task_schedulel(task, 1, ws_list[0]);
+ } else {
+ fprintf(stdout, "Scheduling %s to node: %s\n", SD_task_get_name(task),
+ SD_workstation_get_name(ws_list[(cursor) % count]));
+ SD_task_schedulel(task, 1, ws_list[(cursor) % count]);
+ }
+ }
+ }
+}
+
+/* static void printTasks(xbt_dynar_t completedTasks) */
+/* { */
+/* unsigned int cursor; */
+/* SD_task_t task; */
+
+/* xbt_dynar_foreach(completedTasks, cursor, task) */
+/* { */
+/* if(SD_task_get_state(task) == SD_DONE) */
+/* { */
+/* fprintf(stdout, "Task done: %s, %f, %f\n", */
+/* SD_task_get_name(task), SD_task_get_start_time(task), SD_task_get_finish_time(task)); */
+/* } */
+/* } */
+/* } */
+
+
+/* void createDottyFile(xbt_dynar_t dax, char *filename) */
+/* { */
+/* char filename2[1000]; */
+/* unsigned int cursor; */
+/* SD_task_t task; */
+
+/* sprintf(filename2, "%s.dot", filename); */
+/* FILE *dotout = fopen(filename2, "w"); */
+/* fprintf(dotout, "digraph A {\n"); */
+/* xbt_dynar_foreach(dax, cursor, task) */
+/* { */
+/* SD_task_dotty(task, dotout); */
+/* } */
+/* fprintf(dotout, "}\n"); */
+/* fclose(dotout); */
+/* } */
+
+static xbt_dynar_t initDynamicThrottling(int *argc, char *argv[])
+{
+ /* Initialize SD */
+ SD_init(argc, argv);
+
+ /* Check parameters */
+ checkParameters(*argc, argv);
+
+ /* Create environment */
+ SD_create_environment(argv[1]);
+ /* Load DAX file */
+ xbt_dynar_t dax = SD_daxload(argv[2]);
+
+ // createDottyFile(dax, argv[2]);
+
+ // Schedule DAX
+ fprintf(stdout, "Scheduling DAX...\n");
+ scheduleDAX(dax);
+ fprintf(stdout, "DAX scheduled\n");
+ SD_simulate(-1);
+ fprintf(stdout, "Simulation end. Time: %f\n", SD_get_clock());
+
+ return dax;
+}
+
+/**
+ * Frees the DAX tasks and finalizes the SimDag library.
+ */
+static void garbageCollector(xbt_dynar_t dax)
+{
+ SD_task_t task;
+ unsigned int cursor;
+ xbt_dynar_foreach(dax, cursor, task) {
+ SD_task_destroy(task);
+ }
+ SD_exit();
+}
+
+
+
+/**
+ * Main procedure
+ * @param argc number of command-line arguments
+ * @param argv the platform file (argv[1]) and the DAX file (argv[2])
+ * @return 0 on success
+ */
+int main(int argc, char *argv[])
+{
+
+ /* Start process... */
+ xbt_dynar_t dax = initDynamicThrottling(&argc, argv);
+
+ // Free memory
+ garbageCollector(dax);
+ return 0;
+}
--- /dev/null
+$ simdag/availability/availability_test ${srcdir:=.}/simdag/availability/simulacrum_7_hosts.xml --cfg=path:${srcdir:=.}/simdag/availability/ ${srcdir:=.}/../examples/simdag/scheduling/Montage_25.xml --cfg=network/TCP_gamma:4194304 --log=sd_daxparse.thresh:critical
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'network/TCP_gamma' to '4194304'
+> [0.000000] [surf_workstation/INFO] surf_workstation_model_init_ptask_L07
+> Scheduling DAX...
+> Scheduling root to node: Host 26
+> Scheduling ID00000@mProjectPP to node: Host 27
+> Scheduling ID00001@mProjectPP to node: Host 28
+> Scheduling ID00002@mProjectPP to node: Host 29
+> Scheduling ID00003@mProjectPP to node: Host 30
+> Scheduling ID00004@mProjectPP to node: Host 31
+> Scheduling ID00005@mDiffFit to node: Host 32
+> Scheduling ID00006@mDiffFit to node: Host 26
+> Scheduling ID00007@mDiffFit to node: Host 27
+> Scheduling ID00008@mDiffFit to node: Host 28
+> Scheduling ID00009@mDiffFit to node: Host 29
+> Scheduling ID00010@mDiffFit to node: Host 30
+> Scheduling ID00011@mDiffFit to node: Host 31
+> Scheduling ID00012@mDiffFit to node: Host 32
+> Scheduling ID00013@mDiffFit to node: Host 26
+> Scheduling ID00014@mConcatFit to node: Host 27
+> Scheduling ID00015@mBgModel to node: Host 28
+> Scheduling ID00016@mBackground to node: Host 29
+> Scheduling ID00017@mBackground to node: Host 30
+> Scheduling ID00018@mBackground to node: Host 31
+> Scheduling ID00019@mBackground to node: Host 32
+> Scheduling ID00020@mBackground to node: Host 26
+> Scheduling ID00021@mImgTbl to node: Host 27
+> Scheduling ID00022@mAdd to node: Host 28
+> Scheduling ID00023@mShrink to node: Host 29
+> Scheduling ID00024@mJPEG to node: Host 30
+> Scheduling end to node: Host 26
+> DAX scheduled
+> Simulation end. Time: 164.052870
+
--- /dev/null
+PERIODICITY 8.0
+1.007044263744508 6.846527733924368E7
+4.199387092709633 1.0335587797993976E8
+5.319464737378834 1.0591433767387845E7
+7.237437222882919 7.037797434537312E7
--- /dev/null
+<?xml version='1.0'?>
+<!DOCTYPE platform SYSTEM "http://simgrid.gforge.inria.fr/simgrid.dtd">
+<platform version="3">
+ <AS id="AS0" routing="Full">
+ <host id="Host 26" power="3.300140519709234E9" />
+ <host id="Host 27" power="3.867398877553016E9" />
+ <host id="Host 28" power="1.6522665718098645E9" />
+ <host id="Host 29" power="1.0759376792481766E9" />
+ <host id="Host 30" power="2.4818410475340424E9" />
+ <host id="Host 31" power="1.773869555571436E9" />
+ <host id="Host 32" power="1.7843609176927505E9" />
+
+ <link id="l152" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l153" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l154" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l155" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l156" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l157" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l159" bandwidth="1.25E8" bandwidth_file="linkBandwidth7.bw" latency="1.0E-4" />
+ <link id="l160" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l161" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l162" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l163" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l164" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l165" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l166" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l167" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l168" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l169" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l170" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l171" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l172" bandwidth="1.25E8" latency="1.0E-4" />
+ <link id="l173" bandwidth="1.25E8" latency="1.0E-4" />
+
+ <route symmetrical="NO" src="Host 26" dst="Host 27">
+ <link_ctn id="l155"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 26" dst="Host 28">
+ <link_ctn id="l155"/>
+ <link_ctn id="l154"/>
+ <link_ctn id="l156"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 26" dst="Host 29">
+ <link_ctn id="l152"/>
+ <link_ctn id="l157"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 26" dst="Host 30">
+ <link_ctn id="l152"/>
+ <link_ctn id="l161"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 26" dst="Host 31">
+ <link_ctn id="l166"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 26" dst="Host 32">
+ <link_ctn id="l152"/>
+ <link_ctn id="l169"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 27" dst="Host 26">
+ <link_ctn id="l155"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 27" dst="Host 28">
+ <link_ctn id="l154"/>
+ <link_ctn id="l156"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 27" dst="Host 29">
+ <link_ctn id="l159"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 27" dst="Host 30">
+ <link_ctn id="l162"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 27" dst="Host 31">
+ <link_ctn id="l167"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 27" dst="Host 32">
+ <link_ctn id="l154"/>
+ <link_ctn id="l170"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 28" dst="Host 26">
+ <link_ctn id="l156"/>
+ <link_ctn id="l154"/>
+ <link_ctn id="l155"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 28" dst="Host 27">
+ <link_ctn id="l156"/>
+ <link_ctn id="l154"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 28" dst="Host 29">
+ <link_ctn id="l160"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 28" dst="Host 30">
+ <link_ctn id="l163"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 28" dst="Host 31">
+ <link_ctn id="l163"/>
+ <link_ctn id="l168"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 28" dst="Host 32">
+ <link_ctn id="l156"/>
+ <link_ctn id="l170"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 29" dst="Host 26">
+ <link_ctn id="l157"/>
+ <link_ctn id="l152"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 29" dst="Host 27">
+ <link_ctn id="l159"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 29" dst="Host 28">
+ <link_ctn id="l160"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 29" dst="Host 30">
+ <link_ctn id="l164"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 29" dst="Host 31">
+ <link_ctn id="l159"/>
+ <link_ctn id="l167"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 29" dst="Host 32">
+ <link_ctn id="l171"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 30" dst="Host 26">
+ <link_ctn id="l161"/>
+ <link_ctn id="l152"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 30" dst="Host 27">
+ <link_ctn id="l162"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 30" dst="Host 28">
+ <link_ctn id="l163"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 30" dst="Host 29">
+ <link_ctn id="l164"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 30" dst="Host 31">
+ <link_ctn id="l168"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 30" dst="Host 32">
+ <link_ctn id="l172"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 31" dst="Host 26">
+ <link_ctn id="l166"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 31" dst="Host 27">
+ <link_ctn id="l167"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 31" dst="Host 28">
+ <link_ctn id="l168"/>
+ <link_ctn id="l163"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 31" dst="Host 29">
+ <link_ctn id="l167"/>
+ <link_ctn id="l159"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 31" dst="Host 30">
+ <link_ctn id="l168"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 31" dst="Host 32">
+ <link_ctn id="l173"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 32" dst="Host 26">
+ <link_ctn id="l169"/>
+ <link_ctn id="l152"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 32" dst="Host 27">
+ <link_ctn id="l170"/>
+ <link_ctn id="l154"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 32" dst="Host 28">
+ <link_ctn id="l170"/>
+ <link_ctn id="l156"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 32" dst="Host 29">
+ <link_ctn id="l171"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 32" dst="Host 30">
+ <link_ctn id="l172"/>
+ </route>
+
+ <route symmetrical="NO" src="Host 32" dst="Host 31">
+ <link_ctn id="l173"/>
+ </route>
+
+</AS>
+</platform>
+
-$ simdag/basic0 ${srcdir:=.}/simdag/basic_platform.xml --surf-path=${srcdir} "--log=root.fmt:[%10.6r]%e%m%n"
+$ simdag/basic0 ${srcdir:=.}/simdag/basic_platform.xml --cfg=path:${srcdir} "--log=root.fmt:[%10.6r]%e%m%n"
> [ 0.000000] surf_workstation_model_init_ptask_L07
> [ 0.800100] Simulation time: 0.800100
-$ simdag/basic1 ${srcdir:=.}/simdag/basic_platform.xml --surf-path=${srcdir} "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/basic1 ${srcdir:=.}/simdag/basic_platform.xml --cfg=path:${srcdir} "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> [ 16.000100] (0:@) Simulation time: 16.000100
-$ simdag/basic2 ${srcdir:=.}/simdag/basic_platform.xml --surf-path=${srcdir} "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/basic2 ${srcdir:=.}/simdag/basic_platform.xml --cfg=path:${srcdir} "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> [ 8.800100] (0:@) Simulation time: 8.800100
\ No newline at end of file
-$ simdag/basic3 ${srcdir:=.}/simdag/basic_platform.xml --surf-path=${srcdir} "--log=root.fmt:[%10.6r]%e%m%n" --log=sd_kernel.thresh:verbose
+$ simdag/basic3 ${srcdir:=.}/simdag/basic_platform.xml --cfg=path:${srcdir} "--log=root.fmt:[%10.6r]%e%m%n" --log=sd_kernel.thresh:verbose
> [ 0.000000] surf_workstation_model_init_ptask_L07
> [ 0.000000] Starting simulation...
> [ 0.000000] Run simulation for -1.000000 seconds
-$ simdag/basic4 ${srcdir:=.}/simdag/basic_platform.xml --surf-path=${srcdir} "--log=root.fmt:[%10.6r]%e%m%n" --log=sd_kernel.thresh:verbose
+$ simdag/basic4 ${srcdir:=.}/simdag/basic_platform.xml --cfg=path:${srcdir} "--log=root.fmt:[%10.6r]%e%m%n" --log=sd_kernel.thresh:verbose
> [ 0.000000] surf_workstation_model_init_ptask_L07
> [ 0.000000] Starting simulation...
> [ 0.000000] Run simulation for -1.000000 seconds
-$ simdag/basic5 ${srcdir:=.}/simdag/basic_platform.xml --surf-path=${srcdir} "--log=root.fmt:[%10.6r]%e%m%n"
+$ simdag/basic5 ${srcdir:=.}/simdag/basic_platform.xml --cfg=path:${srcdir} "--log=root.fmt:[%10.6r]%e%m%n"
> [ 0.000000] surf_workstation_model_init_ptask_L07
> [ 0.002500] Simulation time: 0.002500
-$ simdag/basic6 ${srcdir:=.}/simdag/network/p2p/platform_2p_1sl.xml --surf-path=${srcdir} "--log=root.fmt:[%10.6r]%e%m%n"
+$ simdag/basic6 ${srcdir:=.}/simdag/network/p2p/platform_2p_1sl.xml --cfg=path:${srcdir} "--log=root.fmt:[%10.6r]%e%m%n"
> [ 0.000000] surf_workstation_model_init_ptask_L07
> [ 2.000000] Simulation time: 2.000000
-$ simdag/incomplete ${srcdir:=.}/simdag/basic_platform.xml --surf-path=${srcdir} "--log=root.fmt:[%10.6r]%e%m%n"
+$ simdag/incomplete ${srcdir:=.}/simdag/basic_platform.xml --cfg=path:${srcdir} "--log=root.fmt:[%10.6r]%e%m%n"
> [ 0.000000] surf_workstation_model_init_ptask_L07
> [ 8.000100] Simulation is finished but 3 tasks are still not done
> [ 8.000100] Task C is in SD_NOT_SCHEDULED state
p all-to-all test, only the fat pipe switch is used concurrently
-$ simdag/network/mxn/test_intra_all2all ${srcdir:=.}/simdag/network/mxn/platform_4p_1switch.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/mxn/test_intra_all2all ${srcdir:=.}/simdag/network/mxn/platform_4p_1switch.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 8
p sending on different paths test
-$ simdag/network/mxn/test_intra_independent_comm ${srcdir:=.}/simdag/network/mxn/platform_4p_1switch.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/mxn/test_intra_independent_comm ${srcdir:=.}/simdag/network/mxn/platform_4p_1switch.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 3
p scatter test
-$ simdag/network/mxn/test_intra_scatter ${srcdir:=.}/simdag/network/mxn/platform_4p_1switch.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/mxn/test_intra_scatter ${srcdir:=.}/simdag/network/mxn/platform_4p_1switch.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 8
p latency check, 1 byte, shared link
-$ simdag/network/p2p/test_latency1 ${srcdir:=.}/simdag/network/p2p/platform_2p_1sl.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/p2p/test_latency1 ${srcdir:=.}/simdag/network/p2p/platform_2p_1sl.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 1.5
p latency check, 1 byte, fat pipe
-$ simdag/network/p2p/test_latency1 ${srcdir:=.}/simdag/network/p2p/platform_2p_1fl.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/p2p/test_latency1 ${srcdir:=.}/simdag/network/p2p/platform_2p_1fl.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 1.5
p latency check, 1 byte, link - switch - link
-$ simdag/network/p2p/test_latency1 ${srcdir:=.}/simdag/network/p2p/platform_2p_1switch.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/p2p/test_latency1 ${srcdir:=.}/simdag/network/p2p/platform_2p_1switch.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 3
p latency check, 2 x 1 byte, same direction, shared link
-$ simdag/network/p2p/test_latency2 ${srcdir:=.}/simdag/network/p2p/platform_2p_1sl.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/p2p/test_latency2 ${srcdir:=.}/simdag/network/p2p/platform_2p_1sl.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 2.5
p latency check, 2 x 1 byte, same direction, fat pipe
-$ simdag/network/p2p/test_latency2 ${srcdir:=.}/simdag/network/p2p/platform_2p_1fl.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/p2p/test_latency2 ${srcdir:=.}/simdag/network/p2p/platform_2p_1fl.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 1.5
p latency check, 2 x 1 byte, same direction, link - switch - link
-$ simdag/network/p2p/test_latency2 ${srcdir:=.}/simdag/network/p2p/platform_2p_1switch.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/p2p/test_latency2 ${srcdir:=.}/simdag/network/p2p/platform_2p_1switch.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 4
p latency check, 2 x 1 byte, opposite direction, shared link
-$ simdag/network/p2p/test_latency3 ${srcdir:=.}/simdag/network/p2p/platform_2p_1sl.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/p2p/test_latency3 ${srcdir:=.}/simdag/network/p2p/platform_2p_1sl.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 2.5
p latency check, 2 x 1 byte, opposite direction, fat pipe
-$ simdag/network/p2p/test_latency3 ${srcdir:=.}/simdag/network/p2p/platform_2p_1fl.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/p2p/test_latency3 ${srcdir:=.}/simdag/network/p2p/platform_2p_1fl.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 1.5
p latency check, 2 x 1 byte, opposite direction, link - switch - link
-$ simdag/network/p2p/test_latency3 ${srcdir:=.}/simdag/network/p2p/platform_2p_1switch.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/p2p/test_latency3 ${srcdir:=.}/simdag/network/p2p/platform_2p_1switch.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 4
p latency bounded by large latency link
-$ simdag/network/p2p/test_latency_bound ${srcdir:=.}/simdag/network/p2p/platform_2p_1bb.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/p2p/test_latency_bound ${srcdir:=.}/simdag/network/p2p/platform_2p_1bb.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 10001.5
p Reinitialization test
-$ simdag/network/test_reinit_costs ${srcdir:=.}/simdag/network/platform_2p_1sl.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/network/test_reinit_costs ${srcdir:=.}/simdag/network/platform_2p_1sl.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 0
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
p par task comp only, no comm, homogeneous
-$ simdag/partask/test_comp_only_par ${srcdir:=.}/simdag/partask/platform_2p_1sl.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/partask/test_comp_only_par ${srcdir:=.}/simdag/partask/platform_2p_1sl.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 1
p par task comp only, no comm, heterogeneous
-$ simdag/partask/test_comp_only_par ${srcdir:=.}/simdag/partask/platform_2p_1sl_hetero.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/partask/test_comp_only_par ${srcdir:=.}/simdag/partask/platform_2p_1sl_hetero.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 1
p seq task comp only, no comm
-$ simdag/partask/test_comp_only_seq ${srcdir:=.}/simdag/partask/platform_2p_1sl.xml --surf-path=${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ simdag/partask/test_comp_only_seq ${srcdir:=.}/simdag/partask/platform_2p_1sl.xml --cfg=path:${srcdir} --log=sd_kernel.thres=warning "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:@) surf_workstation_model_init_ptask_L07
> 1
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'maxmin/precision' to '0.000010'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'workstation/model' to 'compound'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'network/model' to 'Vegas'
-> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'path' to '~/'
> Workstation number: 1, link number: 1
$ ${bindir:=.}/basic_parsing_test ./properties.xml --cfg=cpu/optim:TI
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'maxmin/precision' to '0.000010'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'workstation/model' to 'compound'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'network/model' to 'Vegas'
-> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'path' to '~/'
> Workstation number: 1, link number: 1
$ ${bindir:=.}/basic_parsing_test ./one_cluster_file.xml
MPI_Send(&data,1,MPI_INT,(rank+1)%2,666,MPI_COMM_WORLD);
// smpi_sleep(1000);
} else {
- MPI_Recv(&data,1,MPI_INT,-1,666,MPI_COMM_WORLD,NULL);
+ MPI_Recv(&data,1,MPI_INT,MPI_ANY_SOURCE,666,MPI_COMM_WORLD,NULL);
if (data !=22) {
printf("rank %d: Damn, data does not match (got %d)\n",rank, data);
}
This is the TESH tool. It constitutes a testing shell, i.e., a sort of shell
specialized to run tests. The list of actions to take is parsed from files
-files called testsuite.
+called testsuites.
Testsuites syntax
-----------------
The kind of each line is given by the first char (the second char should be
blank and is ignored):
-
+
`$' command to run in foreground
`&' command to run in background
`<' input to pass to the command
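Putting these line kinds together, a minimal (hypothetical) testsuite might
read as follows; the `p' line is printed, the `$' command is run, and its
output is checked against the `>' line:

```
p a trivial echo test
$ echo "hello"
> hello
```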
Tesh accepts several command line arguments:
--cd some/directory: ask tesh to switch the working directory before
launching the tests
- --setenv var=value: set a specific environment variable
+ --setenv var=value: set a specific environment variable
IO orders
---------
It is also possible to specify that a given command must raise a given
signal. For this, use the "expect signal" metacommand. It takes the signal name
as argument. The change only applies to the next command (cf. set-signal.tesh).
-
+
TIMEOUTS
--------
By default, the command's output is matched against the expected one,
and an error is raised on discrepancy. Metacommands to change this:
- "output ignore" -> output completely discarded
+ "output ignore" -> output completely discarded
"output display" -> output displayed (but not verified)
"output sort" -> sorts the display before verifying it (see below)
SimGrid since the processes run out of order at any scheduling point
(i.e., every process ready to run at simulated time t runs in
parallel). To ensure that the simulator outputs still match, we have
-to sort the output back before comparing it.
+to sort the output back before comparing it.
We expect the simulators to run with that log formatting argument:
-log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n
Then, tesh sorts strings on the first 19 chars only, and is stable when
line beginnings are equal. This should ensure that:
(1) tesh is effective (no false positive, no false negative)
- (2) scheduling points are separated from each other
- (3) at each scheduling point, processes are separated from each other
+ (2) scheduling points are separated from each other
+ (3) at each scheduling point, processes are separated from each other
(4) the order of what a given process says at a given scheduling
point is preserved.
-
+
This is of course very SimGrid oriented, breaking the generality of
-tesh, but who cares, actually?
+tesh, but who cares, actually?
If you want to change the length of the prefix used for the sort,
simply specify it after the output sort directive, like this:
! output sort 22
-
+
ENVIRONMENT
-----------
You can add some content to the tested processes environment with the
.B tesh
[\fIOPTION\fR]... [\fIFILE\fR]...
.SH DESCRIPTION
-This is the TESH tool. It constitutes a testing shell, ie a sort of shell specialized to run tests. The list of actions to take is parsed from files files called testsuite.
+This is the TESH tool. It constitutes a testing shell, i.e., a sort of shell specialized to run tests. The list of actions to take is parsed from files called testsuites.
.SH OPTIONS
--cd some/directory: ask tesh to switch the working directory before
launching the tests
The kind of each line is given by the first char (the second char should be
blank and is ignored):
-
+
`$' command to run in foreground
`&' command to run in background
`<' input to pass to the command
.SH OUTPUT
By default, the command's output is matched against the expected one,
and an error is raised on discrepancy. Metacommands to change this:
- "output ignore" -> output completely discarded
+ "output ignore" -> output completely discarded
"output display" -> output displayed (but not verified)
"output sort" -> sorts the display before verifying it (see below)
.SH SORTING OUTPUT
SimGrid since the processes run out of order at any scheduling point
(i.e., every process ready to run at simulated time t runs in
parallel). To ensure that the simulator outputs still match, we have
-to sort the output back before comparing it.
+to sort the output back before comparing it.
We expect the simulators to run with that log formatting argument:
-log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n
Then, tesh sorts strings on the first 19 chars only, and is stable when
line beginnings are equal. This should ensure that:
(1) tesh is effective (no false positive, no false negative)
- (2) scheduling points are separated from each other
- (3) at each scheduling point, processes are separated from each other
+ (2) scheduling points are separated from each other
+ (3) at each scheduling point, processes are separated from each other
(4) the order of what a given process says at a given scheduling
point is preserved.
-
+
This is of course very SimGrid oriented, breaking the generality of
-tesh, but who cares, actually?
+tesh, but who cares, actually?
If you want to change the length of the prefix used for the sort,
simply specify it after the output sort directive, like this: