Gabriel Corona <gabriel.corona@loria.fr> <coron00b@barbecue.loria.fr>
Gabriel Corona <gabriel.corona@loria.fr> <gabriel.corona@enst-bretagne.fr>
Augustin Degomme <adegomme@gmail.com>
+Augustin Degomme <adegomme@gmail.com> <ad254919@cardamome.intra.cea.fr>
Augustin Degomme <adegomme@gmail.com> <adegomme@users.noreply.github.com>
Augustin Degomme <adegomme@gmail.com> <augustin.degomme@imag.fr>
Augustin Degomme <adegomme@gmail.com> <augustin.degomme@unibas.ch>
if(NOT "${CMAKE_BINARY_DIR}" STREQUAL "${CMAKE_HOME_DIRECTORY}")
configure_file(${CMAKE_HOME_DIRECTORY}/examples/smpi/replay/actions0.txt ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions0.txt COPYONLY)
configure_file(${CMAKE_HOME_DIRECTORY}/examples/smpi/replay/actions1.txt ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions1.txt COPYONLY)
- configure_file(${CMAKE_HOME_DIRECTORY}/examples/smpi/replay/actions_allReduce.txt ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_allReduce.txt COPYONLY)
+ configure_file(${CMAKE_HOME_DIRECTORY}/examples/smpi/replay/actions_allreduce.txt ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_allreduce.txt COPYONLY)
configure_file(${CMAKE_HOME_DIRECTORY}/examples/smpi/replay/actions_barrier.txt ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_barrier.txt COPYONLY)
configure_file(${CMAKE_HOME_DIRECTORY}/examples/smpi/replay/actions_bcast.txt ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_bcast.txt COPYONLY)
configure_file(${CMAKE_HOME_DIRECTORY}/examples/smpi/replay/actions_with_isend.txt ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_with_isend.txt COPYONLY)
${generated_files_to_clean}
${CMAKE_BINARY_DIR}/examples/smpi/replay/actions0.txt
${CMAKE_BINARY_DIR}/examples/smpi/replay/actions1.txt
- ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_allReduce.txt
+ ${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_allreduce.txt
${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_barrier.txt
${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_bcast.txt
${CMAKE_BINARY_DIR}/examples/smpi/replay/actions_with_isend.txt
TRACE
- Change --cfg=tracing/msg/vm to --cfg=tracing/vm as virtual machine
behavior tracing is no longer limited to MSG
+ - TIT (Time Independent Traces): We finally support tags. Unfortunately,
+ this means that existing traces need to be updated or re-obtained; both
+ Irecv and Isend lines now have 5 mandatory fields in total:
+ <rankid> <command> <to/from rankid> <tag> <size>
+ To update your traces, it suffices to add a 0 for the tag field.
+ - TIT now also supports waiting for a distinct request via MPI_Wait.
+ Wait/Test now wait for a specific request, not just the last one that was
+ issued. This unfortunately means another update, because we need to
+ identify which request you want to wait for. We do this via the
+ triplet (sender, receiver, tag), which needs to be added:
+ <rankid> <command> <sender> <receiver> <tag>
+ - We lowercased all actions: For instance, instead of allReduce, we now
+ use allreduce.
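Taken together, the format changes above can be applied mechanically for the simple cases. Here is a minimal sketch, assuming the old 4-field `<rankid> <command> <to/from rankid> <size>` layout for Isend/Irecv and hypothetical file names; wait/test lines are not handled, since they need a real (sender, receiver, tag) triplet that only you know:

```shell
# Hypothetical old-format trace (4-field Isend/Irecv, mixed-case actions)
printf '1 Isend 0 1e6\n0 allReduce 5e4 5e8\n' > old_trace.txt

# Update: lowercase every action name, and insert a default tag of 0
# as the 4th field of isend/irecv lines.
# (wait/test lines are NOT handled: they need a real (sender, receiver, tag))
awk '{
  $2 = tolower($2)
  if ($2 == "isend" || $2 == "irecv")
    $0 = $1 " " $2 " " $3 " 0 " $4
  print
}' old_trace.txt > new_trace.txt

cat new_trace.txt
# 1 isend 0 0 1e6
# 0 allreduce 5e4 5e8
```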
+
+MSG
+ - The deprecation of MSG is ongoing (but this should not impact you).
+ Many MSG functions are now simple wrappers on the C API of S4U. If
+ you wish to convert your code to S4U, find the S4U counterparts of
+ your MSG calls in src/msg/msg_legacy.cpp.
+ - MSG can still be used, but won't evolve anymore.
S4U:
- Introduced new function simgrid::s4u::Host::get_actor_count. This function
returns the number of actors running on a specific host.
Plugins:
- - Allow to run the Link energy plugin from the command line with
+ - Allow to run the Link energy plugin from the command line with
--cfg=plugin:link_energy
- Rename Energy plugin into host_energy
- Rename Load plugin into host_load
simix:
- Add parameter --cfg=simix/breakpoint to raise a SIGTRAP at given time.
- kill simix::onDeadlock() that was somewhat duplicating s4u::on_deadlock()
+ - Improve performance when handling timeouts of simix synchros.
SMPI:
+ - SMPI is now tested with ~45 proxy apps from various sources, with no or
+ only minor patching needed: check https://github.com/simgrid/SMPI-proxy-apps
- Replay: The replay file has been re-written in C++.
- Replay: Tags used for messages sent via MPI_Send / MPI_Recv are now
supported. They are stored in the trace and used when replayed.
+ - Basic support of MPI_Cancel. Robustness not guaranteed.
+ - Support of MPI_Win_allocate_shared, MPI_Win_shared_query, MPI_Comm_split_type
+ (only for MPI_COMM_TYPE_SHARED).
+ - New option: smpi/privatize-libs, to add external shared libs to be privatized
+ by SMPI. They will be copied locally and loaded separately by each process.
+ Example: --cfg=smpi/privatize-libs:"libgfortran.so.3;libscalapack.so".
+ - Tracing: add tracing for MPI_Start, Startall, Testall, Testany
+ - Interception of getopt, getopt_long and getopt_long_only calls, to avoid
+ issues with the internal optind index when multiple processes are used. Only
+ works if MPI_Init has already been called.
+ - Fortran: SMPI builds an mpi.mod file which should allow the use of the
+ "use mpi" syntax without preprocessing tricks.
XBT:
- Config: the C API is now deprecated (will be removed in 3.23), and
the C++ API has been slightly improved.
Other:
+ - Fix several build issues on OSX.
- Move simgrid_config.h to simgrid/config.h (old header still working)
Fixed bugs:
- #143: Setting a breakpoint at a given time
+ - #150: Inconsistent event names in SMPI replay
- #258: daemonized actors hang after all non-daemonized actors have completed
- #267: Linker error on unit_tmgr
- - #269: SMPI : tracing of MPI_Wait/all/any broken
+ - #269: SMPI: tracing of MPI_Wait/all/any broken
+ - SMPI: Fix various crashes with combined use of MPI_PROC_NULL and MPI_IGNORE_STATUS
----------------------------------------------------------------------------
Fixed bugs:
- #194: Feature request: simgrid::s4u::Comm::test_any()
- #245: migrating an actor does not migrate its execution
+ - #253: Feature Request: expose clusters as objects
- #254: Something seems wrong with s4u::Actor::kill(aid_t)
+ - #255: Tesh broken on Windows
- #256: Modernize FindSimGrid.cmake
- #257: Fix (ab)use of CMake install
Virtual Machines
- Live migration is getting moved to a plugin. Dirty page tracking is
  the first part of this plugin. This implies that VM migration is now
- only possible if one this function is called :
+ only possible if one of these functions is called:
- C/MSG: MSG_vm_live_migration_plugin_init()
- C/C++: sg_vm_live_migration_plugin_init()
- Java: Msg.liveMigrationInit()
SMPI
- New algorithm to privatize globals: dlopen, with dynamic loading tricks
- New option: smpi/keep-temps to not cleanup temp files
- - New option : smpi/shared-malloc-blocksize . Relevant only when global shared
+ - New option: smpi/shared-malloc-blocksize. Relevant only when global shared
  mallocs mode is used; allows changing the size of the fake file used
  (default 1MB) to potentially limit the number of mappings for large runs.
- Support for sparse privatized malloc with SMPI_PARTIAL_SHARED_MALLOC()
- Support for the Fortran ifort and flang compilers
- - New RMA calls supported (experimental) :
+ - New RMA calls supported (experimental):
- MPI_Win_allocate, MPI_Win_create_dynamic, MPI_Win_attach
- MPI_Win_detach, MPI_Win_set_info, MPI_Win_get_info
- MPI_Win_lock_all, MPI_Win_unlock_all, MPI_Win_flush
* smpirun script should be (much) faster for large deployments.
- * SMPI tracing : fixed issue with poor matching of send/receives.
+ * SMPI tracing: fixed issue with poor matching of send/receives.
- * Replay : Fix broken waitall
+ * Replay: Fix broken waitall
New functions and features
* MSG_parallel_task_execute_with_timeout, to timeout computations.
SMPI:
* New functions
- - Onesided early support for : MPI_Win_(create, free, fence, get_name, set_name, get_group), MPI_Get, MPI_Put, MPI_Accumulate, MPI_Alloc_mem, MPI_Free_mem.
+ - Onesided early support for: MPI_Win_(create, free, fence, get_name, set_name, get_group), MPI_Get, MPI_Put, MPI_Accumulate, MPI_Alloc_mem, MPI_Free_mem.
- MPI_Keyval*, MPI_Attr* functions, as well as MPI_Comm_attr*, MPI_Type_attr* variants (C only, no Fortran support yet)
- MPI_Type_set_name, MPI_Type_get_name
- MPI_*_c2f and MPI_*_f2c functions
- Activate a lot of new tests from the mpich 3 testsuite
* Features
- Constant times can be injected inside MPI_Wtime and MPI_Test through options smpi/wtime and smpi/test
- - InfiniBand network model added : Based on the works of Jerome Vienne
+ - InfiniBand network model added: Based on the works of Jerome Vienne
http://mescal.imag.fr/membres/jean-marc.vincent/index.html/PhD/Vienne.pdf
- When smpi/display_timing is set, also display global simulation time and application times
- Have smpirun, smpicc and friends display the simgrid git hash version on --git-version
* Collective communications
- SMP-aware algorithms are now dynamically handled. An internal communicator is created for each node, and an external one to handle communications between "leaders" of each node
- - MVAPICH2 (1.9) collective algorithms selector : normal and SMP algorithms are handled, and selection logic is based on the one used on TACC's Stampede cluster (https://www.tacc.utexas.edu/stampede/).
+ - MVAPICH2 (1.9) collective algorithms selector: normal and SMP algorithms are handled, and selection logic is based on the one used on TACC's Stampede cluster (https://www.tacc.utexas.edu/stampede/).
- Support for Rabenseifner Reduce/Allreduce algorithms (https://fs.hlrs.de/projects/par/mpi//myreduce.html)
* Replay
- Replay now uses the algorithms from the chosen collective selector
- Memory occupation of replay should now be contained (temporary buffers allocated in collective algorithms should be shared between processes)
- Replay can now replay several traces at the same time (see the examples/smpi/replay_multiple example), to simulate interactions between several applications on a given platform. Users can specify the start time of each instance. This should also allow replay + actual applications to run.
* Bug fixes
- - [#17799] : have mpi_group_range_incl and mpi_group_range_excl better test some corner cases
+ - [#17799]: have mpi_group_range_incl and mpi_group_range_excl better test some corner cases
- Correctly use loopback on fat-tree clusters
- Asynchronous small messages shouldn't trigger deadlocks anymore
* Energy/DVFS cleanup and improvement
* New functions
- Add a xbt_heap_update function, to avoid costly xbt_heap_remove+xbt_heap_insert use
- Add a xbt wrapper for simcall_mutex_trylock (asked in [#17878])
- - Add two new log appenders : rollfile and splitfile. Patch by Fabien Chaix.
+ - Add two new log appenders: rollfile and splitfile. Patch by Fabien Chaix.
- xbt_dirname and xbt_basename for non-POSIX systems
MC
* The model checker now runs as a separate process.
one node.
* Collective communication algorithms should not crash if used with
improper number of nodes and report the error.
- * SMPI now partially supports MPI_Topologies : MPI_Cart_create, MPI_Cart_shift,
+ * SMPI now partially supports MPI_Topologies: MPI_Cart_create, MPI_Cart_shift,
MPI_Cart_rank, MPI_Cart_get, MPI_Cart_coords, MPI_Cartdim_get,
MPI_Dims_create, MPI_Cart_sub are supported.
* New interface to use SMPI programmatically (still depends on MSG for
- some parts, see examples/smpi/smpi_msg_masterslave) :
+ some parts, see examples/smpi/smpi_msg_masterslave):
- SMPI_app_instance_register(const char *name, xbt_main_func_t code,
int num_processes)
- SMPI_init()
- SMPI_finalize();
* Global variables privatization in MPI executables is now performed at runtime
with the option smpi/privatize_global_variables (default:no).
- Limitations : Linux/BSD only, with mmap enabled. Global variables inside
+ Limitations: Linux/BSD only, with mmap enabled. Global variables inside
dynamic libraries loaded by the application are not privatized (static
linking with these libraries is advised in this case)
- allows selecting one algorithm in particular with --cfg=smpi/coll_name:algorithm
- allows using the decision logic of OpenMPI (1.7) or MPICH (3.0.4) by setting
--cfg=smpi/coll_selector:(mpich/ompi)
- * Support for new functions : MPI_Issend, MPI_Ssend, Commutative operations in
+ * Support for new functions: MPI_Issend, MPI_Ssend, Commutative operations in
Reduce
* Add a --cfg:tracing/smpi/internals option, to trace internal communications
happening inside a collective SMPI call.
by a SD_TASK_COMM_E2E typed task. This rate depends on both the nominal
bandwidth on the route onto which the task is scheduled and the amount of
data to transfer.
- To divide the nominal bandwidth by 2, the rate then has to be :
+ To divide the nominal bandwidth by 2, the rate then has to be:
rate = bandwidth/(2*amount)
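For a quick sanity check of that rate formula (the numbers below are hypothetical, not from the source):

```shell
# rate = bandwidth / (2 * amount): a 1e9 nominal bandwidth and a
# 1e6 transfer amount give a rate of 500, halving the effective bandwidth
awk 'BEGIN { bandwidth = 1e9; amount = 1e6; printf "%g\n", bandwidth / (2 * amount) }'
# prints 500
```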
* Compute tasks that have failed can now be rescheduled and executed again
(from their beginning)
action_free ~> action_unref
action_change_state ~> action_state_set
action_get_state ~> action_state_get
- - Change model methods into functions :
+ - Change model methods into functions:
(model)->common_public->action_use ~> surf_action_ref
* Implement a generic resource; use it as ancestor to specific ones
* After a (long ?) discussion on simgrid-devel, we have decided that the
convention we had on units was stupid. That is why it has been decided
to move from (MBits, MFlops, seconds) to (Bits, Flops, seconds).
- WARNING : This means that all previous platform files will not work as
+ WARNING: This means that all previous platform files will not work as
such with this version! A warning is issued to ask users to update
their files. [AL]
A conversion script can be found in the contrib module of the CVS, under
* REVOLUTION 1: The SimGrid project has merged with the GRAS project
lead by Martin Quinson. As a consequence SimGrid gains a lot in
portability, speed, and a lot more but you'll figure it out later.
- SimGrid now comprises 3 different projects : MSG, GRAS and SMPI.
+ SimGrid now comprises 3 different projects: MSG, GRAS and SMPI.
I wanted to release the new MSG as soon as possible and I have
broken GRAS, which is the reason why, for now, only MSG is fully
functional. A laconic description of these projects is available
* REVOLUTION 3: I have tried to change the API of MSG as little as
  possible, but a few things really had to disappear. The main differences
- with the previous version are :
+ with the previous version are:
1) no more m_links_t and the corresponding functions. Platforms are
directly read from a XML description and cannot be hard-coded
anymore. The same format is used for application deployment
- 2dmesh: organizes the nodes as a two-dimensional mesh, and performs allgather
along the dimensions
- 3dmesh: adds a third dimension to the previous algorithm
- - rdb: recursive doubling : extends the mesh to a nth dimension, each one
+ - rdb: recursive doubling: extends the mesh to an nth dimension, each one
containing two nodes
- pair: pairwise exchange, only works for power of 2 procs, size-1 steps,
each process sends and receives from the same process at each step
- \c smpi/os: \ref options_model_smpi_os
- \c smpi/papi-events: \ref options_smpi_papi_events
- \c smpi/privatization: \ref options_smpi_privatization
+- \c smpi/privatize-libs: \ref options_smpi_privatize_libs
- \c smpi/send-is-detached-thresh: \ref options_model_smpi_detached
- \c smpi/shared-malloc: \ref options_model_smpi_shared_malloc
- \c smpi/shared-malloc-hugepage: \ref options_model_smpi_shared_malloc
\warning
This configuration option cannot be set in your platform file. You can only
pass it as an argument to smpirun.
+
+\subsection options_smpi_privatize_libs smpi/privatize-libs: Automatic privatization of global variables inside external libraries
+
+Linux/BSD only: when using dlopen (default) privatization, also privatize
+specific shared libraries that have internal global variables, if they cannot
+be linked statically. For example, libgfortran is usually used for Fortran
+I/O, and its internal file indexes can get mixed up between processes.
+
+\warning
+ This configuration option accepts either full paths to libraries or full
+ library names. Use ldd to check the name of the library you want to use.
+ Example:
+ ldd allpairf90
+ libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 (0x00007fbb4d91b000)
+ Then you can use --cfg=smpi/privatize-libs:"libgfortran.so.3" or
+ --cfg=smpi/privatize-libs:"/usr/lib/x86_64-linux-gnu/libgfortran.so.3",
+ but not "libgfortran" or "libgfortran.so".
+ Multiple libraries can be given, separated by semicolons.
+
\subsection options_model_smpi_detached Simulating MPI detached send
\subsubsection pf_router <router/>
As said before, <b>router</b> is used only to give some information
-for routing algorithms. So, it does not have any attributes except :
+for routing algorithms. So, it does not have any attributes except:
#### Attributes ####
\verbinclude example_filelist_routing_dijkstra
-Dijkstra example :
+Dijkstra example:
\verbatim
<zone id="zone_2" routing="Dijkstra">
<host id="zone_2_host1" speed="1000000000"/>
\anchor pf_routing_model_full
### Full ###
-Full example :
+Full example:
\verbatim
<zone id="zone0" routing="Full">
<host id="host1" speed="1000000000"/>
<b>bypasszoneroute</b> is the tag you're looking for. It allows you to
bypass routes already defined between zones (if you want to bypass the
route for a specific host, you should just use bypassRoute).
-The principle is the same as zoneroute : <b>bypasszoneroute</b> contains
+The principle is the same as zoneroute: <b>bypasszoneroute</b> contains a
list of links that are in the path between src and dst.
#### Attributes ####
As said before, once you choose a model, it most likely calculates routes
for you (the constant network model, for example, does not). But maybe you
want to define some specific routes of your own. You may also want
-to bypass some routes defined in lower level zone at an upper stage :
+to bypass some routes defined in a lower-level zone at an upper stage:
<b>bypassRoute</b> is the tag you're looking for. It allows you to bypass
routes defined between <b>host/router</b>. The principle is the same
-as route : <b>bypassRoute</b> contains list of links references of
+as route: <b>bypassRoute</b> contains a list of references to the
links that are in the path between src and dst.
#### Attributes ####
defined inside zone_Big. If you choose some shortest-path model,
this route will be computed automatically.
-As said before, there are mainly 2 tags for routing :
+As said before, there are mainly 2 tags for routing:
\li <b>zoneroute</b>: to define routes between two <b>zone</b>
\li <b>route</b>: to define routes between two <b>host/router</b>
routing (as we don't want to bother with defining all routes). As
we're using some shortest path algorithms to route into zone_2, we'll
then have to define some <b>route</b> to give some topological
-information to SimGrid. Here is a file doing it all :
+information to SimGrid. Here is a file doing it all:
\verbatim
<zone id="zone_Big" routing="Dijkstra">
\subsection pf_exit_zone Exit Zone: why and how
Users that have looked at some of our platforms may have noticed a
-non-intuitive schema ... Something like that:
+non-intuitive schema... Something like this:
\verbatim
Choosing the routing model wisely can significantly speed up your
simulation, save you time when writing the platform, and save tremendous
disk space. Here is the list of available models and their
-characteristics (lookup : time to resolve a route):
+characteristics (lookup: time to resolve a route):
\li <b>Full</b>: Full routing data (fast, large memory requirements,
fully expressive)
const double cpu_speed = host->getSpeed();
const double computation_amount = cpu_speed * 10;
- XBT_INFO("### Test: with/without MSG_task_set_bound");
+ XBT_INFO("### Test: with/without task set_bound");
XBT_INFO("### Test: no bound for Task1@%s", host->get_cname());
simgrid::s4u::Actor::create("worker0", host, worker, computation_amount, false, 0);
! output sort
$ $SG_TEST_EXENV ${bindir:=.}/s4u-cloud-capping ${platfdir}/small_platform.xml --log=no_loc "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:master_@Fafard) # 1. Put a single task on a PM.
-> [ 0.000000] (1:master_@Fafard) ### Test: with/without MSG_task_set_bound
+> [ 0.000000] (1:master_@Fafard) ### Test: with/without task set_bound
> [ 0.000000] (1:master_@Fafard) ### Test: no bound for Task1@Fafard
> [ 10.000000] (2:worker0@Fafard) not bound => duration 10.000000 (76296000.000000 flops/s)
> [1000.000000] (1:master_@Fafard) ### Test: 50% for Task1@Fafard
> [11040.000000] (20:worker1@Fafard) bound to 19074000.000000 => duration 40.000000 (19074000.000000 flops/s)
> [12000.000000] (1:master_@Fafard)
> [12000.000000] (1:master_@Fafard) # 3. Put a single task on a VM.
-> [12000.000000] (1:master_@Fafard) ### Test: with/without MSG_task_set_bound
+> [12000.000000] (1:master_@Fafard) ### Test: with/without task set_bound
> [12000.000000] (1:master_@Fafard) ### Test: no bound for Task1@VM0
> [12010.000000] (21:worker0@VM0) not bound => duration 10.000000 (76296000.000000 flops/s)
> [13000.000000] (1:master_@Fafard) ### Test: 50% for Task1@VM0
> [30040.000000] (53:worker1@VM0) bound to 19074000.000000 => duration 40.000000 (19074000.000000 flops/s)
> [31000.000000] (1:master_@Fafard)
> [31000.000000] (1:master_@Fafard) # 7. Put a single task on the VM capped by 10%.
-> [31000.000000] (1:master_@Fafard) ### Test: with/without MSG_task_set_bound
+> [31000.000000] (1:master_@Fafard) ### Test: with/without task set_bound
> [31000.000000] (1:master_@Fafard) ### Test: no bound for Task1@VM0
> [31100.000000] (54:worker0@VM0) not bound => duration 100.000000 (7629600.000000 flops/s)
> [32000.000000] (1:master_@Fafard) ### Test: 50% for Task1@VM0
#!/usr/bin/env tesh
-p Testing the Chord implementation with MSG
+p Testing the Chord implementation with S4U
! output sort 19
$ $SG_TEST_EXENV ${bindir:=.}/s4u-dht-chord$EXEEXT -nb_bits=3 ${platfdir}/cluster.xml s4u-dht-chord_d.xml --log=s4u_chord.thres:verbose "--log=root.fmt:[%10.5r]%e(%P@%h)%e%m%n"
#!/usr/bin/env tesh
-p Testing the Kademlia implementation with MSG
+p Testing the Kademlia implementation with S4U
! output sort 19
$ $SG_TEST_EXENV ${bindir:=.}/s4u-dht-kademlia ${platfdir}/cluster.xml ${srcdir:=.}/s4u-dht-kademlia_d.xml "--log=root.fmt:[%10.6r]%e(%02i:%P@%h)%e%m%n"
file(MAKE_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/mc/")
- foreach(x replay)
- add_executable (smpi_${x} ${CMAKE_CURRENT_SOURCE_DIR}/${x}/${x}.cpp)
+ foreach(x replay
+ trace trace_simple trace_call_location energy)
+ add_executable (smpi_${x} ${CMAKE_CURRENT_SOURCE_DIR}/${x}/${x})
target_link_libraries(smpi_${x} simgrid)
set_target_properties(smpi_${x} PROPERTIES RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/${x})
- set(examples_src ${examples_src} ${CMAKE_CURRENT_SOURCE_DIR}/${x}/${x}.cpp)
- endforeach()
-
- foreach(x trace trace_simple trace_call_location energy)
- add_executable (smpi_${x} ${CMAKE_CURRENT_SOURCE_DIR}/${x}/${x}.c)
- target_link_libraries(smpi_${x} simgrid)
- set_target_properties(smpi_${x} PROPERTIES RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/${x})
- set(examples_src ${examples_src} ${CMAKE_CURRENT_SOURCE_DIR}/${x}/${x}.c)
endforeach()
set_target_properties(smpi_trace_call_location PROPERTIES COMPILE_FLAGS "-trace-call-location")
target_link_libraries(smpi_${x} simgrid)
set_target_properties(smpi_${x} PROPERTIES RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/mc)
endif()
- set(examples_src ${examples_src} ${CMAKE_CURRENT_SOURCE_DIR}/mc/${x}.c)
endforeach()
endif()
+foreach(x replay)
+ set(examples_src ${examples_src} ${CMAKE_CURRENT_SOURCE_DIR}/${x}/${x}.cpp)
+endforeach()
+foreach(x trace trace_simple trace_call_location energy)
+ set(examples_src ${examples_src} ${CMAKE_CURRENT_SOURCE_DIR}/${x}/${x}.c)
+endforeach()
+foreach(x bugged1 bugged2 bugged1_liveness only_send_deterministic mutual_exclusion non_termination1
+ non_termination2 non_termination3 non_termination4)
+ set(examples_src ${examples_src} ${CMAKE_CURRENT_SOURCE_DIR}/mc/${x}.c)
+endforeach()
+
set(examples_src ${examples_src} PARENT_SCOPE)
set(tesh_files ${tesh_files} ${CMAKE_CURRENT_SOURCE_DIR}/energy/energy.tesh
${CMAKE_CURRENT_SOURCE_DIR}/trace/trace.tesh
${CMAKE_CURRENT_SOURCE_DIR}/mc/hostfile_non_termination PARENT_SCOPE)
set(txt_files ${txt_files} ${CMAKE_CURRENT_SOURCE_DIR}/replay/actions0.txt
${CMAKE_CURRENT_SOURCE_DIR}/replay/actions1.txt
- ${CMAKE_CURRENT_SOURCE_DIR}/replay/actions_allReduce.txt
+ ${CMAKE_CURRENT_SOURCE_DIR}/replay/actions_allreduce.txt
${CMAKE_CURRENT_SOURCE_DIR}/replay/actions_allgatherv.txt
${CMAKE_CURRENT_SOURCE_DIR}/replay/actions_alltoall.txt
${CMAKE_CURRENT_SOURCE_DIR}/replay/actions_alltoallv.txt
1 init
1 recv 0 0 1e6
1 compute 1e9
-1 Isend 0 1 1e6
-1 Irecv 0 2 1e6
+1 isend 0 1 1e6
+1 irecv 0 2 1e6
1 wait 0 1 2
1 finalize
2 init
3 init
-0 allGatherV 275427 275427 275427 275427 204020 0 0
-1 allGatherV 275427 275427 275427 275427 204020 0 0
-2 allGatherV 275427 275427 275427 275427 204020 0 0
-3 allGatherV 204020 275427 275427 275427 204020 0 0
+0 allgatherv 275427 275427 275427 275427 204020 0 0
+1 allgatherv 275427 275427 275427 275427 204020 0 0
+2 allgatherv 275427 275427 275427 275427 204020 0 0
+3 allgatherv 204020 275427 275427 275427 204020 0 0
0 finalize
1 finalize
1 init
2 init
-0 allReduce 5e4 5e8
-1 allReduce 5e4 5e8
-2 allReduce 5e4 5e8
+0 allreduce 5e4 5e8
+1 allreduce 5e4 5e8
+2 allreduce 5e4 5e8
0 compute 5e8
1 compute 5e8
1 init
2 init
-0 allToAll 500 500
-1 allToAll 500 500
-2 allToAll 500 500
+0 alltoall 500 500
+1 alltoall 500 500
+2 alltoall 500 500
0 finalize
1 init
2 init
-0 allToAllV 100 1 40 30 1000 1 80 100
-1 allToAllV 1000 80 1 40 1000 40 1 30
-2 allToAllV 1000 100 30 1 1000 30 40 1
+0 alltoallv 100 1 40 30 1000 1 80 100
+1 alltoallv 1000 80 1 40 1000 40 1 30
+2 alltoallv 1000 100 30 1 1000 30 40 1
0 finalize
1 finalize
2 init
3 init
-0 reduceScatter 275427 275427 275427 204020 11349173 0
-1 reduceScatter 275427 275427 275427 204020 12396024 0
-2 reduceScatter 275427 275427 275427 204020 12501522 0
-3 reduceScatter 275427 275427 275427 204020 12403123 0
+0 reducescatter 275427 275427 275427 204020 11349173 0
+1 reducescatter 275427 275427 275427 204020 12396024 0
+2 reducescatter 275427 275427 275427 204020 12501522 0
+3 reducescatter 275427 275427 275427 204020 12403123 0
0 finalize
1 finalize
1 init
2 init
-0 Irecv 1 0 2000
-1 Isend 0 0 2000
-2 Irecv 1 1 3000
+0 irecv 1 0 2000
+1 isend 0 0 2000
+2 irecv 1 1 3000
-0 Irecv 2 2 3000
-1 Isend 2 1 3000
-2 Isend 0 2 3000
+0 irecv 2 2 3000
+1 isend 2 1 3000
+2 isend 0 2 3000
-0 waitAll
-1 waitAll
-2 waitAll
+0 waitall
+1 waitall
+2 waitall
0 finalize
1 finalize
0 compute 1e9
0 recv 2 2 1e6
-1 Irecv 0 0 1e6
+1 irecv 0 0 1e6
1 compute 5e8
1 test 0 1 0
1 compute 5e8
1 send 2 1 1e6
2 compute 2e9
-2 Irecv 1 1 1e6
+2 irecv 1 1 1e6
2 compute 2.5e8
2 test 1 2 1
2 compute 2.5e8
2 wait 1 2 1
-2 Isend 0 2 1e6
+2 isend 0 2 1e6
2 compute 5e8
0 finalize
> [Tremblay:0:(1) 0.167158] [smpi_replay/VERBOSE] 0 send 1 0 1e6 0.167158
> [Jupiter:1:(2) 0.167158] [smpi_replay/VERBOSE] 1 recv 0 0 1e6 0.167158
> [Jupiter:1:(2) 13.274005] [smpi_replay/VERBOSE] 1 compute 1e9 13.106847
-> [Jupiter:1:(2) 13.274005] [smpi_replay/VERBOSE] 1 Isend 0 1 1e6 0.000000
-> [Jupiter:1:(2) 13.274005] [smpi_replay/VERBOSE] 1 Irecv 0 2 1e6 0.000000
+> [Jupiter:1:(2) 13.274005] [smpi_replay/VERBOSE] 1 isend 0 1 1e6 0.000000
+> [Jupiter:1:(2) 13.274005] [smpi_replay/VERBOSE] 1 irecv 0 2 1e6 0.000000
> [Tremblay:0:(1) 13.441162] [smpi_replay/VERBOSE] 0 recv 1 1 1e6 13.274005
> [Jupiter:1:(2) 13.608320] [smpi_replay/VERBOSE] 1 wait 0 1 2 0.334315
> [Tremblay:0:(1) 13.608320] [smpi_replay/VERBOSE] 0 send 1 2 1e6 0.167158
$ rm -f replay/one_trace
-p Test of Isend replay with SMPI (one trace for all processes)
+p Test of isend replay with SMPI (one trace for all processes)
< replay/actions_with_isend.txt
$ mkfile replay/one_trace
$ ../../smpi_script/bin/smpirun -no-privatize -ext smpi_replay --log=replay.thresh:critical --log=smpi_replay.thresh:verbose --log=no_loc --cfg=smpi/simulate-computation:no -np 3 -platform ${srcdir:=.}/../platforms/small_platform.xml -hostfile ${srcdir:=.}/hostfile ./replay/smpi_replay replay/one_trace --log=smpi_kernel.thres:warning --log=xbt_cfg.thres:warning
-> [Jupiter:1:(2) 0.000000] [smpi_replay/VERBOSE] 1 Irecv 0 0 1e6 0.000000
+> [Jupiter:1:(2) 0.000000] [smpi_replay/VERBOSE] 1 irecv 0 0 1e6 0.000000
> [Jupiter:1:(2) 6.553424] [smpi_replay/VERBOSE] 1 compute 5e8 6.553424
> [Jupiter:1:(2) 6.553524] [smpi_replay/VERBOSE] 1 test 0 1 0 0.000100
> [Tremblay:0:(1) 10.194200] [smpi_replay/VERBOSE] 0 compute 1e9 10.194200
> [Jupiter:1:(2) 13.106947] [smpi_replay/VERBOSE] 1 wait 0 1 0 0.000000
> [Tremblay:0:(1) 20.555557] [smpi_replay/VERBOSE] 0 compute 1e9 10.194200
> [Fafard:2:(3) 26.213694] [smpi_replay/VERBOSE] 2 compute 2e9 26.213694
-> [Fafard:2:(3) 26.213694] [smpi_replay/VERBOSE] 2 Irecv 1 1 1e6 0.000000
+> [Fafard:2:(3) 26.213694] [smpi_replay/VERBOSE] 2 irecv 1 1 1e6 0.000000
> [Jupiter:1:(2) 26.403860] [smpi_replay/VERBOSE] 1 send 2 1 1e6 13.296913
> [Fafard:2:(3) 29.490406] [smpi_replay/VERBOSE] 2 compute 2.5e8 3.276712
> [Fafard:2:(3) 29.490606] [smpi_replay/VERBOSE] 2 test 1 2 1 0.000200
> [Fafard:2:(3) 32.767318] [smpi_replay/VERBOSE] 2 compute 2.5e8 3.276712
> [Fafard:2:(3) 32.767318] [smpi_replay/VERBOSE] 2 wait 1 2 1 0.000000
-> [Fafard:2:(3) 32.767318] [smpi_replay/VERBOSE] 2 Isend 0 2 1e6 0.000000
+> [Fafard:2:(3) 32.767318] [smpi_replay/VERBOSE] 2 isend 0 2 1e6 0.000000
> [Tremblay:0:(1) 32.923014] [smpi_replay/VERBOSE] 0 recv 2 2 1e6 12.367458
> [Fafard:2:(3) 39.320741] [smpi_replay/VERBOSE] 2 compute 5e8 6.553424
> [Fafard:2:(3) 39.320741] [smpi_replay/INFO] Simulation time 39.320741
p Test of AllReduce replay with SMPI (one trace for all processes)
-< replay/actions_allReduce.txt
+< replay/actions_allreduce.txt
$ mkfile replay/one_trace
$ ../../smpi_script/bin/smpirun -no-privatize -ext smpi_replay --log=replay.thresh:critical --log=smpi_replay.thresh:verbose --log=no_loc --cfg=smpi/simulate-computation:no -np 3 -platform ${srcdir:=.}/../platforms/small_platform.xml -hostfile ${srcdir:=.}/hostfile ./replay/smpi_replay replay/one_trace --log=smpi_kernel.thres:warning --log=xbt_cfg.thres:warning
-> [Tremblay:0:(1) 5.112775] [smpi_replay/VERBOSE] 0 allReduce 5e4 5e8 5.112775
-> [Jupiter:1:(2) 6.584135] [smpi_replay/VERBOSE] 1 allReduce 5e4 5e8 6.584135
-> [Fafard:2:(3) 6.584775] [smpi_replay/VERBOSE] 2 allReduce 5e4 5e8 6.584775
+> [Tremblay:0:(1) 5.112775] [smpi_replay/VERBOSE] 0 allreduce 5e4 5e8 5.112775
+> [Jupiter:1:(2) 6.584135] [smpi_replay/VERBOSE] 1 allreduce 5e4 5e8 6.584135
+> [Fafard:2:(3) 6.584775] [smpi_replay/VERBOSE] 2 allreduce 5e4 5e8 6.584775
> [Tremblay:0:(1) 10.209875] [smpi_replay/VERBOSE] 0 compute 5e8 5.097100
> [Jupiter:1:(2) 13.137559] [smpi_replay/VERBOSE] 1 compute 5e8 6.553424
> [Fafard:2:(3) 13.138198] [smpi_replay/VERBOSE] 2 compute 5e8 6.553424
$ mkfile replay/one_trace
$ ../../smpi_script/bin/smpirun -no-privatize -ext smpi_replay --log=replay.thresh:critical --log=smpi_replay.thresh:verbose --log=no_loc --cfg=smpi/simulate-computation:no -np 3 -platform ${srcdir:=.}/../platforms/small_platform.xml -hostfile ${srcdir:=.}/hostfile ./replay/smpi_replay replay/one_trace --log=smpi_kernel.thres:warning --log=xbt_cfg.thres:warning
-> [Tremblay:0:(1) 0.004041] [smpi_replay/VERBOSE] 0 allToAll 500 500 0.004041
-> [Fafard:2:(3) 0.006920] [smpi_replay/VERBOSE] 2 allToAll 500 500 0.006920
-> [Jupiter:1:(2) 0.006920] [smpi_replay/VERBOSE] 1 allToAll 500 500 0.006920
+> [Tremblay:0:(1) 0.004041] [smpi_replay/VERBOSE] 0 alltoall 500 500 0.004041
+> [Fafard:2:(3) 0.006920] [smpi_replay/VERBOSE] 2 alltoall 500 500 0.006920
+> [Jupiter:1:(2) 0.006920] [smpi_replay/VERBOSE] 1 alltoall 500 500 0.006920
> [Jupiter:1:(2) 0.006920] [smpi_replay/INFO] Simulation time 0.006920
$ mkfile replay/one_trace
$ ../../smpi_script/bin/smpirun -no-privatize -ext smpi_replay --log=replay.thresh:critical --log=smpi_replay.thresh:verbose --log=no_loc --cfg=smpi/simulate-computation:no -np 3 -platform ${srcdir:=.}/../platforms/small_platform.xml -hostfile ${srcdir:=.}/hostfile ./replay/smpi_replay replay/one_trace --log=smpi_kernel.thres:warning --log=xbt_cfg.thres:warning
-> [Tremblay:0:(1) 0.004000] [smpi_replay/VERBOSE] 0 allToAllV 100 1 40 30 1000 1 80 100 0.004000
-> [Jupiter:1:(2) 0.006935] [smpi_replay/VERBOSE] 1 allToAllV 1000 80 1 40 1000 40 1 30 0.006935
-> [Fafard:2:(3) 0.006936] [smpi_replay/VERBOSE] 2 allToAllV 1000 100 30 1 1000 30 40 1 0.006936
+> [Tremblay:0:(1) 0.004000] [smpi_replay/VERBOSE] 0 alltoallv 100 1 40 30 1000 1 80 100 0.004000
+> [Jupiter:1:(2) 0.006935] [smpi_replay/VERBOSE] 1 alltoallv 1000 80 1 40 1000 40 1 30 0.006935
+> [Fafard:2:(3) 0.006936] [smpi_replay/VERBOSE] 2 alltoallv 1000 100 30 1 1000 30 40 1 0.006936
> [Fafard:2:(3) 0.006936] [smpi_replay/INFO] Simulation time 0.006936
$ rm -f replay/one_trace
$ mkfile replay/one_trace
$ ../../smpi_script/bin/smpirun -no-privatize -ext smpi_replay --log=replay.thresh:critical --log=smpi_replay.thresh:verbose --log=no_loc --cfg=smpi/simulate-computation:no -np 4 -platform ${srcdir:=.}/../platforms/small_platform.xml -hostfile ${srcdir:=.}/hostfile ./replay/smpi_replay replay/one_trace --log=smpi_kernel.thres:warning --log=xbt_cfg.thres:warning
-> [Tremblay:0:(1) 1.397261] [smpi_replay/VERBOSE] 0 allGatherV 275427 275427 275427 275427 204020 0 0 1.397261
-> [Ginette:3:(4) 1.760421] [smpi_replay/VERBOSE] 3 allGatherV 204020 275427 275427 275427 204020 0 0 1.760421
-> [Fafard:2:(3) 1.941986] [smpi_replay/VERBOSE] 2 allGatherV 275427 275427 275427 275427 204020 0 0 1.941986
-> [Jupiter:1:(2) 1.941986] [smpi_replay/VERBOSE] 1 allGatherV 275427 275427 275427 275427 204020 0 0 1.941986
+> [Tremblay:0:(1) 1.397261] [smpi_replay/VERBOSE] 0 allgatherv 275427 275427 275427 275427 204020 0 0 1.397261
+> [Ginette:3:(4) 1.760421] [smpi_replay/VERBOSE] 3 allgatherv 204020 275427 275427 275427 204020 0 0 1.760421
+> [Fafard:2:(3) 1.941986] [smpi_replay/VERBOSE] 2 allgatherv 275427 275427 275427 275427 204020 0 0 1.941986
+> [Jupiter:1:(2) 1.941986] [smpi_replay/VERBOSE] 1 allgatherv 275427 275427 275427 275427 204020 0 0 1.941986
> [Jupiter:1:(2) 1.941986] [smpi_replay/INFO] Simulation time 1.941986
$ rm -f replay/one_trace
! output sort 19
$ ../../smpi_script/bin/smpirun -no-privatize -ext smpi_replay --log=replay.thresh:critical --log=smpi_replay.thresh:verbose --log=no_loc --cfg=smpi/simulate-computation:no -np 3 -platform ${srcdir:=.}/../platforms/small_platform.xml -hostfile ${srcdir:=.}/hostfile ./replay/smpi_replay replay/one_trace --log=smpi_kernel.thres:warning --log=xbt_cfg.thres:warning
-> [Fafard:2:(3) 0.000000] [smpi_replay/VERBOSE] 2 Irecv 1 1 3000 0.000000
-> [Fafard:2:(3) 0.000000] [smpi_replay/VERBOSE] 2 Isend 0 2 3000 0.000000
-> [Jupiter:1:(2) 0.000000] [smpi_replay/VERBOSE] 1 Isend 0 0 2000 0.000000
-> [Jupiter:1:(2) 0.000000] [smpi_replay/VERBOSE] 1 Isend 2 1 3000 0.000000
-> [Jupiter:1:(2) 0.000000] [smpi_replay/VERBOSE] 1 waitAll 0.000000
-> [Tremblay:0:(1) 0.000000] [smpi_replay/VERBOSE] 0 Irecv 1 0 2000 0.000000
-> [Tremblay:0:(1) 0.000000] [smpi_replay/VERBOSE] 0 Irecv 2 2 3000 0.000000
-> [Tremblay:0:(1) 0.003787] [smpi_replay/VERBOSE] 0 waitAll 0.003787
-> [Fafard:2:(3) 0.006220] [smpi_replay/VERBOSE] 2 waitAll 0.006220
+> [Fafard:2:(3) 0.000000] [smpi_replay/VERBOSE] 2 irecv 1 1 3000 0.000000
+> [Fafard:2:(3) 0.000000] [smpi_replay/VERBOSE] 2 isend 0 2 3000 0.000000
+> [Jupiter:1:(2) 0.000000] [smpi_replay/VERBOSE] 1 isend 0 0 2000 0.000000
+> [Jupiter:1:(2) 0.000000] [smpi_replay/VERBOSE] 1 isend 2 1 3000 0.000000
+> [Jupiter:1:(2) 0.000000] [smpi_replay/VERBOSE] 1 waitall 0.000000
+> [Tremblay:0:(1) 0.000000] [smpi_replay/VERBOSE] 0 irecv 1 0 2000 0.000000
+> [Tremblay:0:(1) 0.000000] [smpi_replay/VERBOSE] 0 irecv 2 2 3000 0.000000
+> [Tremblay:0:(1) 0.003787] [smpi_replay/VERBOSE] 0 waitall 0.003787
+> [Fafard:2:(3) 0.006220] [smpi_replay/VERBOSE] 2 waitall 0.006220
> [Fafard:2:(3) 0.006220] [smpi_replay/INFO] Simulation time 0.006220
$ rm -f replay/one_trace
0 compute 131738
0 gather 795 795 0
0 compute 302221294
-0 allToAll 1746 1746
+0 alltoall 1746 1746
0 compute 276029
0 barrier
0 compute 409757278
0 comm_size 32
-0 allReduce 32 12513009 1
+0 allreduce 32 12513009 1
0 compute 3035449395
0 compute 1525
-0 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+0 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
0 compute 1058645731
0 barrier
0 compute 7153
0 compute 332
-0 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+0 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
0 compute 915153801
0 barrier
0 compute 1455003037
-0 allToAll 13824 13824
+0 alltoall 13824 13824
0 compute 72649027
-0 allToAll 13824 13824
+0 alltoall 13824 13824
0 compute 20523056
0 comm_size 32
-0 allReduce 1 11040132
+0 allreduce 1 11040132
0 compute 1383678084
0 comm_size 32
-0 allReduce 795 56429098
+0 allreduce 795 56429098
0 compute 1518182851
0 comm_size 32
-0 allReduce 2 67666587
+0 allreduce 2 67666587
0 compute 21668953
-0 allToAll 13824 13824
+0 alltoall 13824 13824
0 compute 72648705
-0 allToAll 13824 13824
+0 alltoall 13824 13824
0 compute 20522147
0 comm_size 32
-0 allReduce 1 3081964
+0 allreduce 1 3081964
0 compute 47498994
0 comm_size 32
-0 allReduce 32 13171326 1
+0 allreduce 32 13171326 1
0 compute 62566160216
0 finalize
\ No newline at end of file
1 compute 124787
1 gather 795 795 0
1 compute 228879934
-1 allToAll 1746 1746
+1 alltoall 1746 1746
1 compute 276028
1 barrier
1 compute 409315679
1 comm_size 32
-1 allReduce 32 21827181 1
+1 allreduce 32 21827181 1
1 compute 3038127257
1 compute 1525
-1 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+1 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
1 compute 1058645731
1 barrier
1 compute 7154
1 compute 332
-1 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+1 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
1 compute 915153808
1 barrier
1 compute 1455231051
-1 allToAll 13824 13824
+1 alltoall 13824 13824
1 compute 72647798
-1 allToAll 13824 13824
+1 alltoall 13824 13824
1 compute 20518087
1 comm_size 32
-1 allReduce 1 10444633
+1 allreduce 1 10444633
1 compute 1384067914
1 comm_size 32
-1 allReduce 795 58255749
+1 allreduce 795 58255749
1 compute 1517885375
1 comm_size 32
-1 allReduce 2 68351849
+1 allreduce 2 68351849
1 compute 21530357
-1 allToAll 13824 13824
+1 alltoall 13824 13824
1 compute 72647756
-1 allToAll 13824 13824
+1 alltoall 13824 13824
1 compute 20517481
1 comm_size 32
-1 allReduce 1 1808047
+1 allreduce 1 1808047
1 compute 53953284
1 comm_size 32
-1 allReduce 32 5776377 1
+1 allreduce 32 5776377 1
1 compute 62566095398
1 finalize
\ No newline at end of file
10 compute 123090
10 gather 795 795 0
10 compute 237860371
-10 allToAll 1746 1746
+10 alltoall 1746 1746
10 compute 276028
10 barrier
10 compute 409270211
10 comm_size 32
-10 allReduce 32 21656181 1
+10 allreduce 32 21656181 1
10 compute 3038108208
10 compute 1525
-10 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+10 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
10 compute 1058646373
10 barrier
10 compute 7748
10 compute 332
-10 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+10 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
10 compute 915154398
10 barrier
10 compute 1453147816
-10 allToAll 13824 13824
+10 alltoall 13824 13824
10 compute 72649003
-10 allToAll 13824 13824
+10 alltoall 13824 13824
10 compute 20518692
10 comm_size 32
-10 allReduce 1 6543263
+10 allreduce 1 6543263
10 compute 1382081052
10 comm_size 32
-10 allReduce 795 58749470
+10 allreduce 795 58749470
10 compute 1508885748
10 comm_size 32
-10 allReduce 2 66543554
+10 allreduce 2 66543554
10 compute 21530874
-10 allToAll 13824 13824
+10 alltoall 13824 13824
10 compute 72648683
-10 allToAll 13824 13824
+10 alltoall 13824 13824
10 compute 20517934
10 comm_size 32
-10 allReduce 1 4179296
+10 allreduce 1 4179296
10 compute 75547890
10 comm_size 32
-10 allReduce 32 1690483 1
+10 allreduce 32 1690483 1
10 compute 60481270200
10 finalize
\ No newline at end of file
11 compute 123074
11 gather 795 795 0
11 compute 237795802
-11 allToAll 1746 1746
+11 alltoall 1746 1746
11 compute 276029
11 barrier
11 compute 409270454
11 comm_size 32
-11 allReduce 32 21078260 1
+11 allreduce 32 21078260 1
11 compute 3038108987
11 compute 1525
-11 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+11 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
11 compute 1058646373
11 barrier
11 compute 7748
11 compute 332
-11 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+11 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
11 compute 915154399
11 barrier
11 compute 1452184865
-11 allToAll 13824 13824
+11 alltoall 13824 13824
11 compute 72648818
-11 allToAll 13824 13824
+11 alltoall 13824 13824
11 compute 20518600
11 comm_size 32
-11 allReduce 1 6623555
+11 allreduce 1 6623555
11 compute 1381107033
11 comm_size 32
-11 allReduce 795 59025436
+11 allreduce 795 59025436
11 compute 1506911402
11 comm_size 32
-11 allReduce 2 67305727
+11 allreduce 2 67305727
11 compute 21530742
-11 allToAll 13824 13824
+11 alltoall 13824 13824
11 compute 72648500
-11 allToAll 13824 13824
+11 alltoall 13824 13824
11 compute 20517845
11 comm_size 32
-11 allReduce 1 3273339
+11 allreduce 1 3273339
11 compute 73026425
11 comm_size 32
-11 allReduce 32 2103365 1
+11 allreduce 32 2103365 1
11 compute 60481269681
11 finalize
\ No newline at end of file
12 compute 123084
12 gather 795 795 0
12 compute 238369956
-12 allToAll 1746 1746
+12 alltoall 1746 1746
12 compute 276029
12 barrier
12 compute 409270143
12 comm_size 32
-12 allReduce 32 25565173 1
+12 allreduce 32 25565173 1
12 compute 3038107993
12 compute 1525
-12 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+12 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
12 compute 1058646378
12 barrier
12 compute 7748
12 compute 332
-12 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+12 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
12 compute 915154396
12 barrier
12 compute 1451516381
-12 allToAll 13824 13824
+12 alltoall 13824 13824
12 compute 72647798
-12 allToAll 13824 13824
+12 alltoall 13824 13824
12 compute 20518135
12 comm_size 32
-12 allReduce 1 12073536
+12 allreduce 1 12073536
12 compute 1380438209
12 comm_size 32
-12 allReduce 795 67953607
+12 allreduce 795 67953607
12 compute 1505264885
12 comm_size 32
-12 allReduce 2 76015831
+12 allreduce 2 76015831
12 compute 21530249
-12 allToAll 13824 13824
+12 alltoall 13824 13824
12 compute 72647572
-12 allToAll 13824 13824
+12 alltoall 13824 13824
12 compute 20517146
12 comm_size 32
-12 allReduce 1 4487443
+12 allreduce 1 4487443
12 compute 72922086
12 comm_size 32
-12 allReduce 32 2563964 1
+12 allreduce 32 2563964 1
12 compute 60481266895
12 finalize
\ No newline at end of file
13 compute 123092
13 gather 795 795 0
13 compute 240038222
-13 allToAll 1746 1746
+13 alltoall 1746 1746
13 compute 276028
13 barrier
13 compute 409270018
13 comm_size 32
-13 allReduce 32 7746985 1
+13 allreduce 32 7746985 1
13 compute 3038107988
13 compute 1525
-13 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+13 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
13 compute 1058646386
13 barrier
13 compute 7748
13 compute 332
-13 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+13 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
13 compute 915154400
13 barrier
13 compute 1451801158
-13 allToAll 13824 13824
+13 alltoall 13824 13824
13 compute 72647798
-13 allToAll 13824 13824
+13 alltoall 13824 13824
13 compute 20518127
13 comm_size 32
-13 allReduce 1 9105419
+13 allreduce 1 9105419
13 compute 1380758960
13 comm_size 32
-13 allReduce 795 14210244
+13 allreduce 795 14210244
13 compute 1504573388
13 comm_size 32
-13 allReduce 2 22024401
+13 allreduce 2 22024401
13 compute 21530249
-13 allToAll 13824 13824
+13 alltoall 13824 13824
13 compute 72647572
-13 allToAll 13824 13824
+13 alltoall 13824 13824
13 compute 20517143
13 comm_size 32
-13 allReduce 1 2831799
+13 allreduce 1 2831799
13 compute 80870940
13 comm_size 32
-13 allReduce 32 12028 1
+13 allreduce 32 12028 1
13 compute 60481266716
13 finalize
\ No newline at end of file
14 compute 123086
14 gather 795 795 0
14 compute 242344291
-14 allToAll 1746 1746
+14 alltoall 1746 1746
14 compute 276030
14 barrier
14 compute 409270145
14 comm_size 32
-14 allReduce 32 21679149 1
+14 allreduce 32 21679149 1
14 compute 3038107618
14 compute 1525
-14 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+14 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
14 compute 1058646389
14 barrier
14 compute 7748
14 compute 332
-14 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+14 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
14 compute 915154403
14 barrier
14 compute 1450254991
-14 allToAll 13824 13824
+14 alltoall 13824 13824
14 compute 72648708
-14 allToAll 13824 13824
+14 alltoall 13824 13824
14 compute 20518595
14 comm_size 32
-14 allReduce 1 9822
+14 allreduce 1 9822
14 compute 1379180697
14 comm_size 32
-14 allReduce 795 60372594
+14 allreduce 795 60372594
14 compute 1501942786
14 comm_size 32
-14 allReduce 2 64877886
+14 allreduce 2 64877886
14 compute 21530677
-14 allToAll 13824 13824
+14 alltoall 13824 13824
14 compute 72648532
-14 allToAll 13824 13824
+14 alltoall 13824 13824
14 compute 20517895
14 comm_size 32
-14 allReduce 1 3409934
+14 allreduce 1 3409934
14 compute 73960084
14 comm_size 32
-14 allReduce 32 2319543 1
+14 allreduce 32 2319543 1
14 compute 60481266385
14 finalize
\ No newline at end of file
15 compute 123090
15 gather 795 795 0
15 compute 248083384
-15 allToAll 1746 1746
+15 alltoall 1746 1746
15 compute 276028
15 barrier
15 compute 409270090
15 comm_size 32
-15 allReduce 32 19421153 1
+15 allreduce 32 19421153 1
15 compute 3038107647
15 compute 1525
-15 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+15 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
15 compute 1058646397
15 barrier
15 compute 7748
15 compute 332
-15 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+15 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
15 compute 915154396
15 barrier
15 compute 1449498615
-15 allToAll 13824 13824
+15 alltoall 13824 13824
15 compute 72648698
-15 allToAll 13824 13824
+15 alltoall 13824 13824
15 compute 20518595
15 comm_size 32
-15 allReduce 1 5301084
+15 allreduce 1 5301084
15 compute 1378420591
15 comm_size 32
-15 allReduce 795 59634324
+15 allreduce 795 59634324
15 compute 1500197233
15 comm_size 32
-15 allReduce 2 64461369
+15 allreduce 2 64461369
15 compute 21530676
-15 allToAll 13824 13824
+15 alltoall 13824 13824
15 compute 72648532
-15 allToAll 13824 13824
+15 alltoall 13824 13824
15 compute 20517894
15 comm_size 32
-15 allReduce 1 2260549
+15 allreduce 1 2260549
15 compute 73034913
15 comm_size 32
-15 allReduce 32 2746349 1
+15 allreduce 32 2746349 1
15 compute 60481266209
15 finalize
\ No newline at end of file
16 compute 124216
16 gather 795 795 0
16 compute 248019689
-16 allToAll 1746 1746
+16 alltoall 1746 1746
16 compute 276028
16 barrier
16 compute 409314252
16 comm_size 32
-16 allReduce 32 18101827 1
+16 allreduce 32 18101827 1
16 compute 3038128476
16 compute 1525
-16 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+16 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
16 compute 1058645807
16 barrier
16 compute 7155
16 compute 332
-16 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+16 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
16 compute 915153808
16 barrier
16 compute 1449835669
-16 allToAll 13824 13824
+16 alltoall 13824 13824
16 compute 72647803
-16 allToAll 13824 13824
+16 alltoall 13824 13824
16 compute 20518093
16 comm_size 32
-16 allReduce 1 5226292
+16 allreduce 1 5226292
16 compute 1378796665
16 comm_size 32
-16 allReduce 795 72159977
+16 allreduce 795 72159977
16 compute 1499557201
16 comm_size 32
-16 allReduce 2 63798502
+16 allreduce 2 63798502
16 compute 21530343
-16 allToAll 13824 13824
+16 alltoall 13824 13824
16 compute 72647758
-16 allToAll 13824 13824
+16 alltoall 13824 13824
16 compute 20517492
16 comm_size 32
-16 allReduce 1 3823764
+16 allreduce 1 3823764
16 compute 81457367
16 comm_size 32
-16 allReduce 32 1411377 1
+16 allreduce 32 1411377 1
16 compute 60481274473
16 finalize
\ No newline at end of file
17 compute 124783
17 gather 795 795 0
17 compute 247125067
-17 allToAll 1746 1746
+17 alltoall 1746 1746
17 compute 276029
17 barrier
17 compute 409314675
17 comm_size 32
-17 allReduce 32 22370115 1
+17 allreduce 32 22370115 1
17 compute 3038128753
17 compute 1525
-17 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+17 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
17 compute 1058645810
17 barrier
17 compute 7155
17 compute 332
-17 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+17 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
17 compute 915153803
17 barrier
17 compute 1448145232
-17 allToAll 13824 13824
+17 alltoall 13824 13824
17 compute 72647798
-17 allToAll 13824 13824
+17 alltoall 13824 13824
17 compute 20518070
17 comm_size 32
-17 allReduce 1 11132375
+17 allreduce 1 11132375
17 compute 1377069218
17 comm_size 32
-17 allReduce 795 71401648
+17 allreduce 795 71401648
17 compute 1496774529
17 comm_size 32
-17 allReduce 2 78649106
+17 allreduce 2 78649106
17 compute 21530303
-17 allToAll 13824 13824
+17 alltoall 13824 13824
17 compute 72647756
-17 allToAll 13824 13824
+17 alltoall 13824 13824
17 compute 20517446
17 comm_size 32
-17 allReduce 1 4068354
+17 allreduce 1 4068354
17 compute 73311170
17 comm_size 32
-17 allReduce 32 4382080 1
+17 allreduce 32 4382080 1
17 compute 60481274164
17 finalize
\ No newline at end of file
18 compute 124220
18 gather 795 795 0
18 compute 245938721
-18 allToAll 1746 1746
+18 alltoall 1746 1746
18 compute 276028
18 barrier
18 compute 409314088
18 comm_size 32
-18 allReduce 32 25414757 1
+18 allreduce 32 25414757 1
18 compute 3038128233
18 compute 1525
-18 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+18 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
18 compute 1058646229
18 barrier
18 compute 7563
18 compute 332
-18 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+18 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
18 compute 915154213
18 barrier
18 compute 1447518376
-18 allToAll 13824 13824
+18 alltoall 13824 13824
18 compute 72648622
-18 allToAll 13824 13824
+18 alltoall 13824 13824
18 compute 20518479
18 comm_size 32
-18 allReduce 1 11692905
+18 allreduce 1 11692905
18 compute 1376442859
18 comm_size 32
-18 allReduce 795 70015898
+18 allreduce 795 70015898
18 compute 1495165972
18 comm_size 32
-18 allReduce 2 77379595
+18 allreduce 2 77379595
18 compute 21530774
-18 allToAll 13824 13824
+18 alltoall 13824 13824
18 compute 72648820
-18 allToAll 13824 13824
+18 alltoall 13824 13824
18 compute 20517985
18 comm_size 32
-18 allReduce 1 5117785
+18 allreduce 1 5117785
18 compute 73632276
18 comm_size 32
-18 allReduce 32 3373668 1
+18 allreduce 32 3373668 1
18 compute 60481274381
18 finalize
\ No newline at end of file
19 compute 124765
19 gather 795 795 0
19 compute 245277367
-19 allToAll 1746 1746
+19 alltoall 1746 1746
19 compute 276030
19 barrier
19 compute 409314528
19 comm_size 32
-19 allReduce 32 21818681 1
+19 allreduce 32 21818681 1
19 compute 3038130250
19 compute 1525
-19 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+19 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
19 compute 1058646413
19 barrier
19 compute 7749
19 compute 332
-19 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+19 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
19 compute 915154395
19 barrier
19 compute 1447127347
-19 allToAll 13824 13824
+19 alltoall 13824 13824
19 compute 72648659
-19 allToAll 13824 13824
+19 alltoall 13824 13824
19 compute 20518521
19 comm_size 32
-19 allReduce 1 10938129
+19 allreduce 1 10938129
19 compute 1376061784
19 comm_size 32
-19 allReduce 795 71878923
+19 allreduce 795 71878923
19 compute 1493790470
19 comm_size 32
-19 allReduce 2 75179146
+19 allreduce 2 75179146
19 compute 21530919
-19 allToAll 13824 13824
+19 alltoall 13824 13824
19 compute 72648815
-19 allToAll 13824 13824
+19 alltoall 13824 13824
19 compute 20518034
19 comm_size 32
-19 allReduce 1 4342658
+19 allreduce 1 4342658
19 compute 75851167
19 comm_size 32
-19 allReduce 32 2469080 1
+19 allreduce 32 2469080 1
19 compute 60481269761
19 finalize
\ No newline at end of file
2 compute 123092
2 gather 795 795 0
2 compute 231016664
-2 allToAll 1746 1746
+2 alltoall 1746 1746
2 compute 276030
2 barrier
2 compute 409271032
2 comm_size 32
-2 allReduce 32 21600798 1
+2 allreduce 32 21600798 1
2 compute 3038108638
2 compute 1525
-2 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+2 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
2 compute 1058646336
2 barrier
2 compute 7748
2 compute 332
-2 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+2 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
2 compute 915154403
2 barrier
2 compute 1453971872
-2 allToAll 13824 13824
+2 alltoall 13824 13824
2 compute 72648604
-2 allToAll 13824 13824
+2 alltoall 13824 13824
2 compute 20518491
2 comm_size 32
-2 allReduce 1 9133457
+2 allreduce 1 9133457
2 compute 1382778051
2 comm_size 32
-2 allReduce 795 56590206
+2 allreduce 795 56590206
2 compute 1515905980
2 comm_size 32
-2 allReduce 2 68499832
+2 allreduce 2 68499832
2 compute 21531910
-2 allToAll 13824 13824
+2 alltoall 13824 13824
2 compute 72648598
-2 allToAll 13824 13824
+2 alltoall 13824 13824
2 compute 20518490
2 comm_size 32
-2 allReduce 1 2165903
+2 allreduce 1 2165903
2 compute 47383939
2 comm_size 32
-2 allReduce 32 7340509 1
+2 allreduce 32 7340509 1
2 compute 62566097135
2 finalize
\ No newline at end of file
20 compute 124204
20 gather 795 795 0
20 compute 244616488
-20 allToAll 1746 1746
+20 alltoall 1746 1746
20 compute 276028
20 barrier
20 compute 409313968
20 comm_size 32
-20 allReduce 32 5510267 1
+20 allreduce 32 5510267 1
20 compute 3038130868
20 compute 1525
-20 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+20 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
20 compute 1058646421
20 barrier
20 compute 7748
20 compute 332
-20 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+20 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
20 compute 915154401
20 barrier
20 compute 1445555133
-20 allToAll 13824 13824
+20 alltoall 13824 13824
20 compute 72648538
-20 allToAll 13824 13824
+20 alltoall 13824 13824
20 compute 20518471
20 comm_size 32
-20 allReduce 1 9691440
+20 allreduce 1 9691440
20 compute 1374455267
20 comm_size 32
-20 allReduce 795 51860
+20 allreduce 795 51860
20 compute 1491203354
20 comm_size 32
-20 allReduce 2 3286786
+20 allreduce 2 3286786
20 compute 21530547
-20 allToAll 13824 13824
+20 alltoall 13824 13824
20 compute 72648392
-20 allToAll 13824 13824
+20 alltoall 13824 13824
20 compute 20517562
20 comm_size 32
-20 allReduce 1 2308549
+20 allreduce 1 2308549
20 compute 68196979
20 comm_size 32
-20 allReduce 32 6148766 1
+20 allreduce 32 6148766 1
20 compute 60481269975
20 finalize
\ No newline at end of file
21 compute 124765
21 gather 795 795 0
21 compute 244537166
-21 allToAll 1746 1746
+21 alltoall 1746 1746
21 compute 276028
21 barrier
21 compute 409314436
21 comm_size 32
-21 allReduce 32 7208831 1
+21 allreduce 32 7208831 1
21 compute 3038131051
21 compute 1525
-21 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+21 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
21 compute 1058646427
21 barrier
21 compute 7748
21 compute 332
-21 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+21 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
21 compute 915154402
21 barrier
21 compute 1444252854
-21 allToAll 13824 13824
+21 alltoall 13824 13824
21 compute 72648659
-21 allToAll 13824 13824
+21 alltoall 13824 13824
21 compute 20518520
21 comm_size 32
-21 allReduce 1 10479272
+21 allreduce 1 10479272
21 compute 1373126126
21 comm_size 32
-21 allReduce 795 3035528
+21 allreduce 795 3035528
21 compute 1488989422
21 comm_size 32
-21 allReduce 2 5357055
+21 allreduce 2 5357055
21 compute 21530648
-21 allToAll 13824 13824
+21 alltoall 13824 13824
21 compute 72648512
-21 allToAll 13824 13824
+21 alltoall 13824 13824
21 compute 20517622
21 comm_size 32
-21 allReduce 1 2161693
+21 allreduce 1 2161693
21 compute 62152034
21 comm_size 32
-21 allReduce 32 11552421 1
+21 allreduce 32 11552421 1
21 compute 60481265726
21 finalize
\ No newline at end of file
22 compute 124196
22 gather 795 795 0
22 compute 244528768
-22 allToAll 1746 1746
+22 alltoall 1746 1746
22 compute 276028
22 barrier
22 compute 409313850
22 comm_size 32
-22 allReduce 32 21210529 1
+22 allreduce 32 21210529 1
22 compute 3038129064
22 compute 1525
-22 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+22 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
22 compute 1058646432
22 barrier
22 compute 7748
22 compute 332
-22 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+22 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
22 compute 915154399
22 barrier
22 compute 1444306844
-22 allToAll 13824 13824
+22 alltoall 13824 13824
22 compute 72648999
-22 allToAll 13824 13824
+22 alltoall 13824 13824
22 compute 20518738
22 comm_size 32
-22 allReduce 1 8631733
+22 allreduce 1 8631733
22 compute 1373203364
22 comm_size 32
-22 allReduce 795 60367712
+22 allreduce 795 60367712
22 compute 1488212129
22 comm_size 32
-22 allReduce 2 63829458
+22 allreduce 2 63829458
22 compute 21530897
-22 allToAll 13824 13824
+22 alltoall 13824 13824
22 compute 72648612
-22 allToAll 13824 13824
+22 alltoall 13824 13824
22 compute 20517980
22 comm_size 32
-22 allReduce 1 1900324
+22 allreduce 1 1900324
22 compute 67460267
22 comm_size 32
-22 allReduce 32 4780370 1
+22 allreduce 32 4780370 1
22 compute 60481266360
22 finalize
\ No newline at end of file
23 compute 124765
23 gather 795 795 0
23 compute 244359495
-23 allToAll 1746 1746
+23 alltoall 1746 1746
23 compute 276028
23 barrier
23 compute 409314315
23 comm_size 32
-23 allReduce 32 8004854 1
+23 allreduce 32 8004854 1
23 compute 3038128913
23 compute 1525
-23 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+23 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
23 compute 1058646436
23 barrier
23 compute 7748
23 compute 332
-23 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+23 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
23 compute 915154403
23 barrier
23 compute 1442892049
-23 allToAll 13824 13824
+23 alltoall 13824 13824
23 compute 72649058
-23 allToAll 13824 13824
+23 alltoall 13824 13824
23 compute 20518772
23 comm_size 32
-23 allReduce 1 11366080
+23 allreduce 1 11366080
23 compute 1371757234
23 comm_size 32
-23 allReduce 795 19651581
+23 allreduce 795 19651581
23 compute 1485895784
23 comm_size 32
-23 allReduce 2 26240201
+23 allreduce 2 26240201
23 compute 21530587
-23 allToAll 13824 13824
+23 alltoall 13824 13824
23 compute 72648412
-23 allToAll 13824 13824
+23 alltoall 13824 13824
23 compute 20517845
23 comm_size 32
-23 allReduce 1 10247
+23 allreduce 1 10247
23 compute 60435010
23 comm_size 32
-23 allReduce 32 11206099 1
+23 allreduce 32 11206099 1
23 compute 60481266309
23 finalize
\ No newline at end of file
24 compute 124220
24 gather 795 795 0
24 compute 244917352
-24 allToAll 1746 1746
+24 alltoall 1746 1746
24 compute 276028
24 barrier
24 compute 409313728
24 comm_size 32
-24 allReduce 32 20511086 1
+24 allreduce 32 20511086 1
24 compute 3038169943
24 compute 1525
-24 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+24 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
24 compute 1058646440
24 barrier
24 compute 7748
24 compute 332
-24 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+24 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
24 compute 915154401
24 barrier
24 compute 1441883451
-24 allToAll 13824 13824
+24 alltoall 13824 13824
24 compute 72648998
-24 allToAll 13824 13824
+24 alltoall 13824 13824
24 compute 20518690
24 comm_size 32
-24 allReduce 1 10902682
+24 allreduce 1 10902682
24 compute 1370730665
24 comm_size 32
-24 allReduce 795 61540948
+24 allreduce 795 61540948
24 compute 1484080041
24 comm_size 32
-24 allReduce 2 60816736
+24 allreduce 2 60816736
24 compute 21532130
-24 allToAll 13824 13824
+24 alltoall 13824 13824
24 compute 72648997
-24 allToAll 13824 13824
+24 alltoall 13824 13824
24 compute 20518693
24 comm_size 32
-24 allReduce 1 2120822
+24 allreduce 1 2120822
24 compute 56435059
24 comm_size 32
-24 allReduce 32 5105979 1
+24 allreduce 32 5105979 1
24 compute 60481270241
24 finalize
\ No newline at end of file
25 compute 124196
25 gather 795 795 0
25 compute 246170355
-25 allToAll 1746 1746
+25 alltoall 1746 1746
25 compute 276028
25 barrier
25 compute 409313755
25 comm_size 32
-25 allReduce 32 5669119 1
+25 allreduce 32 5669119 1
25 compute 2984230246
25 compute 1525
-25 allToAllV 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+25 alltoallv 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
25 compute 1058646447
25 barrier
25 compute 7748
25 compute 332
-25 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
+25 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
25 compute 854144618
25 barrier
25 compute 1441635387
-25 allToAll 13824 13824
+25 alltoall 13824 13824
25 compute 72649040
-25 allToAll 13824 13824
+25 alltoall 13824 13824
25 compute 20518716
25 comm_size 32
-25 allReduce 1 12261945
+25 allreduce 1 12261945
25 compute 1370492323
25 comm_size 32
-25 allReduce 795 21773495
+25 allreduce 795 21773495
25 compute 1483078560
25 comm_size 32
-25 allReduce 2 154705
+25 allreduce 2 154705
25 compute 21532187
-25 allToAll 13824 13824
+25 alltoall 13824 13824
25 compute 72649043
-25 allToAll 13824 13824
+25 alltoall 13824 13824
25 compute 20518717
25 comm_size 32
-25 allReduce 1 3265650
+25 allreduce 1 3265650
25 compute 58721452
25 comm_size 32
-25 allReduce 32 10423698 1
+25 allreduce 32 10423698 1
25 compute 60481270126
25 finalize
\ No newline at end of file
26 compute 124198
26 gather 795 795 0
26 compute 247190645
-26 allToAll 1746 1746
+26 alltoall 1746 1746
26 compute 276028
26 barrier
26 compute 409313648
26 comm_size 32
-26 allReduce 32 2008745 1
+26 allreduce 32 2008745 1
26 compute 2984229928
26 compute 1525
-26 allToAllV 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+26 alltoallv 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
26 compute 1058646452
26 barrier
26 compute 7748
26 compute 332
-26 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
+26 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
26 compute 854144617
26 barrier
26 compute 1439789686
-26 allToAll 13824 13824
+26 alltoall 13824 13824
26 compute 60542846
-26 allToAll 13824 13824
+26 alltoall 13824 13824
26 compute 20518690
26 comm_size 32
-26 allReduce 1 13508570
+26 allreduce 1 13508570
26 compute 1368595739
26 comm_size 32
-26 allReduce 795 21460724
+26 allreduce 795 21460724
26 compute 1480450203
26 comm_size 32
-26 allReduce 2 175917
+26 allreduce 2 175917
26 compute 21532129
-26 allToAll 13824 13824
+26 alltoall 13824 13824
26 compute 60542843
-26 allToAll 13824 13824
+26 alltoall 13824 13824
26 compute 20518690
26 comm_size 32
-26 allReduce 1 2415378
+26 allreduce 1 2415378
26 compute 47321791
26 comm_size 32
-26 allReduce 32 18752319 1
+26 allreduce 32 18752319 1
26 compute 60481270254
26 finalize
\ No newline at end of file
27 compute 124781
27 gather 795 795 0
27 compute 249045970
-27 allToAll 1746 1746
+27 alltoall 1746 1746
27 compute 276028
27 barrier
27 compute 409314179
27 comm_size 32
-27 allReduce 32 20892423 1
+27 allreduce 32 20892423 1
27 compute 2984229524
27 compute 1525
-27 allToAllV 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+27 alltoallv 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
27 compute 1058646456
27 barrier
27 compute 7748
27 compute 332
-27 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
+27 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
27 compute 854144616
27 barrier
27 compute 1439369571
-27 allToAll 13824 13824
+27 alltoall 13824 13824
27 compute 11627
-27 allToAll 13824 13824
+27 alltoall 13824 13824
27 compute 20518472
27 comm_size 32
-27 allReduce 1 11959015
+27 allreduce 1 11959015
27 compute 1368175777
27 comm_size 32
-27 allReduce 795 64450365
+27 allreduce 795 64450365
27 compute 1479400301
27 comm_size 32
-27 allReduce 2 73325118
+27 allreduce 2 73325118
27 compute 21531861
-27 allToAll 13824 13824
+27 alltoall 13824 13824
27 compute 11627
-27 allToAll 13824 13824
+27 alltoall 13824 13824
27 compute 20518471
27 comm_size 32
-27 allReduce 1 2864414
+27 allreduce 1 2864414
27 compute 47255321
27 comm_size 32
-27 allReduce 32 8761373 1
+27 allreduce 32 8761373 1
27 compute 60481269758
27 finalize
\ No newline at end of file
28 compute 124218
28 gather 795 795 0
28 compute 251211915
-28 allToAll 1746 1746
+28 alltoall 1746 1746
28 compute 276028
28 barrier
28 compute 409313589
28 comm_size 32
-28 allReduce 32 23989 1
+28 allreduce 32 23989 1
28 compute 2984190851
28 compute 1525
-28 allToAllV 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+28 alltoallv 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
28 compute 1058646461
28 barrier
28 compute 7748
28 compute 332
-28 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
+28 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
28 compute 854144614
28 barrier
28 compute 1439729605
-28 allToAll 13824 13824
+28 alltoall 13824 13824
28 compute 12007
-28 allToAll 13824 13824
+28 alltoall 13824 13824
28 compute 20518662
28 comm_size 32
-28 allReduce 1 10348621
+28 allreduce 1 10348621
28 compute 1368565748
28 comm_size 32
-28 allReduce 795 15729812
+28 allreduce 795 15729812
28 compute 1479129984
28 comm_size 32
-28 allReduce 2 4751179
+28 allreduce 2 4751179
28 compute 21530860
-28 allToAll 13824 13824
+28 alltoall 13824 13824
28 compute 11643
-28 allToAll 13824 13824
+28 alltoall 13824 13824
28 compute 20517914
28 comm_size 32
-28 allReduce 1 1949975
+28 allreduce 1 1949975
28 compute 53951331
28 comm_size 32
-28 allReduce 32 11843209 1
+28 allreduce 32 11843209 1
28 compute 60481275100
28 finalize
\ No newline at end of file
29 compute 124771
29 gather 795 795 0
29 compute 254905873
-29 allToAll 1746 1746
+29 alltoall 1746 1746
29 compute 276028
29 barrier
29 compute 409314112
29 comm_size 32
-29 allReduce 32 9721390 1
+29 allreduce 32 9721390 1
29 compute 2984189170
29 compute 1525
-29 allToAllV 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13096620 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 386820 386820 386820 386820 386820 386820 386820
+29 alltoallv 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13096620 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 386820 386820 386820 386820 386820 386820 386820
29 compute 1058608153
29 barrier
29 compute 7748
29 compute 332
-29 allToAllV 13096620 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 386820 386820 386820 386820 386820 386820 386820 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
+29 alltoallv 13096620 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 386820 386820 386820 386820 386820 386820 386820 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
29 compute 854144616
29 barrier
29 compute 1438469865
-29 allToAll 13824 13824
+29 alltoall 13824 13824
29 compute 12107
-29 allToAll 13824 13824
+29 alltoall 13824 13824
29 compute 20518712
29 comm_size 32
-29 allReduce 1 11245758
+29 allreduce 1 11245758
29 compute 1367275873
29 comm_size 32
-29 allReduce 795 5153979
+29 allreduce 795 5153979
29 compute 1477150179
29 comm_size 32
-29 allReduce 2 348477
+29 allreduce 2 348477
29 compute 21530956
-29 allToAll 13824 13824
+29 alltoall 13824 13824
29 compute 11743
-29 allToAll 13824 13824
+29 alltoall 13824 13824
29 compute 20517964
29 comm_size 32
-29 allReduce 1 1709170
+29 allreduce 1 1709170
29 compute 47317512
29 comm_size 32
-29 allReduce 32 18350318 1
+29 allreduce 32 18350318 1
29 compute 60481274853
29 finalize
\ No newline at end of file
3 compute 123084
3 gather 795 795 0
3 compute 232272172
-3 allToAll 1746 1746
+3 alltoall 1746 1746
3 compute 276028
3 barrier
3 compute 409340097
3 comm_size 32
-3 allReduce 32 26788025 1
+3 allreduce 32 26788025 1
3 compute 3038128971
3 compute 1525
-3 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+3 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
3 compute 1058646334
3 barrier
3 compute 7748
3 compute 332
-3 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+3 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
3 compute 915154397
3 barrier
3 compute 1453551932
-3 allToAll 13824 13824
+3 alltoall 13824 13824
3 compute 72648918
-3 allToAll 13824 13824
+3 alltoall 13824 13824
3 compute 20518658
3 comm_size 32
-3 allReduce 1 10088694
+3 allreduce 1 10088694
3 compute 1382358093
3 comm_size 32
-3 allReduce 795 71440182
+3 allreduce 795 71440182
3 compute 1514856084
3 comm_size 32
-3 allReduce 2 82679426
+3 allreduce 2 82679426
3 compute 21532011
-3 allToAll 13824 13824
+3 alltoall 13824 13824
3 compute 72648918
-3 allToAll 13824 13824
+3 alltoall 13824 13824
3 compute 20518650
3 comm_size 32
-3 allReduce 1 2381618
+3 allreduce 1 2381618
3 compute 47351593
3 comm_size 32
-3 allReduce 32 8075911 1
+3 allreduce 32 8075911 1
3 compute 62566098113
3 finalize
\ No newline at end of file
30 compute 124224
30 gather 795 795 0
30 compute 87539379
-30 allToAll 1746 1746
+30 alltoall 1746 1746
30 compute 276029
30 barrier
30 compute 409313447
30 comm_size 32
-30 allReduce 32 18389809 1
+30 allreduce 32 18389809 1
30 compute 2984187726
30 compute 1525
-30 allToAllV 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13096620 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 386820 386820 386820 386820 386820 386820 386820
+30 alltoallv 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13096620 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 386820 386820 386820 386820 386820 386820 386820
30 compute 1058608125
30 barrier
30 compute 7712
30 compute 332
-30 allToAllV 13096620 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 386820 386820 386820 386820 386820 386820 386820 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
+30 alltoallv 13096620 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 386820 386820 386820 386820 386820 386820 386820 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
30 compute 854144576
30 barrier
30 compute 866371
-30 allToAll 13824 13824
+30 alltoall 13824 13824
30 compute 10867
-30 allToAll 13824 13824
+30 alltoall 13824 13824
30 compute 26015
30 comm_size 32
-30 allReduce 1 18926466
+30 allreduce 1 18926466
30 compute 37317
30 comm_size 32
-30 allReduce 795 794203941
+30 allreduce 795 794203941
30 compute 40864
30 comm_size 32
-30 allReduce 2 830615079
+30 allreduce 2 830615079
30 compute 835216
-30 allToAll 13824 13824
+30 alltoall 13824 13824
30 compute 10827
-30 allToAll 13824 13824
+30 alltoall 13824 13824
30 compute 25978
30 comm_size 32
-30 allReduce 1 10224036
+30 allreduce 1 10224036
30 compute 415815
30 comm_size 32
-30 allReduce 32 30205336 1
+30 allreduce 32 30205336 1
30 compute 60481268189
30 finalize
\ No newline at end of file
31 compute 124781
31 gather 795 795 0
31 compute 1572422
-31 allToAll 1746 1746
+31 alltoall 1746 1746
31 compute 276028
31 barrier
31 compute 409314016
31 comm_size 32
-31 allReduce 32 9196581 1
+31 allreduce 32 9196581 1
31 compute 2984188007
31 compute 1525
-31 allToAllV 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13096620 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 386820 386820 386820 386820 386820 386820 386820
+31 alltoallv 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820 13096620 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 386820 386820 386820 386820 386820 386820 386820
31 compute 1058608127
31 barrier
31 compute 7712
31 compute 332
-31 allToAllV 13096620 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 386820 386820 386820 386820 386820 386820 386820 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
+31 alltoallv 13096620 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 414450 386820 386820 386820 386820 386820 386820 386820 12378660 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386834 386820 386820 386820
31 compute 854144579
31 barrier
31 compute 866778
-31 allToAll 13824 13824
+31 alltoall 13824 13824
31 compute 12127
-31 allToAll 13824 13824
+31 alltoall 13824 13824
31 compute 26660
31 comm_size 32
-31 allReduce 1 16408053
+31 allreduce 1 16408053
31 compute 37317
31 comm_size 32
-31 allReduce 795 790700995
+31 allreduce 795 790700995
31 compute 40864
31 comm_size 32
-31 allReduce 2 821059536
+31 allreduce 2 821059536
31 compute 835351
-31 allToAll 13824 13824
+31 alltoall 13824 13824
31 compute 11527
-31 allToAll 13824 13824
+31 alltoall 13824 13824
31 compute 26358
31 comm_size 32
-31 allReduce 1 8581230
+31 allreduce 1 8581230
31 compute 415699
31 comm_size 32
-31 allReduce 32 39149334 1
+31 allreduce 32 39149334 1
31 compute 60481268000
31 finalize
\ No newline at end of file
4 compute 123084
4 gather 795 795 0
4 compute 234577369
-4 allToAll 1746 1746
+4 alltoall 1746 1746
4 compute 276028
4 barrier
4 compute 409270911
4 comm_size 32
-4 allReduce 32 18332025 1
+4 allreduce 32 18332025 1
4 compute 3038108202
4 compute 1525
-4 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+4 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
4 compute 1058646339
4 barrier
4 compute 7748
4 compute 332
-4 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+4 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
4 compute 915154399
4 barrier
4 compute 1454298529
-4 allToAll 13824 13824
+4 alltoall 13824 13824
4 compute 72649038
-4 allToAll 13824 13824
+4 alltoall 13824 13824
4 compute 20518710
4 comm_size 32
-4 allReduce 1 8764901
+4 allreduce 1 8764901
4 compute 1383149406
4 comm_size 32
-4 allReduce 795 58555205
+4 allreduce 795 58555205
4 compute 1514972225
4 comm_size 32
-4 allReduce 2 69773224
+4 allreduce 2 69773224
4 compute 21530912
-4 allToAll 13824 13824
+4 alltoall 13824 13824
4 compute 72648716
-4 allToAll 13824 13824
+4 alltoall 13824 13824
4 compute 20517953
4 comm_size 32
-4 allReduce 1 1849838
+4 allreduce 1 1849838
4 compute 57241690
4 comm_size 32
-4 allReduce 32 5143014 1
+4 allreduce 32 5143014 1
4 compute 62566104180
4 finalize
\ No newline at end of file
5 compute 123092
5 gather 795 795 0
5 compute 234047138
-5 allToAll 1746 1746
+5 alltoall 1746 1746
5 compute 276029
5 barrier
5 compute 409270813
5 comm_size 32
-5 allReduce 32 26075048 1
+5 allreduce 32 26075048 1
5 compute 3038108674
5 compute 1525
-5 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+5 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
5 compute 1058646347
5 barrier
5 compute 7748
5 compute 332
-5 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+5 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
5 compute 915154397
5 barrier
5 compute 1453660307
-5 allToAll 13824 13824
+5 alltoall 13824 13824
5 compute 72648498
-5 allToAll 13824 13824
+5 alltoall 13824 13824
5 compute 20518451
5 comm_size 32
-5 allReduce 1 9630351
+5 allreduce 1 9630351
5 compute 1382506563
5 comm_size 32
-5 allReduce 795 86398358
+5 allreduce 795 86398358
5 compute 1513570165
5 comm_size 32
-5 allReduce 2 82995562
+5 allreduce 2 82995562
5 compute 21531740
-5 allToAll 13824 13824
+5 alltoall 13824 13824
5 compute 72648502
-5 allToAll 13824 13824
+5 alltoall 13824 13824
5 compute 20518455
5 comm_size 32
-5 allReduce 1 5339502
+5 allreduce 1 5339502
5 compute 56257081
5 comm_size 32
-5 allReduce 32 5420716 1
+5 allreduce 32 5420716 1
5 compute 62566097632
5 finalize
\ No newline at end of file
6 compute 122556
6 gather 795 795 0
6 compute 234561584
-6 allToAll 1746 1746
+6 alltoall 1746 1746
6 compute 276028
6 barrier
6 compute 409318441
6 comm_size 32
-6 allReduce 32 26071846 1
+6 allreduce 32 26071846 1
6 compute 3038127610
6 compute 1525
-6 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+6 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
6 compute 1058646353
6 barrier
6 compute 7748
6 compute 332
-6 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+6 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
6 compute 915154402
6 barrier
6 compute 1453541410
-6 allToAll 13824 13824
+6 alltoall 13824 13824
6 compute 72647797
-6 allToAll 13824 13824
+6 alltoall 13824 13824
6 compute 20518134
6 comm_size 32
-6 allReduce 1 10500073
+6 allreduce 1 10500073
6 compute 1382402404
6 comm_size 32
-6 allReduce 795 68340267
+6 allreduce 795 68340267
6 compute 1512701245
6 comm_size 32
-6 allReduce 2 78355589
+6 allreduce 2 78355589
6 compute 21530601
-6 allToAll 13824 13824
+6 alltoall 13824 13824
6 compute 72648538
-6 allToAll 13824 13824
+6 alltoall 13824 13824
6 compute 20517863
6 comm_size 32
-6 allReduce 1 4895476
+6 allreduce 1 4895476
6 compute 59401613
6 comm_size 32
-6 allReduce 32 5355812 1
+6 allreduce 32 5355812 1
6 compute 60481271077
6 finalize
\ No newline at end of file
7 compute 122554
7 gather 795 795 0
7 compute 237329701
-7 allToAll 1746 1746
+7 alltoall 1746 1746
7 compute 276028
7 barrier
7 compute 409318380
7 comm_size 32
-7 allReduce 32 26584387 1
+7 allreduce 32 26584387 1
7 compute 3038128727
7 compute 1525
-7 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+7 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
7 compute 1058646357
7 barrier
7 compute 7748
7 compute 332
-7 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+7 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
7 compute 915154397
7 barrier
7 compute 1453990364
-7 allToAll 13824 13824
+7 alltoall 13824 13824
7 compute 72648818
-7 allToAll 13824 13824
+7 alltoall 13824 13824
7 compute 20518600
7 comm_size 32
-7 allReduce 1 8126143
+7 allreduce 1 8126143
7 compute 1382888665
7 comm_size 32
-7 allReduce 795 69286550
+7 allreduce 795 69286550
7 compute 1512356592
7 comm_size 32
-7 allReduce 2 81727397
+7 allreduce 2 81727397
7 compute 21532031
-7 allToAll 13824 13824
+7 alltoall 13824 13824
7 compute 72648817
-7 allToAll 13824 13824
+7 alltoall 13824 13824
7 compute 20518605
7 comm_size 32
-7 allReduce 1 6044829
+7 allreduce 1 6044829
7 compute 67755022
7 comm_size 32
-7 allReduce 32 6756150 1
+7 allreduce 32 6756150 1
7 compute 60481269682
7 finalize
\ No newline at end of file
8 compute 123092
8 gather 795 795 0
8 compute 238954617
-8 allToAll 1746 1746
+8 alltoall 1746 1746
8 compute 276028
8 barrier
8 compute 409270545
8 comm_size 32
-8 allReduce 32 25745168 1
+8 allreduce 32 25745168 1
8 compute 3038107710
8 compute 1525
-8 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+8 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
8 compute 1058646360
8 barrier
8 compute 7748
8 compute 332
-8 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+8 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
8 compute 915154395
8 barrier
8 compute 1452608369
-8 allToAll 13824 13824
+8 alltoall 13824 13824
8 compute 72647804
-8 allToAll 13824 13824
+8 alltoall 13824 13824
8 compute 20518145
8 comm_size 32
-8 allReduce 1 8001472
+8 allreduce 1 8001472
8 compute 1381477053
8 comm_size 32
-8 allReduce 795 71042507
+8 allreduce 795 71042507
8 compute 1510067990
8 comm_size 32
-8 allReduce 2 81335041
+8 allreduce 2 81335041
8 compute 21530405
-8 allToAll 13824 13824
+8 alltoall 13824 13824
8 compute 72647763
-8 allToAll 13824 13824
+8 alltoall 13824 13824
8 compute 20517454
8 comm_size 32
-8 allReduce 1 6534793
+8 allreduce 1 6534793
8 compute 61162725
8 comm_size 32
-8 allReduce 32 5464575 1
+8 allreduce 32 5464575 1
8 compute 60481271862
8 finalize
\ No newline at end of file
9 compute 123078
9 gather 795 795 0
9 compute 237815920
-9 allToAll 1746 1746
+9 alltoall 1746 1746
9 compute 276028
9 barrier
9 compute 409270449
9 comm_size 32
-9 allReduce 32 25455903 1
+9 allreduce 32 25455903 1
9 compute 3038109146
9 compute 1525
-9 allToAllV 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
+9 alltoallv 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834
9 compute 1058646363
9 barrier
9 compute 7748
9 compute 332
-9 allToAllV 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
+9 alltoallv 13097094 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 386834 386834 386834 386834 386834 386834 386834 13262850 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414465 414450 414450 414450
9 compute 915154399
9 barrier
9 compute 1452824428
-9 allToAll 13824 13824
+9 alltoall 13824 13824
9 compute 72648698
-9 allToAll 13824 13824
+9 alltoall 13824 13824
9 compute 20518540
9 comm_size 32
-9 allReduce 1 7592763
+9 allreduce 1 7592763
9 compute 1381722154
9 comm_size 32
-9 allReduce 795 70563282
+9 allreduce 795 70563282
9 compute 1509467030
9 comm_size 32
-9 allReduce 2 81813815
+9 allreduce 2 81813815
9 compute 21530882
-9 allToAll 13824 13824
+9 alltoall 13824 13824
9 compute 72648896
-9 allToAll 13824 13824
+9 alltoall 13824 13824
9 compute 20518044
9 comm_size 32
-9 allReduce 1 5855800
+9 allreduce 1 5855800
9 compute 67632473
9 comm_size 32
-9 allReduce 32 3946615 1
+9 allreduce 32 3946615 1
9 compute 60481273037
9 finalize
\ No newline at end of file
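The trace hunks above all follow the same time-independent replay layout, `<rank> <action> [numeric args...]`, with the patch lowercasing the action names (`allReduce` becomes `allreduce`, `allToAllV` becomes `alltoallv`, and so on). As a minimal sketch, not SimGrid code and with all names illustrative, a line of this format can be parsed like this:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// One parsed line of a time-independent SMPI replay trace.
struct TraceLine {
  int rank;                     // issuing MPI rank, first field
  std::string action;           // lowercase since this change: allreduce, alltoallv, ...
  std::vector<long long> args;  // message sizes or compute amounts, as seen above
};

// Split "<rank> <action> [args...]" into its fields; layout assumed from the examples.
TraceLine parse_trace_line(const std::string& line)
{
  std::istringstream in(line);
  TraceLine t;
  in >> t.rank >> t.action;
  long long v;
  while (in >> v)
    t.args.push_back(v);
  return t;
}
```

For example, the line `7 allreduce 1 8126143` parses to rank 7, action `allreduce`, and the two numeric arguments from the trace.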
SG_BEGIN_DECL()
XBT_PUBLIC void sg_engine_load_platform(const char* filename);
XBT_PUBLIC void sg_engine_load_deployment(const char* filename);
+XBT_PUBLIC void sg_engine_run();
XBT_PUBLIC void sg_engine_register_function(const char* name, int (*code)(int, char**));
XBT_PUBLIC void sg_engine_register_default(int (*code)(int, char**));
XBT_PUBLIC double sg_engine_get_clock();
#ifndef JEDULE_HPP_
#define JEDULE_HPP_
-#include <simgrid/config.h>
#include <simgrid/jedule/jedule_events.hpp>
#include <simgrid/jedule/jedule_platform.hpp>
#include <cstdio>
-#if SIMGRID_HAVE_JEDULE
-
namespace simgrid {
namespace jedule{
public:
Jedule()=default;
~Jedule();
- std::vector<Event *> event_set;
- Container* root_container = nullptr;
- std::unordered_map<char*, char*> meta_info;
- void addMetaInfo(char* key, char* value);
- void cleanupOutput();
- void writeOutput(FILE *file);
+ std::vector<Event*> event_set_;
+ Container* root_container_ = nullptr;
+ void add_meta_info(char* key, char* value);
+ void cleanup_output();
+ void write_output(FILE* file);
+
+ // deprecated
+ XBT_ATTRIB_DEPRECATED_v323("Please use Jedule::add_meta_info()") void addMetaInfo(char* key, char* value)
+ {
+ add_meta_info(key, value);
+ }
+ XBT_ATTRIB_DEPRECATED_v323("Please use Jedule::cleanup_output()") void cleanupOutput() { cleanup_output(); }
+ XBT_ATTRIB_DEPRECATED_v323("Please use Jedule::write_output()") void writeOutput(FILE* file) { write_output(file); }
+
+private:
+ std::unordered_map<char*, char*> meta_info_;
};
}
}
typedef simgrid::jedule::Jedule *jedule_t;
-#endif
#endif /* JEDULE_HPP_ */
#include <simgrid/jedule/jedule_platform.hpp>
-#include <simgrid/config.h>
#include <simgrid/forward.h>
#include <vector>
#include <string>
#include <unordered_map>
-#if SIMGRID_HAVE_JEDULE
namespace simgrid {
namespace jedule{
public:
Event(std::string name, double start_time, double end_time, std::string type);
~Event();
- void addCharacteristic(char* characteristic);
- void addResources(std::vector<sg_host_t>* host_selection);
- void addInfo(char* key, char* value);
+ void add_characteristic(char* characteristic);
+ void add_resources(std::vector<sg_host_t>* host_selection);
+ void add_info(char* key, char* value);
void print(FILE* file);
+ // deprecated
+ XBT_ATTRIB_DEPRECATED_v323("Please use Event::add_characteristic()") void addCharacteristic(char* characteristic)
+ {
+ add_characteristic(characteristic);
+ }
+ XBT_ATTRIB_DEPRECATED_v323("Please use Event::add_resources()") void addResources(
+ std::vector<sg_host_t>* host_selection)
+ {
+ add_resources(host_selection);
+ }
+ XBT_ATTRIB_DEPRECATED_v323("Please use Event::add_info()") void addInfo(char* key, char* value)
+ {
+ add_info(key, value);
+ }
+
private:
- std::string name;
- double start_time;
- double end_time;
- std::string type;
- std::vector<jed_subset_t>* resource_subsets;
- std::vector<char*> characteristics_list; /* just a list of names (strings) */
- std::unordered_map<char*, char*> info_map; /* key/value pairs */
+ std::string name_;
+ double start_time_;
+ double end_time_;
+ std::string type_;
+ std::vector<jed_subset_t>* resource_subsets_;
+ std::vector<char*> characteristics_list_; /* just a list of names */
+ std::unordered_map<char*, char*> info_map_; /* key/value pairs */
};
}
}
typedef simgrid::jedule::Event* jed_event_t;
-#endif
#endif /* JEDULE_EVENTS_H_ */
#ifndef JED_SIMGRID_PLATFORM_H_
#define JED_SIMGRID_PLATFORM_H_
-#include <simgrid/config.h>
#include <simgrid/forward.h>
#include <xbt/dynar.h>
#include <vector>
#include <string>
-#if SIMGRID_HAVE_JEDULE
-
namespace simgrid {
namespace jedule{
class XBT_PUBLIC Container {
explicit Container(std::string name);
virtual ~Container();
private:
- int last_id;
- int is_lowest = 0;
+ int last_id_;
+ int is_lowest_ = 0;
+
public:
std::string name;
std::unordered_map<const char*, unsigned int> name2id;
Container *parent = nullptr;
std::vector<Container*> children;
std::vector<sg_host_t> resource_list;
- void addChild(Container* child);
- void addResources(std::vector<sg_host_t> hosts);
- void createHierarchy(sg_netzone_t from_as);
- std::vector<int> getHierarchy();
- std::string getHierarchyAsString();
+ void add_child(Container* child);
+ void add_resources(std::vector<sg_host_t> hosts);
+ void create_hierarchy(sg_netzone_t from_as);
+ std::vector<int> get_hierarchy();
+ std::string get_hierarchy_as_string();
void print(FILE *file);
- void printResources(FILE *file);
+ void print_resources(FILE* file);
+
+ // deprecated
+ XBT_ATTRIB_DEPRECATED_v323("Please use Container::add_child()") void addChild(Container* child) { add_child(child); }
+ XBT_ATTRIB_DEPRECATED_v323("Please use Container::add_resources()") void addResources(std::vector<sg_host_t> hosts)
+ {
+ add_resources(hosts);
+ }
+ XBT_ATTRIB_DEPRECATED_v323("Please use Container::create_hierarchy()") void createHierarchy(sg_netzone_t from_as)
+ {
+ create_hierarchy(from_as);
+ }
+ XBT_ATTRIB_DEPRECATED_v323("Please use Container::get_hierarchy()") std::vector<int> getHierarchy()
+ {
+ return get_hierarchy();
+ }
+ XBT_ATTRIB_DEPRECATED_v323("Please use Container::get_hierarchy_as_string()") std::string getHierarchyAsString()
+ {
+ return get_hierarchy_as_string();
+ }
+ XBT_ATTRIB_DEPRECATED_v323("Please use Container::print_resources()") void printResources(FILE* file)
+ {
+ print_resources(file);
+ }
};
class XBT_PUBLIC Subset {
typedef simgrid::jedule::Subset * jed_subset_t;
void get_resource_selection_by_hosts(std::vector<jed_subset_t>* subset_list, std::vector<sg_host_t> *host_list);
-#endif
-
#endif /* JED_SIMGRID_PLATFORM_H_ */
#ifndef JEDULE_SD_BINDING_H_
#define JEDULE_SD_BINDING_H_
-#include <simgrid/config.h>
#include <simgrid/simdag.h>
-#if SIMGRID_HAVE_JEDULE
SG_BEGIN_DECL()
XBT_PUBLIC void jedule_log_sd_event(SD_task_t task);
XBT_PUBLIC void jedule_sd_init(void);
XBT_PUBLIC void jedule_sd_exit(void);
XBT_PUBLIC void jedule_sd_dump(const char* filename);
SG_END_DECL()
-#endif
#endif /* JEDULE_SD_BINDING_H_ */
void seal() override;
~DijkstraZone() override;
- xbt_node_t routeGraphNewNode(int id, int graph_id);
- xbt_node_t nodeMapSearch(int id);
- void newRoute(int src_id, int dst_id, RouteCreationArgs* e_route);
+
+private:
+ xbt_node_t route_graph_new_node(int id, int graph_id);
+ xbt_node_t node_map_search(int id);
+ void new_route(int src_id, int dst_id, RouteCreationArgs* e_route);
+
+public:
/* For each vertex (node) already in the graph,
* make sure it also has a loopback link; this loopback
* can potentially already be in the graph, and in that
void get_local_route(NetPoint* src, NetPoint* dst, RouteCreationArgs* into, double* latency) override;
void parse_specific_arguments(ClusterCreationArgs* cluster) override;
void seal() override;
- void generateRouters();
- void generateLinks();
- void createLink(const std::string& id, int numlinks, resource::LinkImpl** linkup, resource::LinkImpl** linkdown);
void rankId_to_coords(int rank_id, unsigned int coords[4]);
private:
+ void generate_routers();
+ void generate_links();
+ void create_link(const std::string& id, int numlinks, resource::LinkImpl** linkup, resource::LinkImpl** linkdown);
+
simgrid::s4u::Link::SharingPolicy sharing_policy_;
double bw_ = 0;
double lat_ = 0;
*/
void parse_specific_arguments(ClusterCreationArgs* cluster) override;
void add_processing_node(int id);
- void generate_dot_file(const std::string& filename = "fatTree.dot") const;
+ void generate_dot_file(const std::string& filename = "fat_tree.dot") const;
private:
// description of a PGFT (TODO : better doc)
ClusterCreationArgs* cluster_ = nullptr;
void add_link(FatTreeNode* parent, unsigned int parent_port, FatTreeNode* child, unsigned int child_port);
- int getLevelPosition(const unsigned int level);
- void generateLabels();
- void generateSwitches();
- int connectNodeToParents(FatTreeNode* node);
- bool areRelated(FatTreeNode* parent, FatTreeNode* child);
- bool isInSubTree(FatTreeNode* root, FatTreeNode* node);
+ int get_level_position(const unsigned int level);
+ void generate_labels();
+ void generate_switches();
+ int connect_node_to_parents(FatTreeNode* node);
+ bool are_related(FatTreeNode* parent, FatTreeNode* child);
+ bool is_in_sub_tree(FatTreeNode* root, FatTreeNode* node);
};
} // namespace routing
} // namespace kernel
void get_graph(xbt_graph_t graph, std::map<std::string, xbt_node_t>* nodes,
std::map<std::string, xbt_edge_t>* edges) override;
- virtual RouteCreationArgs* newExtendedRoute(RoutingMode hierarchy, NetPoint* src, NetPoint* dst, NetPoint* gw_src,
- NetPoint* gw_dst, std::vector<resource::LinkImpl*>& link_list,
- bool symmetrical, bool change_order);
protected:
- void getRouteCheckParams(NetPoint* src, NetPoint* dst);
- void addRouteCheckParams(NetPoint* src, NetPoint* dst, NetPoint* gw_src, NetPoint* gw_dst,
- std::vector<resource::LinkImpl*>& link_list, bool symmetrical);
+ virtual RouteCreationArgs* new_extended_route(RoutingMode hierarchy, NetPoint* src, NetPoint* dst, NetPoint* gw_src,
+ NetPoint* gw_dst, std::vector<resource::LinkImpl*>& link_list,
+ bool symmetrical, bool change_order);
+ void get_route_check_params(NetPoint* src, NetPoint* dst);
+ void add_route_check_params(NetPoint* src, NetPoint* dst, NetPoint* gw_src, NetPoint* gw_dst,
+ std::vector<resource::LinkImpl*>& link_list, bool symmetrical);
+
+ // deprecated
+ XBT_ATTRIB_DEPRECATED_v323("Please use RoutedZone::new_extended_route()") virtual RouteCreationArgs* newExtendedRoute(
+ RoutingMode hierarchy, NetPoint* src, NetPoint* dst, NetPoint* gw_src, NetPoint* gw_dst,
+ std::vector<resource::LinkImpl*>& link_list, bool symmetrical, bool change_order)
+ {
+ return new_extended_route(hierarchy, src, dst, gw_src, gw_dst, link_list, symmetrical, change_order);
+ }
+ XBT_ATTRIB_DEPRECATED_v323("Please use RoutedZone::get_route_check_params()") void getRouteCheckParams(NetPoint* src,
+ NetPoint* dst)
+ {
+ get_route_check_params(src, dst);
+ }
+ XBT_ATTRIB_DEPRECATED_v323("Please use RoutedZone::add_route_check_params()") void addRouteCheckParams(
+ NetPoint* src, NetPoint* dst, NetPoint* gw_src, NetPoint* gw_dst, std::vector<resource::LinkImpl*>& link_list,
+ bool symmetrical)
+ {
+ add_route_check_params(src, dst, gw_src, gw_dst, link_list, symmetrical);
+ }
};
} // namespace routing
} // namespace kernel
public:
explicit VivaldiZone(NetZone* father, std::string name);
- void setPeerLink(NetPoint* netpoint, double bw_in, double bw_out, std::string coord);
+ void set_peer_link(NetPoint* netpoint, double bw_in, double bw_out, std::string coord);
void get_local_route(NetPoint* src, NetPoint* dst, RouteCreationArgs* into, double* latency) override;
+
+ // deprecated
+ XBT_ATTRIB_DEPRECATED_v323("Please use VivaldiZone::set_peer_link()") void setPeerLink(NetPoint* netpoint,
+ double bw_in, double bw_out,
+ std::string coord)
+ {
+ set_peer_link(netpoint, bw_in, bw_out, coord);
+ }
};
namespace vivaldi {
--- /dev/null
+/* Copyright (c) 2018. The SimGrid Team. All rights reserved. */
+
+/* This program is free software; you can redistribute it and/or modify it
+ * under the terms of the license (GNU LGPL) which comes with this package. */
+
+#ifndef INCLUDE_SIMGRID_MAILBOX_H_
+#define INCLUDE_SIMGRID_MAILBOX_H_
+
+#include <simgrid/forward.h>
+#include <xbt/base.h>
+
+/* C interface */
+SG_BEGIN_DECL()
+
+XBT_PUBLIC void sg_mailbox_set_receiver(const char* alias);
+XBT_PUBLIC int sg_mailbox_listen(const char* alias);
+
+SG_END_DECL()
+
+#endif /* INCLUDE_SIMGRID_MAILBOX_H_ */
#ifndef SIMGRID_MODELCHECKER_H
#define SIMGRID_MODELCHECKER_H
-#include <stddef.h> /* size_t */
-
#include <simgrid/config.h> /* SIMGRID_HAVE_MC ? */
-
#include <xbt/base.h>
+#include <stddef.h> /* size_t */
+
SG_BEGIN_DECL()
XBT_PUBLIC int MC_random(int min, int max);
#include <simgrid/forward.h>
#include <simgrid/host.h>
#include <simgrid/instr.h>
+#include <simgrid/mailbox.h>
#include <simgrid/plugins/live_migration.h>
#include <simgrid/storage.h>
#include <simgrid/vm.h>
+/* Copyright (c) 2009-2018. The SimGrid Team. All rights reserved. */
+
+/* This program is free software; you can redistribute it and/or modify it
+ * under the terms of the license (GNU LGPL) which comes with this package. */
+#ifndef SIMGRID_PLUGINS_LOAD_BALANCER_H_
+#define SIMGRID_PLUGINS_LOAD_BALANCER_H_
+
void sg_load_balancer_plugin_init();
+
+#endif
/* This program is free software; you can redistribute it and/or modify it
* under the terms of the license (GNU LGPL) which comes with this package. */
-#include "private.hpp"
+#ifndef SMPI_REPLAY_HPP_
+#define SMPI_REPLAY_HPP_
+
+#include <boost/algorithm/string/join.hpp>
+#include <src/smpi/include/smpi_process.hpp>
#include <xbt/replay.hpp>
+#include <xbt/ex.h>
+#include <memory>
#include <sstream>
#define CHECK_ACTION_PARAMS(action, mandatory, optional) \
} \
}
+XBT_PRIVATE void* smpi_get_tmp_sendbuffer(int size);
+XBT_PRIVATE void* smpi_get_tmp_recvbuffer(int size);
+XBT_PRIVATE void smpi_free_tmp_buffer(void* buf);
+XBT_PRIVATE void smpi_free_replay_tmp_buffers();
+
+XBT_PRIVATE void log_timed_action(simgrid::xbt::ReplayAction& action, double clock);
+
namespace simgrid {
namespace smpi {
namespace replay {
/**
* Base class for all ReplayActions.
* Note that this class actually implements the behavior of each action
- * while the parsing of the replay arguments is done in the ActionArgParser class.
+ * while the parsing of the replay arguments is done in the @ActionArgParser class.
* In other words: The logic goes here, the setup is done by the ActionArgParser.
*/
template <class T> class ReplayAction {
explicit ReplayAction(std::string name) : name(name), my_proc_id(simgrid::s4u::this_actor::get_pid()) {}
virtual ~ReplayAction() = default;
- virtual void execute(simgrid::xbt::ReplayAction& action);
+ void execute(simgrid::xbt::ReplayAction& action)
+ {
+ // Needs to be re-initialized for every action, hence here
+ double start_time = smpi_process()->simulated_elapsed();
+ args.parse(action, name);
+ kernel(action);
+ if (name != "Init")
+ log_timed_action(action, start_time);
+ }
+
virtual void kernel(simgrid::xbt::ReplayAction& action) = 0;
void* send_buffer(int size) { return smpi_get_tmp_sendbuffer(size); }
void* recv_buffer(int size) { return smpi_get_tmp_recvbuffer(size); }
RequestStorage& req_storage;
public:
- explicit WaitAllAction(RequestStorage& storage) : ReplayAction("waitAll"), req_storage(storage) {}
+ explicit WaitAllAction(RequestStorage& storage) : ReplayAction("waitall"), req_storage(storage) {}
void kernel(simgrid::xbt::ReplayAction& action) override;
};
class AllReduceAction : public ReplayAction<AllReduceArgParser> {
public:
- explicit AllReduceAction() : ReplayAction("allReduce") {}
+ explicit AllReduceAction() : ReplayAction("allreduce") {}
void kernel(simgrid::xbt::ReplayAction& action) override;
};
class AllToAllAction : public ReplayAction<AllToAllArgParser> {
public:
- explicit AllToAllAction() : ReplayAction("allToAll") {}
+ explicit AllToAllAction() : ReplayAction("alltoall") {}
void kernel(simgrid::xbt::ReplayAction& action) override;
};
class ScatterVAction : public ReplayAction<ScatterVArgParser> {
public:
- explicit ScatterVAction() : ReplayAction("scatterV") {}
+ explicit ScatterVAction() : ReplayAction("scatterv") {}
void kernel(simgrid::xbt::ReplayAction& action) override;
};
class ReduceScatterAction : public ReplayAction<ReduceScatterArgParser> {
public:
- explicit ReduceScatterAction() : ReplayAction("reduceScatter") {}
+ explicit ReduceScatterAction() : ReplayAction("reducescatter") {}
void kernel(simgrid::xbt::ReplayAction& action) override;
};
class AllToAllVAction : public ReplayAction<AllToAllVArgParser> {
public:
- explicit AllToAllVAction() : ReplayAction("allToAllV") {}
+ explicit AllToAllVAction() : ReplayAction("alltoallv") {}
void kernel(simgrid::xbt::ReplayAction& action) override;
};
}
}
}
+
+#endif
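The hunk above turns `ReplayAction::execute()` from a virtual function into a fixed, non-virtual skeleton (parse the arguments, run the subclass `kernel()`, log the timing), which is the classic template-method pattern. A simplified standalone sketch of that structure, outside SimGrid and with all names hypothetical:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

using Action = std::vector<std::string>;  // stand-in for simgrid::xbt::ReplayAction

class ReplayActionSketch {
public:
  explicit ReplayActionSketch(std::string name) : name_(std::move(name)) {}
  virtual ~ReplayActionSketch() = default;

  // Non-virtual skeleton: the fixed steps live here, only kernel() varies.
  int execute(const Action& action)
  {
    // (argument parsing and timing/logging would bracket this call)
    kernel(action);
    return ++executed_;  // how many times this action ran
  }

protected:
  virtual void kernel(const Action& action) = 0;  // per-action behavior
  std::string name_;
  int executed_ = 0;
};

class BarrierSketch : public ReplayActionSketch {
public:
  BarrierSketch() : ReplayActionSketch("barrier") {}
  int hits = 0;  // observable effect for the sketch

protected:
  void kernel(const Action&) override { hits++; }
};
```

Making `execute()` non-virtual guarantees every action goes through the same parse/run/log sequence, which is exactly what the patched header enforces for the lowercase replay actions (`allreduce`, `alltoallv`, `waitall`, ...).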
-#ifndef MPI_HELPERS_H
-#define MPI_HELPERS_H
+/* Copyright (c) 2018. The SimGrid Team. All rights reserved. */
-#ifndef _GNU_SOURCE
-#define _GNU_SOURCE
-#endif
+/* This program is free software; you can redistribute it and/or modify it
+ * under the terms of the license (GNU LGPL) which comes with this package. */
-#include <unistd.h>
-#include <sys/time.h> /* Load it before the define next line to not mess with the system headers */
-#if _POSIX_TIMERS
-#include <time.h>
-#endif
+#ifndef SMPI_HELPERS_H
+#define SMPI_HELPERS_H
-int smpi_usleep(useconds_t usecs);
-#if _POSIX_TIMERS > 0
-int smpi_nanosleep(const struct timespec* tp, struct timespec* t);
-int smpi_clock_gettime(clockid_t clk_id, struct timespec* tp);
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
#endif
-unsigned int smpi_sleep(unsigned int secs);
-int smpi_gettimeofday(struct timeval* tv, struct timezone* tz);
-struct option;
-int smpi_getopt_long (int argc, char *const *argv, const char *options, const struct option *long_options, int *opt_index);
-int smpi_getopt (int argc, char *const *argv, const char *options);
+#include <smpi/smpi_helpers_internal.h>
#define sleep(x) smpi_sleep(x)
#define usleep(x) smpi_usleep(x)
#define getopt(x,y,z) smpi_getopt(x,y,z)
#define getopt_long(x,y,z,a,b) smpi_getopt_long(x,y,z,a,b)
+#define getopt_long_only(x,y,z,a,b) smpi_getopt_long_only(x,y,z,a,b)
#endif
--- /dev/null
+/* Copyright (c) 2018. The SimGrid Team. All rights reserved. */
+
+/* This program is free software; you can redistribute it and/or modify it
+ * under the terms of the license (GNU LGPL) which comes with this package. */
+
+#ifndef SMPI_HELPERS_INTERNAL_H
+#define SMPI_HELPERS_INTERNAL_H
+
+#include <unistd.h>
+
+#include <sys/time.h>
+#if _POSIX_TIMERS
+#include <time.h>
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+int smpi_usleep(useconds_t usecs);
+#if _POSIX_TIMERS > 0
+int smpi_nanosleep(const struct timespec* tp, struct timespec* t);
+int smpi_clock_gettime(clockid_t clk_id, struct timespec* tp);
+#endif
+unsigned int smpi_sleep(unsigned int secs);
+int smpi_gettimeofday(struct timeval* tv, struct timezone* tz);
+
+struct option;
+int smpi_getopt_long_only(int argc, char* const* argv, const char* options, const struct option* long_options,
+ int* opt_index);
+int smpi_getopt_long(int argc, char* const* argv, const char* options, const struct option* long_options,
+ int* opt_index);
+int smpi_getopt(int argc, char* const* argv, const char* options);
+
+#ifdef __cplusplus
+} // extern "C"
+#endif
+#endif
-/* Copyright (c) 2011-2018. The SimGrid Team.
- * All rights reserved. */
+/* Copyright (c) 2011-2018. The SimGrid Team. All rights reserved. */
/* This program is free software; you can redistribute it and/or modify it
* under the terms of the license (GNU LGPL) which comes with this package. */
#define XBT_AUTOMATON_H
#include <xbt/dynar.h>
-#include <xbt/sysdep.h>
-#include <xbt/graph.h>
-#include <stdlib.h>
-#include <string.h>
SG_BEGIN_DECL()
JNIEXPORT void JNICALL Java_org_simgrid_msg_Host_setAsyncMailbox(JNIEnv * env, jclass cls_arg, jobject jname)
{
const char *name = env->GetStringUTFChars((jstring) jname, 0);
- MSG_mailbox_set_async(name);
+ sg_mailbox_set_receiver(name);
env->ReleaseStringUTFChars((jstring) jname, name);
}
else if ("x86_64".equalsIgnoreCase(arch) || "AMD64".equalsIgnoreCase(arch))
arch = "amd64";
- if (os.toLowerCase().startsWith("win")){
+ if (os.toLowerCase().startsWith("win")) {
os = "Windows";
- } else if (os.contains("OS X"))
+ } else if (os.contains("OS X")) {
os = "Darwin";
-
+ }
os = os.replace(' ', '_');
arch = arch.replace(' ', '_');
#define SIMGRID_MMALLOC_H 1
#include "src/internal_config.h"
-#if HAVE_MMALLOC
#include <stdio.h> /* for NULL */
#include <sys/types.h> /* for size_t */
typedef struct mdesc s_xbt_mheap_t;
typedef s_xbt_mheap_t* xbt_mheap_t;
+#if HAVE_MMALLOC
/* Allocate SIZE bytes of memory (and memset it to 0). */
XBT_PUBLIC void* mmalloc(xbt_mheap_t md, size_t size);
void* malloc_no_memset(size_t n);
+#endif
SG_END_DECL()
-#endif
#endif /* SIMGRID_MMALLOC_H */
class StateEvent : public PajeEvent {
EntityValue* value;
- std::string filename = "(null)";
#if HAVE_SMPI
+ std::string filename = "(null)";
int linenumber = -1;
#endif
TIData* extra_;
// NoOpTI: init, finalize, test, wait, barrier
explicit TIData(std::string name) : name_(name){};
- // CPuTI: compute, sleep (+ waitAny and waitAll out of laziness)
+ // CPuTI: compute, sleep (+ waitany and waitall out of laziness)
explicit TIData(std::string name, double amount) : name_(name), amount_(amount){};
// Pt2PtTI: send, isend, sssend, issend, recv, irecv
explicit TIData(std::string name, int endpoint, int size, std::string datatype)
: name_(name), endpoint(endpoint), send_size(size), send_type(datatype){};
- // CollTI: bcast, reduce, allReduce, gather, scatter, allGather, allToAll
+ // CollTI: bcast, reduce, allreduce, gather, scatter, allgather, alltoall
explicit TIData(std::string name, int root, double amount, int send_size, int recv_size, std::string send_type,
std::string recv_type)
: name_(name)
, recv_size(recv_size)
, send_type(send_type)
, recv_type(recv_type){};
- // VarCollTI: gatherV, scatterV, allGatherV, allToAllV (+ reduceScatter out of laziness)
+ // VarCollTI: gatherv, scatterv, allgatherv, alltoallv (+ reducescatter out of laziness)
explicit TIData(std::string name, int root, int send_size, std::vector<int>* sendcounts, int recv_size,
std::vector<int>* recvcounts, std::string send_type, std::string recv_type)
: TIData(name, root, send_size, std::shared_ptr<std::vector<int>>(sendcounts), recv_size,
}
std::string display_size() override { return std::to_string(send_size > 0 ? send_size : recv_size); }
};
+
+/**
+ * If we want to wait for an asynchronous communication request, we need to be able
+ * to identify it. We do this by searching for a request identified by (src, dest, tag).
+ */
+class WaitTIData : public TIData {
+ int src;
+ int dest;
+ int tag;
+
+public:
+ explicit WaitTIData(int src, int dest, int tag) : TIData("wait"), src(src), dest(dest), tag(tag){};
+
+ std::string print() override
+ {
+ std::stringstream stream;
+ stream << getName() << " " << src << " " << dest << " " << tag;
+
+ return stream.str();
+ }
+
+ std::string display_size() override { return ""; }
+};
}
}
namespace jedule {
Jedule::~Jedule() {
- delete this->root_container;
- for (auto const& evt : this->event_set)
+ delete this->root_container_;
+ for (auto const& evt : this->event_set_)
delete evt;
- this->event_set.clear();
+ this->event_set_.clear();
}
-void Jedule::addMetaInfo(char *key, char *value) {
+void Jedule::add_meta_info(char* key, char* value)
+{
xbt_assert(key != nullptr);
xbt_assert(value != nullptr);
- this->meta_info.insert({key, value});
+ this->meta_info_.insert({key, value});
}
-void Jedule::writeOutput(FILE *file) {
- if (not this->event_set.empty()) {
+void Jedule::write_output(FILE* file)
+{
+ if (not this->event_set_.empty()) {
fprintf(file, "<jedule>\n");
- if (not this->meta_info.empty()) {
+ if (not this->meta_info_.empty()) {
fprintf(file, " <jedule_meta>\n");
- for (auto const& elm : this->meta_info)
+ for (auto const& elm : this->meta_info_)
fprintf(file, " <prop key=\"%s\" value=\"%s\" />\n",elm.first,elm.second);
fprintf(file, " </jedule_meta>\n");
}
fprintf(file, " <platform>\n");
- this->root_container->print(file);
+ this->root_container_->print(file);
fprintf(file, " </platform>\n");
fprintf(file, " <events>\n");
- for (auto const& event : this->event_set)
+ for (auto const& event : this->event_set_)
event->print(file);
fprintf(file, " </events>\n");
namespace jedule{
Event::Event(std::string name, double start_time, double end_time, std::string type)
- : name(name), start_time(start_time), end_time(end_time), type(type)
+ : name_(name), start_time_(start_time), end_time_(end_time), type_(type)
{
- this->resource_subsets = new std::vector<jed_subset_t>();
+ this->resource_subsets_ = new std::vector<jed_subset_t>();
}
Event::~Event()
{
- if (not this->resource_subsets->empty()) {
- for (auto const& subset : *this->resource_subsets)
+ if (not this->resource_subsets_->empty()) {
+ for (auto const& subset : *this->resource_subsets_)
delete subset;
- delete this->resource_subsets;
+ delete this->resource_subsets_;
}
}
-void Event::addResources(std::vector<sg_host_t> *host_selection)
+void Event::add_resources(std::vector<sg_host_t>* host_selection)
{
- get_resource_selection_by_hosts(this->resource_subsets, host_selection);
+ get_resource_selection_by_hosts(this->resource_subsets_, host_selection);
}
-void Event::addCharacteristic(char *characteristic)
+void Event::add_characteristic(char* characteristic)
{
xbt_assert( characteristic != nullptr );
- this->characteristics_list.push_back(characteristic);
+ this->characteristics_list_.push_back(characteristic);
}
-void Event::addInfo(char* key, char *value) {
+void Event::add_info(char* key, char* value)
+{
xbt_assert((key != nullptr) && value != nullptr);
- this->info_map.insert({key, value});
+ this->info_map_.insert({key, value});
}
void Event::print(FILE *jed_file)
{
fprintf(jed_file, " <event>\n");
- fprintf(jed_file, " <prop key=\"name\" value=\"%s\" />\n", this->name.c_str());
- fprintf(jed_file, " <prop key=\"start\" value=\"%g\" />\n", this->start_time);
- fprintf(jed_file, " <prop key=\"end\" value=\"%g\" />\n", this->end_time);
- fprintf(jed_file, " <prop key=\"type\" value=\"%s\" />\n", this->type.c_str());
+ fprintf(jed_file, " <prop key=\"name\" value=\"%s\" />\n", this->name_.c_str());
+ fprintf(jed_file, " <prop key=\"start\" value=\"%g\" />\n", this->start_time_);
+ fprintf(jed_file, " <prop key=\"end\" value=\"%g\" />\n", this->end_time_);
+ fprintf(jed_file, " <prop key=\"type\" value=\"%s\" />\n", this->type_.c_str());
- xbt_assert(not this->resource_subsets->empty());
+ xbt_assert(not this->resource_subsets_->empty());
fprintf(jed_file, " <res_util>\n");
- for (auto const& subset : *this->resource_subsets) {
+ for (auto const& subset : *this->resource_subsets_) {
fprintf(jed_file, " <select resources=\"");
- fprintf(jed_file, "%s", subset->parent->getHierarchyAsString().c_str());
+ fprintf(jed_file, "%s", subset->parent->get_hierarchy_as_string().c_str());
fprintf(jed_file, ".[%d-%d]", subset->start_idx, subset->start_idx + subset->nres-1);
fprintf(jed_file, "\" />\n");
}
fprintf(jed_file, " </res_util>\n");
- if (not this->characteristics_list.empty()) {
+ if (not this->characteristics_list_.empty()) {
fprintf(jed_file, " <characteristics>\n");
- for (auto const& ch : this->characteristics_list)
+ for (auto const& ch : this->characteristics_list_)
fprintf(jed_file, " <characteristic name=\"%s\" />\n", ch);
fprintf(jed_file, " </characteristics>\n");
}
- if (not this->info_map.empty()) {
+ if (not this->info_map_.empty()) {
fprintf(jed_file, " <info>\n");
- for (auto const& elm : this->info_map)
+ for (auto const& elm : this->info_map_)
fprintf(jed_file, " <prop key=\"%s\" value=\"%s\" />\n",elm.first,elm.second);
fprintf(jed_file, " </info>\n");
}
delete child;
}
-void Container::addChild(jed_container_t child)
+void Container::add_child(jed_container_t child)
{
xbt_assert(child != nullptr);
this->children.push_back(child);
child->parent = this;
}
-void Container::addResources(std::vector<sg_host_t> hosts)
+void Container::add_resources(std::vector<sg_host_t> hosts)
{
- this->is_lowest = 1;
+ this->is_lowest_ = 1;
this->children.clear();
- this->last_id = 0;
+ this->last_id_ = 0;
//FIXME do we need to sort?: xbt_dynar_sort_strings(host_names);
for (auto const& host : hosts) {
const char *host_name = sg_host_get_name(host);
- this->name2id.insert({host_name, this->last_id});
- (this->last_id)++;
+ this->name2id.insert({host_name, this->last_id_});
+ (this->last_id_)++;
host2_simgrid_parent_container.insert({host_name, this});
this->resource_list.push_back(host);
}
}
-void Container::createHierarchy(sg_netzone_t from_as)
+void Container::create_hierarchy(sg_netzone_t from_as)
{
if (from_as->get_children()->empty()) {
// I am not an AS
// add hosts to jedule platform
std::vector<sg_host_t> table = from_as->get_all_hosts();
- this->addResources(table);
+ this->add_resources(table);
} else {
for (auto const& nz : *from_as->get_children()) {
jed_container_t child_container = new simgrid::jedule::Container(std::string(nz->get_cname()));
- this->addChild(child_container);
- child_container->createHierarchy(nz);
+ this->add_child(child_container);
+ child_container->create_hierarchy(nz);
}
}
}
-std::vector<int> Container::getHierarchy()
+std::vector<int> Container::get_hierarchy()
{
if(this->parent != nullptr ) {
if (not this->parent->children.empty()) {
// we are in the last level
- return this->parent->getHierarchy();
+ return this->parent->get_hierarchy();
} else {
unsigned int i =0;
int child_nb = -1;
}
xbt_assert(child_nb > -1);
- std::vector<int> heir_list = this->parent->getHierarchy();
+ std::vector<int> heir_list = this->parent->get_hierarchy();
heir_list.insert(heir_list.begin(), child_nb);
return heir_list;
}
}
}
-std::string Container::getHierarchyAsString()
+std::string Container::get_hierarchy_as_string()
{
std::string output("");
- std::vector<int> heir_list = this->getHierarchy();
+ std::vector<int> heir_list = this->get_hierarchy();
unsigned int length = heir_list.size();
unsigned int i = 0;
return output;
}
-void Container::printResources(FILE * jed_file)
+void Container::print_resources(FILE* jed_file)
{
unsigned int i=0;
xbt_assert(not this->resource_list.empty());
unsigned int res_nb = this->resource_list.size();
- std::string resid = this->getHierarchyAsString();
+ std::string resid = this->get_hierarchy_as_string();
fprintf(jed_file, " <rset id=\"%s\" nb=\"%u\" names=\"", resid.c_str(), res_nb);
for (auto const& res : this->resource_list) {
child->print(jed_file);
}
} else {
- this->printResources(jed_file);
+ this->print_resources(jed_file);
}
fprintf(jed_file, " </res>\n");
}
jed_event_t event = new simgrid::jedule::Event(std::string(SD_task_get_name(task)),
SD_task_get_start_time(task), SD_task_get_finish_time(task), "SD");
- event->addResources(task->allocation);
- my_jedule->event_set.push_back(event);
+ event->add_resources(task->allocation);
+ my_jedule->event_set_.push_back(event);
}
void jedule_sd_init()
my_jedule = new simgrid::jedule::Jedule();
jed_container_t root_container = new simgrid::jedule::Container(std::string(root_comp->get_cname()));
- root_container->createHierarchy(root_comp);
- my_jedule->root_container = root_container;
+ root_container->create_hierarchy(root_comp);
+ my_jedule->root_container_ = root_container;
}
void jedule_sd_exit()
FILE *fh = fopen(fname, "w");
- my_jedule->writeOutput(fh);
+ my_jedule->write_output(fh);
fclose(fh);
xbt_free(fname);
}
}
-xbt_node_t DijkstraZone::routeGraphNewNode(int id, int graph_id)
+xbt_node_t DijkstraZone::route_graph_new_node(int id, int graph_id)
{
graph_node_data_t data = new s_graph_node_data_t;
data->id = id;
return node;
}
-xbt_node_t DijkstraZone::nodeMapSearch(int id)
+xbt_node_t DijkstraZone::node_map_search(int id)
{
auto ret = graph_node_map_.find(id);
return ret == graph_node_map_.end() ? nullptr : ret->second;
/* Parsing */
-void DijkstraZone::newRoute(int src_id, int dst_id, simgrid::kernel::routing::RouteCreationArgs* e_route)
+void DijkstraZone::new_route(int src_id, int dst_id, simgrid::kernel::routing::RouteCreationArgs* e_route)
{
XBT_DEBUG("Load Route from \"%d\" to \"%d\"", src_id, dst_id);
xbt_node_t src = nullptr;
xbt_node_t dst = nullptr;
- xbt_node_t src_elm = nodeMapSearch(src_id);
- xbt_node_t dst_elm = nodeMapSearch(dst_id);
+ xbt_node_t src_elm = node_map_search(src_id);
+ xbt_node_t dst_elm = node_map_search(dst_id);
if (src_elm)
src = src_elm;
/* add nodes if they don't exist in the graph */
if (src_id == dst_id && src == nullptr && dst == nullptr) {
- src = this->routeGraphNewNode(src_id, -1);
+ src = this->route_graph_new_node(src_id, -1);
dst = src;
} else {
if (src == nullptr) {
- src = this->routeGraphNewNode(src_id, -1);
+ src = this->route_graph_new_node(src_id, -1);
}
if (dst == nullptr) {
- dst = this->routeGraphNewNode(dst_id, -1);
+ dst = this->route_graph_new_node(dst_id, -1);
}
}
void DijkstraZone::get_local_route(NetPoint* src, NetPoint* dst, RouteCreationArgs* route, double* lat)
{
- getRouteCheckParams(src, dst);
+ get_route_check_params(src, dst);
int src_id = src->id();
int dst_id = dst->id();
xbt_dynar_t nodes = xbt_graph_get_nodes(route_graph_);
/* Use the graph_node id mapping set to quickly find the nodes */
- xbt_node_t src_elm = nodeMapSearch(src_id);
- xbt_node_t dst_elm = nodeMapSearch(dst_id);
+ xbt_node_t src_elm = node_map_search(src_id);
+ xbt_node_t dst_elm = node_map_search(dst_id);
int src_node_id = static_cast<graph_node_data_t>(xbt_graph_node_get_data(src_elm))->graph_id;
int dst_node_id = static_cast<graph_node_data_t>(xbt_graph_node_get_data(dst_elm))->graph_id;
const char* srcName = src->get_cname();
const char* dstName = dst->get_cname();
- addRouteCheckParams(src, dst, gw_src, gw_dst, link_list, symmetrical);
+ add_route_check_params(src, dst, gw_src, gw_dst, link_list, symmetrical);
/* Create the topology graph */
if (not route_graph_)
* nodes */
/* Add the route to the base */
- RouteCreationArgs* e_route = newExtendedRoute(hierarchy_, src, dst, gw_src, gw_dst, link_list, symmetrical, 1);
- newRoute(src->id(), dst->id(), e_route);
+ RouteCreationArgs* e_route = new_extended_route(hierarchy_, src, dst, gw_src, gw_dst, link_list, symmetrical, 1);
+ new_route(src->id(), dst->id(), e_route);
// Symmetrical YES
if (symmetrical == true) {
gw_dst = gw_tmp;
}
RouteCreationArgs* link_route_back =
- newExtendedRoute(hierarchy_, src, dst, gw_src, gw_dst, link_list, symmetrical, 0);
- newRoute(dst->id(), src->id(), link_route_back);
+ new_extended_route(hierarchy_, src, dst, gw_src, gw_dst, link_list, symmetrical, 0);
+ new_route(dst->id(), src->id(), link_route_back);
}
}
}
return;
}
- this->generateRouters();
- this->generateLinks();
+ this->generate_routers();
+ this->generate_links();
}
DragonflyRouter::DragonflyRouter(int group, int chassis, int blade) : group_(group), chassis_(chassis), blade_(blade)
delete blue_links_;
}
-void DragonflyZone::generateRouters()
+void DragonflyZone::generate_routers()
{
this->routers_ =
new DragonflyRouter*[this->num_groups_ * this->num_chassis_per_group_ * this->num_blades_per_chassis_];
}
}
-void DragonflyZone::createLink(const std::string& id, int numlinks, resource::LinkImpl** linkup,
- resource::LinkImpl** linkdown)
+void DragonflyZone::create_link(const std::string& id, int numlinks, resource::LinkImpl** linkup,
+ resource::LinkImpl** linkdown)
{
*linkup = nullptr;
*linkdown = nullptr;
}
}
-void DragonflyZone::generateLinks()
+void DragonflyZone::generate_links()
{
static int uniqueId = 0;
resource::LinkImpl* linkup;
for (unsigned int j = 0; j < num_links_per_link_ * this->num_nodes_per_blade_; j += num_links_per_link_) {
std::string id = "local_link_from_router_" + std::to_string(i) + "_to_node_" +
std::to_string(j / num_links_per_link_) + "_" + std::to_string(uniqueId);
- this->createLink(id, 1, &linkup, &linkdown);
+ this->create_link(id, 1, &linkup, &linkdown);
this->routers_[i]->my_nodes_[j] = linkup;
if (this->sharing_policy_ == s4u::Link::SharingPolicy::SPLITDUPLEX)
for (unsigned int k = j + 1; k < this->num_blades_per_chassis_; k++) {
std::string id = "green_link_in_chassis_" + std::to_string(i % num_chassis_per_group_) + "_between_routers_" +
std::to_string(j) + "_and_" + std::to_string(k) + "_" + std::to_string(uniqueId);
- this->createLink(id, this->num_links_green_, &linkup, &linkdown);
+ this->create_link(id, this->num_links_green_, &linkup, &linkdown);
this->routers_[i * num_blades_per_chassis_ + j]->green_links_[k] = linkup;
this->routers_[i * num_blades_per_chassis_ + k]->green_links_[j] = linkdown;
for (unsigned int l = 0; l < this->num_blades_per_chassis_; l++) {
std::string id = "black_link_in_group_" + std::to_string(i) + "_between_chassis_" + std::to_string(j) +
"_and_" + std::to_string(k) +"_blade_" + std::to_string(l) + "_" + std::to_string(uniqueId);
- this->createLink(id, this->num_links_black_, &linkup, &linkdown);
+ this->create_link(id, this->num_links_black_, &linkup, &linkdown);
this->routers_[i * num_blades_per_chassis_ * num_chassis_per_group_ + j * num_blades_per_chassis_ + l]
->black_links_[k] = linkup;
this->routers_[routernumj]->blue_links_ = new resource::LinkImpl*;
std::string id = "blue_link_between_group_"+ std::to_string(i) +"_and_" + std::to_string(j) +"_routers_" +
std::to_string(routernumi) + "_and_" + std::to_string(routernumj) + "_" + std::to_string(uniqueId);
- this->createLink(id, this->num_links_blue_, &linkup, &linkdown);
+ this->create_link(id, this->num_links_blue_, &linkup, &linkdown);
this->routers_[routernumi]->blue_links_[0] = linkup;
this->routers_[routernumj]->blue_links_[0] = linkdown;
}
}
-bool FatTreeZone::isInSubTree(FatTreeNode* root, FatTreeNode* node)
+bool FatTreeZone::is_in_sub_tree(FatTreeNode* root, FatTreeNode* node)
{
XBT_DEBUG("Is %d(%u,%u) in the sub tree of %d(%u,%u) ?", node->id, node->level, node->position, root->id, root->level,
root->position);
FatTreeNode* currentNode = source;
// up part
- while (not isInSubTree(currentNode, destination)) {
+ while (not is_in_sub_tree(currentNode, destination)) {
int d = destination->position; // as in d-mod-k
for (unsigned int i = 0; i < currentNode->level; i++)
if (this->levels_ == 0) {
return;
}
- this->generateSwitches();
+ this->generate_switches();
if (XBT_LOG_ISENABLED(surf_route_fat_tree, xbt_log_priority_debug)) {
std::stringstream msgBuffer;
XBT_DEBUG("%s", msgBuffer.str().c_str());
}
- this->generateLabels();
+ this->generate_labels();
unsigned int k = 0;
// Nodes are totally ordered, by level and then by position, in this->nodes
for (unsigned int i = 0; i < this->levels_; i++) {
for (unsigned int j = 0; j < this->nodes_by_level_[i]; j++) {
- this->connectNodeToParents(this->nodes_[k]);
+ this->connect_node_to_parents(this->nodes_[k]);
k++;
}
}
}
}
-int FatTreeZone::connectNodeToParents(FatTreeNode* node)
+int FatTreeZone::connect_node_to_parents(FatTreeNode* node)
{
std::vector<FatTreeNode*>::iterator currentParentNode = this->nodes_.begin();
int connectionsNumber = 0;
const int level = node->level;
XBT_DEBUG("We are connecting node %d(%u,%u) to its parents.", node->id, node->level, node->position);
- currentParentNode += this->getLevelPosition(level + 1);
+ currentParentNode += this->get_level_position(level + 1);
for (unsigned int i = 0; i < this->nodes_by_level_[level + 1]; i++) {
- if (this->areRelated(*currentParentNode, node)) {
+ if (this->are_related(*currentParentNode, node)) {
XBT_DEBUG("%d(%u,%u) and %d(%u,%u) are related,"
" with %u links between them.",
node->id, node->level, node->position, (*currentParentNode)->id, (*currentParentNode)->level,
return connectionsNumber;
}
-bool FatTreeZone::areRelated(FatTreeNode* parent, FatTreeNode* child)
+bool FatTreeZone::are_related(FatTreeNode* parent, FatTreeNode* child)
{
std::stringstream msgBuffer;
return true;
}
-void FatTreeZone::generateSwitches()
+void FatTreeZone::generate_switches()
{
XBT_DEBUG("Generating switches.");
this->nodes_by_level_.resize(this->levels_ + 1, 0);
}
}
-void FatTreeZone::generateLabels()
+void FatTreeZone::generate_labels()
{
XBT_DEBUG("Generating labels.");
// TODO : check if nodesByLevel and nodes are filled
}
}
-int FatTreeZone::getLevelPosition(const unsigned int level)
+int FatTreeZone::get_level_position(const unsigned int level)
{
xbt_assert(level <= this->levels_, "The impossible did happen. Yet again.");
int tempPosition = 0;
{
unsigned int table_size = get_table_size();
- getRouteCheckParams(src, dst);
+ get_route_check_params(src, dst);
/* create a result route */
std::vector<RouteCreationArgs*> route_stack;
/* set the size of table routing */
unsigned int table_size = get_table_size();
- addRouteCheckParams(src, dst, gw_src, gw_dst, link_list, symmetrical);
+ add_route_check_params(src, dst, gw_src, gw_dst, link_list, symmetrical);
if (not link_table_) {
/* Create Cost, Predecessor and Link tables */
dst->get_cname());
TO_FLOYD_LINK(src->id(), dst->id()) =
- newExtendedRoute(hierarchy_, src, dst, gw_src, gw_dst, link_list, symmetrical, 1);
+ new_extended_route(hierarchy_, src, dst, gw_src, gw_dst, link_list, symmetrical, 1);
TO_FLOYD_PRED(src->id(), dst->id()) = src->id();
TO_FLOYD_COST(src->id(), dst->id()) = (TO_FLOYD_LINK(src->id(), dst->id()))->link_list.size();
src->get_cname(), gw_dst->get_cname());
TO_FLOYD_LINK(dst->id(), src->id()) =
- newExtendedRoute(hierarchy_, src, dst, gw_src, gw_dst, link_list, symmetrical, 0);
+ new_extended_route(hierarchy_, src, dst, gw_src, gw_dst, link_list, symmetrical, 0);
TO_FLOYD_PRED(dst->id(), src->id()) = dst->id();
TO_FLOYD_COST(dst->id(), src->id()) =
(TO_FLOYD_LINK(dst->id(), src->id()))->link_list.size(); /* count of links, old model assume 1 */
void FullZone::add_route(NetPoint* src, NetPoint* dst, NetPoint* gw_src, NetPoint* gw_dst,
std::vector<resource::LinkImpl*>& link_list, bool symmetrical)
{
- addRouteCheckParams(src, dst, gw_src, gw_dst, link_list, symmetrical);
+ add_route_check_params(src, dst, gw_src, gw_dst, link_list, symmetrical);
unsigned int table_size = get_table_size();
/* Add the route to the base */
TO_ROUTE_FULL(src->id(), dst->id()) =
- newExtendedRoute(hierarchy_, src, dst, gw_src, gw_dst, link_list, symmetrical, true);
+ new_extended_route(hierarchy_, src, dst, gw_src, gw_dst, link_list, symmetrical, true);
if (symmetrical == true && src != dst) {
if (gw_dst && gw_src) {
dst->get_cname(), src->get_cname());
TO_ROUTE_FULL(dst->id(), src->id()) =
- newExtendedRoute(hierarchy_, src, dst, gw_src, gw_dst, link_list, symmetrical, false);
+ new_extended_route(hierarchy_, src, dst, gw_src, gw_dst, link_list, symmetrical, false);
}
}
}
/* ************************************************************************** */
/* ************************* GENERIC AUX FUNCTIONS ************************** */
/* change a route containing link names into a route containing link entities */
-RouteCreationArgs* RoutedZone::newExtendedRoute(RoutingMode hierarchy, NetPoint* src, NetPoint* dst, NetPoint* gw_src,
- NetPoint* gw_dst, std::vector<resource::LinkImpl*>& link_list,
- bool symmetrical, bool change_order)
+RouteCreationArgs* RoutedZone::new_extended_route(RoutingMode hierarchy, NetPoint* src, NetPoint* dst, NetPoint* gw_src,
+ NetPoint* gw_dst, std::vector<resource::LinkImpl*>& link_list,
+ bool symmetrical, bool change_order)
{
RouteCreationArgs* result = new RouteCreationArgs();
return result;
}
-void RoutedZone::getRouteCheckParams(NetPoint* src, NetPoint* dst)
+void RoutedZone::get_route_check_params(NetPoint* src, NetPoint* dst)
{
xbt_assert(src, "Cannot find a route from nullptr to %s", dst->get_cname());
xbt_assert(dst, "Cannot find a route from %s to nullptr", src->get_cname());
"%s@%s). Please report that bug.",
src->get_cname(), dst->get_cname(), src_as->get_cname(), dst_as->get_cname(), get_cname());
}
-void RoutedZone::addRouteCheckParams(NetPoint* src, NetPoint* dst, NetPoint* gw_src, NetPoint* gw_dst,
- std::vector<resource::LinkImpl*>& link_list, bool symmetrical)
+void RoutedZone::add_route_check_params(NetPoint* src, NetPoint* dst, NetPoint* gw_src, NetPoint* gw_dst,
+ std::vector<resource::LinkImpl*>& link_list, bool symmetrical)
{
const char* srcName = src->get_cname();
const char* dstName = dst->get_cname();
return (src_coord - dst_coord) * (src_coord - dst_coord);
}
-static std::vector<double>* getCoordsFromNetpoint(NetPoint* np)
+static std::vector<double>* netpoint_get_coords(NetPoint* np)
{
simgrid::kernel::routing::vivaldi::Coords* coords = np->extension<simgrid::kernel::routing::vivaldi::Coords>();
xbt_assert(coords, "Please specify the Vivaldi coordinates of %s %s (%p)",
{
}
-void VivaldiZone::setPeerLink(NetPoint* netpoint, double bw_in, double bw_out, std::string coord)
+void VivaldiZone::set_peer_link(NetPoint* netpoint, double bw_in, double bw_out, std::string coord)
{
xbt_assert(netpoint->get_englobing_zone() == this,
"Cannot add a peer link to a netpoint that is not in this netzone");
/* Compute the extra latency due to the euclidean distance if needed */
if (lat) {
- std::vector<double>* srcCoords = getCoordsFromNetpoint(src);
- std::vector<double>* dstCoords = getCoordsFromNetpoint(dst);
+ std::vector<double>* srcCoords = netpoint_get_coords(src);
+ std::vector<double>* dstCoords = netpoint_get_coords(dst);
double euclidean_dist =
sqrt(euclidean_dist_comp(0, srcCoords, dstCoords) + euclidean_dist_comp(1, srcCoords, dstCoords)) +
-/* Copyright (c) 2008-2018. The SimGrid Team.
- * All rights reserved. */
+/* Copyright (c) 2008-2018. The SimGrid Team. All rights reserved. */
/* This program is free software; you can redistribute it and/or modify it
* under the terms of the license (GNU LGPL) which comes with this package. */
#ifndef SIMGRID_MC_ADDRESS_SPACE_H
#define SIMGRID_MC_ADDRESS_SPACE_H
-#include <cassert>
-#include <cstddef>
-#include <cstdint>
-#include <cstring>
-#include <type_traits>
-
-#include <string>
-#include <vector>
-
#include "src/mc/mc_forward.hpp"
#include "src/mc/remote/RemotePtr.hpp"
#include "simgrid/sg_config.hpp"
-#include "src/mc/ModelChecker.hpp"
#include "src/mc/ModelChecker.hpp"
#include "src/mc/PageStore.hpp"
#include "src/mc/Transition.hpp"
#include "src/mc/mc_exit.hpp"
#include "src/mc/mc_private.hpp"
#include "src/mc/mc_record.hpp"
+#include "src/mc/remote/RemoteClient.hpp"
#include "src/mc/remote/mc_protocol.h"
XBT_LOG_NEW_DEFAULT_SUBCATEGORY(mc_ModelChecker, mc, "ModelChecker");
-/* Copyright (c) 2007-2018. The SimGrid Team.
- * All rights reserved. */
+/* Copyright (c) 2007-2018. The SimGrid Team. All rights reserved. */
/* This program is free software; you can redistribute it and/or modify it
* under the terms of the license (GNU LGPL) which comes with this package. */
#include <event2/event.h>
-#include "xbt/base.h"
#include <sys/types.h>
#include "src/mc/PageStore.hpp"
-#include "src/mc/Transition.hpp"
#include "src/mc/mc_forward.hpp"
-#include "src/mc/remote/RemoteClient.hpp"
#include "src/mc/remote/mc_protocol.h"
namespace simgrid {
* in things like waitany and for associating a given value of MC_random()
* calls.
*/
-struct Transition {
+class Transition {
+public:
int pid = 0;
/* Which transition was executed for this simcall
#include "mc/mc.h"
#include "src/mc/mc_base.h"
#include "src/mc/mc_config.hpp"
+#include "src/mc/mc_forward.hpp"
#include "src/mc/mc_replay.hpp"
+#include "src/mc/remote/RemoteClient.hpp"
#include "src/simix/smx_private.hpp"
#include "src/kernel/activity/MutexImpl.hpp"
-/* Copyright (c) 2007-2018. The SimGrid Team.
- * All rights reserved. */
+/* Copyright (c) 2007-2018. The SimGrid Team. All rights reserved. */
/* This program is free software; you can redistribute it and/or modify it
* under the terms of the license (GNU LGPL) which comes with this package. */
class Member;
class Type;
class Variable;
+class Transition;
class Frame;
class ActorInformation;
#ifndef SIMGRID_MC_REPLAY_H
#define SIMGRID_MC_REPLAY_H
-#include "xbt/base.h"
-#include <string>
-
#include "src/mc/mc_config.hpp"
/** Replay path (if any) in string representation
#ifndef SIMGRID_MC_SNAPSHOT_HPP
#define SIMGRID_MC_SNAPSHOT_HPP
-#include <memory>
-#include <set>
-#include <string>
-#include <vector>
-
#include "src/mc/ModelChecker.hpp"
#include "src/mc/RegionSnapshot.hpp"
-#include "src/mc/mc_forward.hpp"
#include "src/mc/mc_unw.hpp"
+#include "src/mc/remote/RemoteClient.hpp"
// ***** Snapshot region
-/* Copyright (c) 2015-2018. The SimGrid Team.
- * All rights reserved. */
+/* Copyright (c) 2015-2018. The SimGrid Team. All rights reserved. */
/* This program is free software; you can redistribute it and/or modify it
* under the terms of the license (GNU LGPL) which comes with this package. */
#ifndef SIMGRID_MC_CHANNEL_HPP
#define SIMGRID_MC_CHANNEL_HPP
-#include <unistd.h>
+#include "src/mc/remote/mc_protocol.h"
#include <type_traits>
-#include "src/mc/remote/mc_protocol.h"
-
namespace simgrid {
namespace mc {
-/* Copyright (c) 2015-2018. The SimGrid Team.
- * All rights reserved. */
+/* Copyright (c) 2015-2018. The SimGrid Team. All rights reserved. */
/* This program is free software; you can redistribute it and/or modify it
* under the terms of the license (GNU LGPL) which comes with this package. */
#ifndef SIMGRID_MC_CLIENT_H
#define SIMGRID_MC_CLIENT_H
-#include "src/internal_config.h"
+#include "src/mc/remote/Channel.hpp"
-#include <cstddef>
#include <memory>
-#include <xbt/base.h>
-
-#include <simgrid/simix.h>
-
-#include "src/mc/remote/Channel.hpp"
-#include "src/mc/remote/mc_protocol.h"
-
namespace simgrid {
namespace mc {
#ifndef SIMGRID_MC_PROCESS_H
#define SIMGRID_MC_PROCESS_H
-#include <cstddef>
-#include <cstdint>
-
-#include <memory>
-#include <string>
-#include <type_traits>
-#include <vector>
-
-#include <sys/types.h>
-
-#include <simgrid/config.h>
-
-#include "xbt/base.h"
-#include <xbt/mmalloc.h>
-
#include "src/xbt/mmalloc/mmprivate.h"
-
#include "src/mc/remote/Channel.hpp"
-#include "src/mc/remote/RemotePtr.hpp"
-
-#include "src/simix/popping_private.hpp"
-#include "src/simix/smx_private.hpp"
-#include <simgrid/simix.h>
-
-#include "src/xbt/memory_map.hpp"
-
-#include "src/mc/AddressSpace.hpp"
#include "src/mc/ObjectInformation.hpp"
-#include "src/mc/mc_base.h"
-#include "src/mc/mc_forward.hpp"
-#include "src/mc/remote/mc_protocol.h"
+
+#include <vector>
namespace simgrid {
namespace mc {
-/* Copyright (c) 2008-2018. The SimGrid Team.
- * All rights reserved. */
+/* Copyright (c) 2008-2018. The SimGrid Team. All rights reserved. */
/* This program is free software; you can redistribute it and/or modify it
* under the terms of the license (GNU LGPL) which comes with this package. */
#ifndef SIMGRID_MC_REMOTE_PTR_HPP
#define SIMGRID_MC_REMOTE_PTR_HPP
-#include <cstddef>
-#include <cstdint>
-#include <cstring>
-
-#include <stdexcept>
-#include <type_traits>
+#include "src/simix/smx_private.hpp"
namespace simgrid {
namespace mc {
#ifndef SIMGRID_MC_PROTOCOL_H
#define SIMGRID_MC_PROTOCOL_H
-#include <stdint.h>
-
-#include <xbt/base.h>
-
#include "mc/datatypes.h"
#include "simgrid/forward.h"
atexit(MSG_exit);
}
-/** \ingroup msg_simulation
- * \brief Launch the MSG simulation
- */
-msg_error_t MSG_main()
-{
- /* Clean IO before the run */
- fflush(stdout);
- fflush(stderr);
-
- if (MC_is_active()) {
- MC_run();
- } else {
- SIMIX_run();
- }
- return MSG_OK;
-}
-
/** \ingroup msg_simulation
* \brief set a configuration variable
*
return MSG_task_send_with_timeout(task, alias, timeout);
}
-/** \ingroup msg_task_usage
- * \brief Check if there is a communication going on in a mailbox.
- *
- * \param alias the name of the mailbox to be considered
- *
- * \return Returns 1 if there is a communication, 0 otherwise
- */
-int MSG_task_listen(const char *alias)
-{
- simgrid::s4u::MailboxPtr mbox = simgrid::s4u::Mailbox::by_name(alias);
- return mbox->listen() ? 1 : 0;
-}
-
/** \ingroup msg_task_usage
* \brief Look if there is a communication on a mailbox and return the PID of the sender process.
*
{
sg_engine_load_deployment(filename);
}
+msg_error_t MSG_main()
+{
+ sg_engine_run();
+ return MSG_OK;
+}
void MSG_function_register(const char* name, xbt_main_func_t code)
{
sg_engine_register_function(name, code);
{
return sg_engine_get_clock();
}
+
+/* ************************** Mailboxes ************************ */
+void MSG_mailbox_set_async(const char* alias)
+{
+ sg_mailbox_set_receiver(alias);
+}
+int MSG_task_listen(const char* alias)
+{
+ return sg_mailbox_listen(alias);
+}
+
/* ************************** Actors *************************** */
int MSG_process_get_PID(sg_actor_t actor)
{
+++ /dev/null
-/* Mailboxes in MSG */
-
-/* Copyright (c) 2008-2018. The SimGrid Team. All rights reserved. */
-
-/* This program is free software; you can redistribute it and/or modify it
- * under the terms of the license (GNU LGPL) which comes with this package. */
-
-#include "simgrid/s4u/Mailbox.hpp"
-#include "src/msg/msg_private.hpp"
-
-XBT_LOG_NEW_DEFAULT_SUBCATEGORY(msg_mailbox, msg, "Logging specific to MSG (mailbox)");
-
-/** \ingroup msg_mailbox_management
- * \brief Set the mailbox to receive in asynchronous mode
- *
- * All messages sent to this mailbox will be transferred to the receiver without waiting for the receive call.
- * The receive call will still be necessary to use the received data.
- * If there is a need to receive some messages asynchronously, and some not, two different mailboxes should be used.
- *
- * \param alias The name of the mailbox
- */
-void MSG_mailbox_set_async(const char *alias){
- simgrid::s4u::Mailbox::by_name(alias)->set_receiver(simgrid::s4u::Actor::self());
- XBT_VERB("%s mailbox set to receive eagerly for myself\n",alias);
-}
namespace simgrid {
namespace vm {
-class VmDirtyPageTrackingExt {
- bool dp_tracking = false;
- std::map<kernel::activity::ExecImplPtr, double> dp_objs;
- double dp_updated_by_deleted_tasks = 0.0;
+class DirtyPageTrackingExt {
+ bool dp_tracking_ = false;
+ std::map<kernel::activity::ExecImplPtr, double> dp_objs_;
+ double dp_updated_by_deleted_tasks_ = 0.0;
// Percentage of pages that get dirty compared to netspeed [0;1] bytes per 1 flop execution
- double dp_intensity = 0.0;
- sg_size_t working_set_memory = 0.0;
- double max_downtime = 0.03;
- double mig_speed = 0.0;
+ double dp_intensity_ = 0.0;
+ sg_size_t working_set_memory_ = 0;
+ double max_downtime_ = 0.03;
+ double mig_speed_ = 0.0;
public:
void start_tracking();
- void stop_tracking() { dp_tracking = false; }
- bool is_tracking() { return dp_tracking; }
- void track(kernel::activity::ExecImplPtr exec, double amount) { dp_objs.insert({exec, amount}); }
- void untrack(kernel::activity::ExecImplPtr exec) { dp_objs.erase(exec); }
- double get_stored_remains(kernel::activity::ExecImplPtr exec) { return dp_objs.at(exec); }
- void update_dirty_page_count(double delta) { dp_updated_by_deleted_tasks += delta; }
+ void stop_tracking() { dp_tracking_ = false; }
+ bool is_tracking() { return dp_tracking_; }
+ void track(kernel::activity::ExecImplPtr exec, double amount) { dp_objs_.insert({exec, amount}); }
+ void untrack(kernel::activity::ExecImplPtr exec) { dp_objs_.erase(exec); }
+ double get_stored_remains(kernel::activity::ExecImplPtr exec) { return dp_objs_.at(exec); }
+ void update_dirty_page_count(double delta) { dp_updated_by_deleted_tasks_ += delta; }
double computed_flops_lookup();
- double get_intensity() { return dp_intensity; }
- void set_intensity(double intensity) { dp_intensity = intensity; }
- double get_working_set_memory() { return working_set_memory; }
- void set_working_set_memory(sg_size_t size) { working_set_memory = size; }
- void set_migration_speed(double speed) { mig_speed = speed; }
- double get_migration_speed() { return mig_speed; }
- double get_max_downtime() { return max_downtime; }
-
- static simgrid::xbt::Extension<VirtualMachineImpl, VmDirtyPageTrackingExt> EXTENSION_ID;
- virtual ~VmDirtyPageTrackingExt() = default;
- VmDirtyPageTrackingExt() = default;
+ double get_intensity() { return dp_intensity_; }
+ void set_intensity(double intensity) { dp_intensity_ = intensity; }
+ double get_working_set_memory() { return working_set_memory_; }
+ void set_working_set_memory(sg_size_t size) { working_set_memory_ = size; }
+ void set_migration_speed(double speed) { mig_speed_ = speed; }
+ double get_migration_speed() { return mig_speed_; }
+ double get_max_downtime() { return max_downtime_; }
+
+ static simgrid::xbt::Extension<VirtualMachineImpl, DirtyPageTrackingExt> EXTENSION_ID;
+ virtual ~DirtyPageTrackingExt() = default;
+ DirtyPageTrackingExt() = default;
};
-simgrid::xbt::Extension<VirtualMachineImpl, VmDirtyPageTrackingExt> VmDirtyPageTrackingExt::EXTENSION_ID;
+simgrid::xbt::Extension<VirtualMachineImpl, DirtyPageTrackingExt> DirtyPageTrackingExt::EXTENSION_ID;
-void VmDirtyPageTrackingExt::start_tracking()
+void DirtyPageTrackingExt::start_tracking()
{
- dp_tracking = true;
- for (auto const& elm : dp_objs)
- dp_objs[elm.first] = elm.first->get_remaining();
+ dp_tracking_ = true;
+ for (auto const& elm : dp_objs_)
+ dp_objs_[elm.first] = elm.first->get_remaining();
}
-double VmDirtyPageTrackingExt::computed_flops_lookup()
+double DirtyPageTrackingExt::computed_flops_lookup()
{
double total = 0;
- for (auto const& elm : dp_objs) {
+ for (auto const& elm : dp_objs_) {
total += elm.second - elm.first->get_remaining();
- dp_objs[elm.first] = elm.first->get_remaining();
+ dp_objs_[elm.first] = elm.first->get_remaining();
}
- total += dp_updated_by_deleted_tasks;
+ total += dp_updated_by_deleted_tasks_;
- dp_updated_by_deleted_tasks = 0;
+ dp_updated_by_deleted_tasks_ = 0;
return total;
}
} // namespace vm
} // namespace simgrid
-static void onVirtualMachineCreation(simgrid::vm::VirtualMachineImpl* vm)
+static void on_virtual_machine_creation(simgrid::vm::VirtualMachineImpl* vm)
{
- vm->extension_set<simgrid::vm::VmDirtyPageTrackingExt>(new simgrid::vm::VmDirtyPageTrackingExt());
+ vm->extension_set<simgrid::vm::DirtyPageTrackingExt>(new simgrid::vm::DirtyPageTrackingExt());
}
-static void onExecCreation(simgrid::kernel::activity::ExecImplPtr exec)
+static void on_exec_creation(simgrid::kernel::activity::ExecImplPtr exec)
{
simgrid::s4u::VirtualMachine* vm = dynamic_cast<simgrid::s4u::VirtualMachine*>(exec->host_);
if (vm == nullptr)
return;
- if (vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->is_tracking()) {
- vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->track(exec, exec->get_remaining());
+ if (vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->is_tracking()) {
+ vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->track(exec, exec->get_remaining());
} else {
- vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->track(exec, 0.0);
+ vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->track(exec, 0.0);
}
}
-static void onExecCompletion(simgrid::kernel::activity::ExecImplPtr exec)
+static void on_exec_completion(simgrid::kernel::activity::ExecImplPtr exec)
{
simgrid::s4u::VirtualMachine* vm = dynamic_cast<simgrid::s4u::VirtualMachine*>(exec->host_);
if (vm == nullptr)
/* If we are in the middle of dirty page tracking, we record how much computation has been done until now, and keep
 * the information for the lookup_() function that will be called soon. */
- if (vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->is_tracking()) {
- double delta = vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->get_stored_remains(exec) -
+ if (vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->is_tracking()) {
+ double delta = vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->get_stored_remains(exec) -
exec->get_remaining();
- vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->update_dirty_page_count(delta);
+ vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->update_dirty_page_count(delta);
}
- vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->untrack(exec);
+ vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->untrack(exec);
}
void sg_vm_dirty_page_tracking_init()
{
- if (not simgrid::vm::VmDirtyPageTrackingExt::EXTENSION_ID.valid()) {
- simgrid::vm::VmDirtyPageTrackingExt::EXTENSION_ID =
- simgrid::vm::VirtualMachineImpl::extension_create<simgrid::vm::VmDirtyPageTrackingExt>();
- simgrid::vm::VirtualMachineImpl::on_creation.connect(&onVirtualMachineCreation);
- simgrid::kernel::activity::ExecImpl::onCreation.connect(&onExecCreation);
- simgrid::kernel::activity::ExecImpl::onCompletion.connect(&onExecCompletion);
+ if (not simgrid::vm::DirtyPageTrackingExt::EXTENSION_ID.valid()) {
+ simgrid::vm::DirtyPageTrackingExt::EXTENSION_ID =
+ simgrid::vm::VirtualMachineImpl::extension_create<simgrid::vm::DirtyPageTrackingExt>();
+ simgrid::vm::VirtualMachineImpl::on_creation.connect(&on_virtual_machine_creation);
+ simgrid::kernel::activity::ExecImpl::onCreation.connect(&on_exec_creation);
+ simgrid::kernel::activity::ExecImpl::onCompletion.connect(&on_exec_completion);
}
}
void sg_vm_start_dirty_page_tracking(sg_vm_t vm)
{
- vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->start_tracking();
+ vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->start_tracking();
}
void sg_vm_stop_dirty_page_tracking(sg_vm_t vm)
{
- vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->stop_tracking();
+ vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->stop_tracking();
}
double sg_vm_lookup_computed_flops(sg_vm_t vm)
{
- return vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->computed_flops_lookup();
+ return vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->computed_flops_lookup();
}
void sg_vm_set_dirty_page_intensity(sg_vm_t vm, double intensity)
{
- vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->set_intensity(intensity);
+ vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->set_intensity(intensity);
}
double sg_vm_get_dirty_page_intensity(sg_vm_t vm)
{
- return vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->get_intensity();
+ return vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->get_intensity();
}
void sg_vm_set_working_set_memory(sg_vm_t vm, sg_size_t size)
{
- vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->set_working_set_memory(size);
+ vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->set_working_set_memory(size);
}
sg_size_t sg_vm_get_working_set_memory(sg_vm_t vm)
{
- return vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->get_working_set_memory();
+ return vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->get_working_set_memory();
}
void sg_vm_set_migration_speed(sg_vm_t vm, double speed)
{
- vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->set_migration_speed(speed);
+ vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->set_migration_speed(speed);
}
double sg_vm_get_migration_speed(sg_vm_t vm)
{
- return vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->get_migration_speed();
+ return vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->get_migration_speed();
}
double sg_vm_get_max_downtime(sg_vm_t vm)
{
- return vm->get_impl()->extension<simgrid::vm::VmDirtyPageTrackingExt>()->get_max_downtime();
+ return vm->get_impl()->extension<simgrid::vm::DirtyPageTrackingExt>()->get_max_downtime();
}
private:
simgrid::s4u::Host* const host_;
+ double sampling_rate_;
protected:
simgrid::s4u::Host* get_host() const { return host_; }
public:
- double sampling_rate;
explicit Governor(simgrid::s4u::Host* ptr) : host_(ptr) { init(); }
virtual ~Governor() = default;
+ virtual std::string get_name() = 0;
void init()
{
const char* local_sampling_rate_config = host_->get_property(property_sampling_rate);
double global_sampling_rate_config = simgrid::config::get_value<double>(property_sampling_rate);
if (local_sampling_rate_config != nullptr) {
- sampling_rate = std::stod(local_sampling_rate_config);
+ sampling_rate_ = std::stod(local_sampling_rate_config);
} else {
- sampling_rate = global_sampling_rate_config;
+ sampling_rate_ = global_sampling_rate_config;
}
}
virtual void update() = 0;
- virtual std::string getName() = 0;
- double samplingRate() { return sampling_rate; }
+ double get_sampling_rate() { return sampling_rate_; }
};
/**
class Performance : public Governor {
public:
explicit Performance(simgrid::s4u::Host* ptr) : Governor(ptr) {}
+ std::string get_name() override { return "Performance"; }
void update() override { get_host()->set_pstate(0); }
- std::string getName() override { return "Performance"; }
};
/**
class Powersave : public Governor {
public:
explicit Powersave(simgrid::s4u::Host* ptr) : Governor(ptr) {}
+ std::string get_name() override { return "Powersave"; }
void update() override { get_host()->set_pstate(get_host()->get_pstate_count() - 1); }
- std::string getName() override { return "Powersave"; }
};
/**
* See https://elixir.bootlin.com/linux/v4.15.4/source/drivers/cpufreq/cpufreq_ondemand.c
* DEF_FREQUENCY_UP_THRESHOLD and od_update()
*/
- double freq_up_threshold = 0.80;
+ double freq_up_threshold_ = 0.80;
public:
explicit OnDemand(simgrid::s4u::Host* ptr) : Governor(ptr) {}
+ std::string get_name() override { return "OnDemand"; }
- std::string getName() override { return "OnDemand"; }
void update() override
{
double load = get_host()->get_core_count() * sg_host_get_avg_load(get_host());
sg_host_load_reset(get_host()); // Only consider the period between two calls to this method!
- if (load > freq_up_threshold) {
+ if (load > freq_up_threshold_) {
get_host()->set_pstate(0); /* Run at max. performance! */
- XBT_INFO("Load: %f > threshold: %f --> changed to pstate %i", load, freq_up_threshold, 0);
+ XBT_INFO("Load: %f > threshold: %f --> changed to pstate %i", load, freq_up_threshold_, 0);
} else {
/* The actual implementation uses a formula here: (See Kernel file cpufreq_ondemand.c:158)
*
*/
int max_pstate = get_host()->get_pstate_count() - 1;
// Load is now < freq_up_threshold; exclude pstate 0 (the fastest)
- // because pstate 0 can only be selected if load > freq_up_threshold
+ // because pstate 0 can only be selected if load > freq_up_threshold_
int new_pstate = max_pstate - load * (max_pstate + 1);
get_host()->set_pstate(new_pstate);
- XBT_DEBUG("Load: %f < threshold: %f --> changed to pstate %i", load, freq_up_threshold, new_pstate);
+ XBT_DEBUG("Load: %f < threshold: %f --> changed to pstate %i", load, freq_up_threshold_, new_pstate);
}
}
};
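For review purposes, the OnDemand branch above reduces to a small pure function. This is a minimal standalone sketch of that formula only; the helper name and the defaulted threshold parameter are ours, not part of the plugin's API:

```cpp
#include <cassert>

// Sketch of the OnDemand pstate selection from the governor above.
// pstate 0 is the fastest, pstate_count - 1 the slowest.
int ondemand_pstate(double load, int pstate_count, double freq_up_threshold = 0.80)
{
  if (load > freq_up_threshold)
    return 0; // run at maximum performance
  int max_pstate = pstate_count - 1;
  // Map the load linearly onto the slower pstates; the truncation mirrors the
  // implicit double-to-int conversion in the governor's update() method.
  return static_cast<int>(max_pstate - load * (max_pstate + 1));
}
```

With 5 pstates, a load above the threshold pins pstate 0, a load of 0.5 lands on pstate 1, and an idle host falls back to the slowest pstate 4.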
* > environment.
*/
class Conservative : public Governor {
- double freq_up_threshold = .8;
- double freq_down_threshold = .2;
+ double freq_up_threshold_ = .8;
+ double freq_down_threshold_ = .2;
public:
explicit Conservative(simgrid::s4u::Host* ptr) : Governor(ptr) {}
+ virtual std::string get_name() override { return "Conservative"; }
- virtual std::string getName() override { return "Conservative"; }
virtual void update() override
{
double load = get_host()->get_core_count() * sg_host_get_avg_load(get_host());
int pstate = get_host()->get_pstate();
sg_host_load_reset(get_host()); // Only consider the period between two calls to this method!
- if (load > freq_up_threshold) {
+ if (load > freq_up_threshold_) {
if (pstate != 0) {
get_host()->set_pstate(pstate - 1);
- XBT_INFO("Load: %f > threshold: %f -> increasing performance to pstate %d", load, freq_up_threshold,
+ XBT_INFO("Load: %f > threshold: %f -> increasing performance to pstate %d", load, freq_up_threshold_,
pstate - 1);
} else {
XBT_DEBUG("Load: %f > threshold: %f -> but cannot speed up even more, already in highest pstate %d", load,
- freq_up_threshold, pstate);
+ freq_up_threshold_, pstate);
}
- } else if (load < freq_down_threshold) {
+ } else if (load < freq_down_threshold_) {
int max_pstate = get_host()->get_pstate_count() - 1;
if (pstate != max_pstate) { // Are we in the slowest pstate already?
get_host()->set_pstate(pstate + 1);
- XBT_INFO("Load: %f < threshold: %f -> slowing down to pstate %d", load, freq_down_threshold, pstate + 1);
+ XBT_INFO("Load: %f < threshold: %f -> slowing down to pstate %d", load, freq_down_threshold_, pstate + 1);
} else {
XBT_DEBUG("Load: %f < threshold: %f -> cannot slow down even more, already in slowest pstate %d", load,
- freq_down_threshold, pstate);
+ freq_down_threshold_, pstate);
}
}
}
public:
static simgrid::xbt::Extension<simgrid::s4u::Host, HostDvfs> EXTENSION_ID;
- explicit HostDvfs(simgrid::s4u::Host*);
- ~HostDvfs();
+ explicit HostDvfs(simgrid::s4u::Host*) {}
+ ~HostDvfs() = default;
};
simgrid::xbt::Extension<simgrid::s4u::Host, HostDvfs> HostDvfs::EXTENSION_ID;
-HostDvfs::HostDvfs(simgrid::s4u::Host* ptr) {}
-
-HostDvfs::~HostDvfs() = default;
} // namespace dvfs
} // namespace plugin
} // namespace simgrid
// Sleep *before* updating; important for startup (i.e., t = 0).
// In the beginning, we want to go with the pstates specified in the platform file
// (so we sleep first)
- simgrid::s4u::this_actor::sleep_for(governor->samplingRate());
+ simgrid::s4u::this_actor::sleep_for(governor->get_sampling_rate());
governor->update();
- XBT_DEBUG("Governor (%s) just updated!", governor->getName().c_str());
+ XBT_DEBUG("Governor (%s) just updated!", governor->get_name().c_str());
}
XBT_WARN("I should have never reached this point: daemons should be killed when all regular processes are done");
return 0;
});
- // This call must be placed in this function. Otherweise, the daemonize() call comes too late and
+ // This call must be placed in this function. Otherwise, the daemonize() call comes too late and
// SMPI will take this process as an MPI process!
daemon->daemonize();
}
class PowerRange {
public:
- double idle;
- double min;
- double max;
+ double idle_;
+ double min_;
+ double max_;
- PowerRange(double idle, double min, double max) : idle(idle), min(min), max(max) {}
+ PowerRange(double idle, double min, double max) : idle_(idle), min_(min), max_(max) {}
};
class HostEnergy {
explicit HostEnergy(simgrid::s4u::Host* ptr);
~HostEnergy();
- double getCurrentWattsValue();
- double getCurrentWattsValue(double cpu_load);
- double getConsumedEnergy();
- double getWattMinAt(int pstate);
- double getWattMaxAt(int pstate);
+ double get_current_watts_value();
+ double get_current_watts_value(double cpu_load);
+ double get_consumed_energy();
+ double get_watt_min_at(int pstate);
+ double get_watt_max_at(int pstate);
void update();
private:
- void initWattsRangeList();
- simgrid::s4u::Host* host = nullptr;
- std::vector<PowerRange>
- power_range_watts_list; /*< List of (min_power,max_power) pairs corresponding to each cpu pstate */
+ void init_watts_range_list();
+ simgrid::s4u::Host* host_ = nullptr;
+ /*< List of (min_power,max_power) pairs corresponding to each cpu pstate */
+ std::vector<PowerRange> power_range_watts_list_;
/* We need to keep track of what pstate has been used, as we will sometimes be notified only *after* a pstate has been
* used (but we need to update the energy consumption with the old pstate!)
*/
- int pstate = 0;
- const int pstate_off = -1;
+ int pstate_ = 0;
+ const int pstate_off_ = -1;
public:
- double watts_off = 0.0; /*< Consumption when the machine is turned off (shutdown) */
- double total_energy = 0.0; /*< Total energy consumed by the host */
- double last_updated; /*< Timestamp of the last energy update event*/
+ double watts_off_ = 0.0; /*< Consumption when the machine is turned off (shutdown) */
+ double total_energy_ = 0.0; /*< Total energy consumed by the host */
+ double last_updated_; /*< Timestamp of the last energy update event*/
};
simgrid::xbt::Extension<simgrid::s4u::Host, HostEnergy> HostEnergy::EXTENSION_ID;
/* Computes the consumption so far. Called lazily on need. */
void HostEnergy::update()
{
- double start_time = this->last_updated;
+ double start_time = this->last_updated_;
double finish_time = surf_get_clock();
if (start_time < finish_time) {
- double previous_energy = this->total_energy;
+ double previous_energy = this->total_energy_;
- double instantaneous_consumption = this->getCurrentWattsValue();
+ double instantaneous_consumption = this->get_current_watts_value();
double energy_this_step = instantaneous_consumption * (finish_time - start_time);
// TODO Trace: Trace energy_this_step from start_time to finish_time in host->getName()
- this->total_energy = previous_energy + energy_this_step;
- this->last_updated = finish_time;
+ this->total_energy_ = previous_energy + energy_this_step;
+ this->last_updated_ = finish_time;
XBT_DEBUG("[update_energy of %s] period=[%.2f-%.2f]; current power peak=%.0E flop/s; consumption change: %.2f J -> "
"%.2f J",
- host->get_cname(), start_time, finish_time, host->pimpl_cpu->get_speed(1.0), previous_energy,
+ host_->get_cname(), start_time, finish_time, host_->pimpl_cpu->get_speed(1.0), previous_energy,
energy_this_step);
}
/* Save data for the upcoming time interval: whether it's on/off and the pstate if it's on */
- this->pstate = host->is_on() ? host->get_pstate() : pstate_off;
+ this->pstate_ = host_->is_on() ? host_->get_pstate() : pstate_off_;
}
-HostEnergy::HostEnergy(simgrid::s4u::Host* ptr) : host(ptr), last_updated(surf_get_clock())
+HostEnergy::HostEnergy(simgrid::s4u::Host* ptr) : host_(ptr), last_updated_(surf_get_clock())
{
- initWattsRangeList();
+ init_watts_range_list();
- const char* off_power_str = host->get_property("watt_off");
+ const char* off_power_str = host_->get_property("watt_off");
if (off_power_str != nullptr) {
try {
- this->watts_off = std::stod(std::string(off_power_str));
+ this->watts_off_ = std::stod(std::string(off_power_str));
} catch (std::invalid_argument& ia) {
- throw std::invalid_argument(std::string("Invalid value for property watt_off of host ") + host->get_cname() +
+ throw std::invalid_argument(std::string("Invalid value for property watt_off of host ") + host_->get_cname() +
": " + off_power_str);
}
}
HostEnergy::~HostEnergy() = default;
-double HostEnergy::getWattMinAt(int pstate)
+double HostEnergy::get_watt_min_at(int pstate)
{
- xbt_assert(not power_range_watts_list.empty(), "No power range properties specified for host %s", host->get_cname());
- return power_range_watts_list[pstate].min;
+ xbt_assert(not power_range_watts_list_.empty(), "No power range properties specified for host %s",
+ host_->get_cname());
+ return power_range_watts_list_[pstate].min_;
}
-double HostEnergy::getWattMaxAt(int pstate)
+double HostEnergy::get_watt_max_at(int pstate)
{
- xbt_assert(not power_range_watts_list.empty(), "No power range properties specified for host %s", host->get_cname());
- return power_range_watts_list[pstate].max;
+ xbt_assert(not power_range_watts_list_.empty(), "No power range properties specified for host %s",
+ host_->get_cname());
+ return power_range_watts_list_[pstate].max_;
}
/** @brief Computes the power consumed by the host according to the current situation
*
* - If the host is off, that's the watts_off value
 * - If it's on, take the current pstate and the current processor load into account */
-double HostEnergy::getCurrentWattsValue()
+double HostEnergy::get_current_watts_value()
{
- if (this->pstate == pstate_off) // The host is off (or was off at the beginning of this time interval)
- return this->watts_off;
+ if (this->pstate_ == pstate_off_) // The host is off (or was off at the beginning of this time interval)
+ return this->watts_off_;
- double current_speed = host->getSpeed();
+ double current_speed = host_->getSpeed();
double cpu_load;
// We may have start == finish if the past consumption was updated since the simcall was started
// We consider that the machine is then fully loaded. That's arbitrary but it avoids a NaN
cpu_load = 1;
else
- cpu_load = host->pimpl_cpu->get_constraint()->get_usage() / current_speed;
+ cpu_load = host_->pimpl_cpu->get_constraint()->get_usage() / current_speed;
/** Divide by the number of cores here **/
- cpu_load /= host->pimpl_cpu->get_core_count();
+ cpu_load /= host_->pimpl_cpu->get_core_count();
if (cpu_load > 1) // A machine with a load > 1 consumes as much as a fully loaded machine, not more
cpu_load = 1;
*
* where X is the amount of idling cores, and Y the amount of computing cores.
*/
- return getCurrentWattsValue(cpu_load);
+ return get_current_watts_value(cpu_load);
}
/** @brief Computes the power that the host would consume at the provided processor load
*
* Whether the host is ON or OFF is not taken into account.
*/
-double HostEnergy::getCurrentWattsValue(double cpu_load)
+double HostEnergy::get_current_watts_value(double cpu_load)
{
- xbt_assert(not power_range_watts_list.empty(), "No power range properties specified for host %s", host->get_cname());
+ xbt_assert(not power_range_watts_list_.empty(), "No power range properties specified for host %s",
+ host_->get_cname());
/* Return watts_off if pstate == pstate_off (i.e., if the host is off) */
- if (this->pstate == pstate_off) {
- return watts_off;
+ if (this->pstate_ == pstate_off_) {
+ return watts_off_;
}
/* min_power corresponds to the power consumed when only one core is active */
/* max_power is the power consumed at 100% cpu load */
- auto range = power_range_watts_list.at(this->pstate);
+ auto range = power_range_watts_list_.at(this->pstate_);
double current_power = 0;
double min_power = 0;
double max_power = 0;
double power_slope = 0;
if (cpu_load > 0) { /* Something is going on, the machine is not idle */
- double min_power = range.min;
- double max_power = range.max;
+ double min_power = range.min_;
+ double max_power = range.max_;
/**
* The min_power states how much we consume when only one single
* (maxCpuLoad is by definition 1)
*/
double power_slope;
- int coreCount = host->get_core_count();
+ int coreCount = host_->get_core_count();
double coreReciprocal = static_cast<double>(1) / static_cast<double>(coreCount);
if (coreCount > 1)
power_slope = (max_power - min_power) / (1 - coreReciprocal);
current_power = min_power + (cpu_load - coreReciprocal) * power_slope;
} else { /* Our machine is idle, take the dedicated value! */
- current_power = range.idle;
+ current_power = range.idle_;
}
XBT_DEBUG("[get_current_watts] min_power=%f, max_power=%f, slope=%f", min_power, max_power, power_slope);
return current_power;
}
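The interpolation above can be checked in isolation. Below is a minimal sketch of the linear power model with assumed parameter names (the plugin itself reads these values from the host's `watt_per_state` property); the single-core case, elided in this chunk, is assumed to use a zero slope:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the linear power model: `idle` watts at load 0, `one_core` watts
// when a single core is busy, interpolated up to `all_cores` at full load.
double watts_at_load(double cpu_load, int core_count, double idle, double one_core, double all_cores)
{
  if (cpu_load <= 0)
    return idle; // the machine is idle: take the dedicated value
  double core_reciprocal = 1.0 / core_count;
  // Slope between the one-core point (load = 1/core_count) and full load (1.0).
  double slope = (core_count > 1) ? (all_cores - one_core) / (1 - core_reciprocal) : 0.0;
  return one_core + (cpu_load - core_reciprocal) * slope;
}
```

For a 4-core host with an `100:120:200` profile, this yields 100 W idle, 120 W with one busy core (load 0.25), and 200 W at full load.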
-double HostEnergy::getConsumedEnergy()
+double HostEnergy::get_consumed_energy()
{
- if (last_updated < surf_get_clock()) // We need to simcall this as it modifies the environment
+ if (last_updated_ < surf_get_clock()) // We need to simcall this as it modifies the environment
simgrid::simix::simcall(std::bind(&HostEnergy::update, this));
- return total_energy;
+ return total_energy_;
}
-void HostEnergy::initWattsRangeList()
+void HostEnergy::init_watts_range_list()
{
- const char* all_power_values_str = host->get_property("watt_per_state");
+ const char* all_power_values_str = host_->get_property("watt_per_state");
if (all_power_values_str == nullptr)
return;
std::vector<std::string> all_power_values;
boost::split(all_power_values, all_power_values_str, boost::is_any_of(","));
- XBT_DEBUG("%s: profile: %s, cores: %d", host->get_cname(), all_power_values_str, host->get_core_count());
+ XBT_DEBUG("%s: profile: %s, cores: %d", host_->get_cname(), all_power_values_str, host_->get_core_count());
int i = 0;
for (auto const& current_power_values_str : all_power_values) {
/* retrieve the power values associated with the current pstate */
std::vector<std::string> current_power_values;
boost::split(current_power_values, current_power_values_str, boost::is_any_of(":"));
- if (host->get_core_count() == 1) {
+ if (host_->get_core_count() == 1) {
xbt_assert(current_power_values.size() == 2 || current_power_values.size() == 3,
"Power properties incorrectly defined for host %s. "
"It should be 'Idle:FullSpeed' power values because you have one core only.",
- host->get_cname());
+ host_->get_cname());
if (current_power_values.size() == 2) {
// In this case, 1core == AllCores
current_power_values.push_back(current_power_values.at(1));
"The energy profile of mono-cores should be formatted as 'Idle:FullSpeed' only.\n"
"If you go for a 'Idle:OneCore:AllCores' power profile on mono-cores, then OneCore and AllCores "
"must be equal.",
- host->get_cname());
+ host_->get_cname());
}
} else {
xbt_assert(current_power_values.size() == 3,
"Power properties incorrectly defined for host %s. "
"It should be 'Idle:OneCore:AllCores' power values because you have more than one core.",
- host->get_cname());
+ host_->get_cname());
}
/* min_power corresponds to the idle power (cpu load = 0) */
/* max_power is the power consumed at 100% cpu load */
- char* msg_idle = bprintf("Invalid idle value for pstate %d on host %s: %%s", i, host->get_cname());
- char* msg_min = bprintf("Invalid OneCore value for pstate %d on host %s: %%s", i, host->get_cname());
- char* msg_max = bprintf("Invalid AllCores value for pstate %d on host %s: %%s", i, host->get_cname());
+ char* msg_idle = bprintf("Invalid idle value for pstate %d on host %s: %%s", i, host_->get_cname());
+ char* msg_min = bprintf("Invalid OneCore value for pstate %d on host %s: %%s", i, host_->get_cname());
+ char* msg_max = bprintf("Invalid AllCores value for pstate %d on host %s: %%s", i, host_->get_cname());
PowerRange range(xbt_str_parse_double((current_power_values.at(0)).c_str(), msg_idle),
xbt_str_parse_double((current_power_values.at(1)).c_str(), msg_min),
xbt_str_parse_double((current_power_values.at(2)).c_str(), msg_max));
- power_range_watts_list.push_back(range);
+ power_range_watts_list_.push_back(range);
xbt_free(msg_idle);
xbt_free(msg_min);
xbt_free(msg_max);
using simgrid::plugin::HostEnergy;
/* **************************** events callback *************************** */
-static void onCreation(simgrid::s4u::Host& host)
+static void on_creation(simgrid::s4u::Host& host)
{
if (dynamic_cast<simgrid::s4u::VirtualMachine*>(&host)) // Ignore virtual machines
return;
host.extension_set(new HostEnergy(&host));
}
-static void onActionStateChange(simgrid::surf::CpuAction* action)
+static void on_action_state_change(simgrid::surf::CpuAction* action)
{
for (simgrid::surf::Cpu* const& cpu : action->cpus()) {
simgrid::s4u::Host* host = cpu->get_host();
// Get the host_energy extension for the relevant host
HostEnergy* host_energy = host->extension<HostEnergy>();
- if (host_energy->last_updated < surf_get_clock())
+ if (host_energy->last_updated_ < surf_get_clock())
host_energy->update();
}
}
/* This callback is fired either when the host changes its state (on/off) ("on_state_change") or its speed
 * (because the user changed the pstate, or because of external trace events) ("on_speed_change") */
-static void onHostChange(simgrid::s4u::Host& host)
+static void on_host_change(simgrid::s4u::Host& host)
{
if (dynamic_cast<simgrid::s4u::VirtualMachine*>(&host)) // Ignore virtual machines
return;
host_energy->update();
}
-static void onHostDestruction(simgrid::s4u::Host& host)
+static void on_host_destruction(simgrid::s4u::Host& host)
{
if (dynamic_cast<simgrid::s4u::VirtualMachine*>(&host)) // Ignore virtual machines
return;
XBT_INFO("Energy consumption of host %s: %f Joules", host.get_cname(),
- host.extension<HostEnergy>()->getConsumedEnergy());
+ host.extension<HostEnergy>()->get_consumed_energy());
}
-static void onSimulationEnd()
+static void on_simulation_end()
{
std::vector<simgrid::s4u::Host*> hosts = simgrid::s4u::Engine::get_instance()->get_all_hosts();
if (dynamic_cast<simgrid::s4u::VirtualMachine*>(hosts[i]) == nullptr) { // Ignore virtual machines
bool host_was_used = (sg_host_get_computed_flops(hosts[i]) != 0);
- double energy = hosts[i]->extension<HostEnergy>()->getConsumedEnergy();
+ double energy = hosts[i]->extension<HostEnergy>()->get_consumed_energy();
total_energy += energy;
if (host_was_used)
used_hosts_energy += energy;
HostEnergy::EXTENSION_ID = simgrid::s4u::Host::extension_create<HostEnergy>();
- simgrid::s4u::Host::on_creation.connect(&onCreation);
- simgrid::s4u::Host::on_state_change.connect(&onHostChange);
- simgrid::s4u::Host::on_speed_change.connect(&onHostChange);
- simgrid::s4u::Host::on_destruction.connect(&onHostDestruction);
- simgrid::s4u::on_simulation_end.connect(&onSimulationEnd);
- simgrid::surf::CpuAction::on_state_change.connect(&onActionStateChange);
+ simgrid::s4u::Host::on_creation.connect(&on_creation);
+ simgrid::s4u::Host::on_state_change.connect(&on_host_change);
+ simgrid::s4u::Host::on_speed_change.connect(&on_host_change);
+ simgrid::s4u::Host::on_destruction.connect(&on_host_destruction);
+ simgrid::s4u::on_simulation_end.connect(&on_simulation_end);
+ simgrid::surf::CpuAction::on_state_change.connect(&on_action_state_change);
}
/** @ingroup plugin_energy
{
xbt_assert(HostEnergy::EXTENSION_ID.valid(),
"The Energy plugin is not active. Please call sg_host_energy_plugin_init() during initialization.");
- return host->extension<HostEnergy>()->getConsumedEnergy();
+ return host->extension<HostEnergy>()->get_consumed_energy();
}
/** @ingroup plugin_energy
{
xbt_assert(HostEnergy::EXTENSION_ID.valid(),
"The Energy plugin is not active. Please call sg_host_energy_plugin_init() during initialization.");
- return host->extension<HostEnergy>()->getWattMinAt(pstate);
+ return host->extension<HostEnergy>()->get_watt_min_at(pstate);
}
/** @ingroup plugin_energy
 * @brief Returns the power (in watts) dissipated at the given pstate when the host burns CPU at 100%
{
xbt_assert(HostEnergy::EXTENSION_ID.valid(),
"The Energy plugin is not active. Please call sg_host_energy_plugin_init() during initialization.");
- return host->extension<HostEnergy>()->getWattMaxAt(pstate);
+ return host->extension<HostEnergy>()->get_watt_max_at(pstate);
}
/** @ingroup plugin_energy
{
xbt_assert(HostEnergy::EXTENSION_ID.valid(),
"The Energy plugin is not active. Please call sg_host_energy_plugin_init() during initialization.");
- return host->extension<HostEnergy>()->getCurrentWattsValue();
+ return host->extension<HostEnergy>()->get_current_watts_value();
}
/* This program is free software; you can redistribute it and/or modify it
* under the terms of the license (GNU LGPL) which comes with this package. */
+#include <simgrid/s4u.hpp>
#include "simgrid/plugins/load.h"
#include "src/plugins/vm/VirtualMachineImpl.hpp"
public:
static simgrid::xbt::Extension<simgrid::s4u::Host, HostLoad> EXTENSION_ID;
- explicit HostLoad(simgrid::s4u::Host* ptr);
- ~HostLoad();
-
- double getCurrentLoad();
- double getComputedFlops();
- double getAverageLoad();
- double getIdleTime();
+ explicit HostLoad(simgrid::s4u::Host* ptr)
+ : host_(ptr)
+ , last_updated_(surf_get_clock())
+ , last_reset_(surf_get_clock())
+ , current_speed_(host_->getSpeed())
+ , current_flops_(host_->pimpl_cpu->get_constraint()->get_usage())
+ , theor_max_flops_(0)
+ , was_prev_idle_(current_flops_ == 0)
+ {
+ }
+ ~HostLoad() = default;
+ HostLoad() = delete;
+ explicit HostLoad(simgrid::s4u::Host& ptr) = delete;
+ explicit HostLoad(simgrid::s4u::Host&& ptr) = delete;
+
+ double get_current_load();
+ double get_average_load() { return (theor_max_flops_ == 0) ? 0 : computed_flops_ / theor_max_flops_; }
+ double get_computed_flops() { return computed_flops_; }
+ double get_idle_time() { return idle_time_; } /**< Return idle time since last reset */
void update();
void reset();
private:
- simgrid::s4u::Host* host = nullptr;
- double last_updated = 0;
- double last_reset = 0;
- double current_speed = 0;
- double current_flops = 0;
- double computed_flops = 0;
- double idle_time = 0;
- double theor_max_flops = 0;
- bool was_prev_idle = true; /* A host is idle at the beginning */
+ simgrid::s4u::Host* host_ = nullptr;
+ double last_updated_ = 0;
+ double last_reset_ = 0;
+ /**
+ * The speed each core is running at right now
+ */
+ double current_speed_ = 0;
+ /**
+ * How many flops are currently used by all the processes running on this
+ * host?
+ */
+ double current_flops_ = 0;
+ double computed_flops_ = 0;
+ double idle_time_ = 0;
+ double theor_max_flops_ = 0;
+ bool was_prev_idle_ = true; /* A host is idle at the beginning */
};
simgrid::xbt::Extension<simgrid::s4u::Host, HostLoad> HostLoad::EXTENSION_ID;
-HostLoad::HostLoad(simgrid::s4u::Host* ptr)
- : host(ptr)
- , last_updated(surf_get_clock())
- , last_reset(surf_get_clock())
- , current_speed(host->getSpeed())
- , current_flops(host->pimpl_cpu->get_constraint()->get_usage())
- , theor_max_flops(0)
- , was_prev_idle(current_flops == 0)
-{
-}
-
-HostLoad::~HostLoad() = default;
-
void HostLoad::update()
{
double now = surf_get_clock();
/* Current flop per second computed by the cpu; current_flops = k * pstate_speed_in_flops, k \in {0, 1, ..., cores}
* number of active cores */
- current_flops = host->pimpl_cpu->get_constraint()->get_usage();
+ current_flops_ = host_->pimpl_cpu->get_constraint()->get_usage();
/* flops == pstate_speed * cores_being_currently_used */
- computed_flops += (now - last_updated) * current_flops;
+ computed_flops_ += (now - last_updated_) * current_flops_;
- if (was_prev_idle) {
- idle_time += (now - last_updated);
+ if (was_prev_idle_) {
+ idle_time_ += (now - last_updated_);
}
- theor_max_flops += current_speed * host->get_core_count() * (now - last_updated);
- current_speed = host->getSpeed();
- last_updated = now;
- was_prev_idle = (current_flops == 0);
+ theor_max_flops_ += current_speed_ * host_->get_core_count() * (now - last_updated_);
+ current_speed_ = host_->getSpeed();
+ last_updated_ = now;
+ was_prev_idle_ = (current_flops_ == 0);
}
/**
- * WARNING: This function does not guarantee that you have the real load at any time
- * imagine all actions on your CPU terminate at time t. Your load is then 0. Then
- * you query the load (still 0) and then another action starts (still at time t!).
- * This means that the load was never really 0 (because the time didn't advance) but
- * it will still be reported as 0.
+ * WARNING: This function does not guarantee that you have the real load at any time: imagine all actions on your CPU
+ * terminate at time t. Your load is then 0. Then you query the load (still 0) and then another action starts (still at
+ * time t!). This means that the load was never really 0 (because the time didn't advance), but it will still be
+ * reported as 0.
*
* So, use at your own risk.
*/
-double HostLoad::getCurrentLoad()
+double HostLoad::get_current_load()
{
- // We don't need to call update() here because it is called everytime an
- // action terminates or starts
+ // We don't need to call update() here because it is called every time an action terminates or starts
// FIXME: Can this happen at the same time? stop -> call to get_current_load(), load = 0 -> next action starts?
- return current_flops / static_cast<double>(host->getSpeed() * host->get_core_count());
-}
-
-/**
- * Return idle time since last reset
- */
-double HostLoad::getIdleTime()
-{
- return idle_time;
-}
-
-double HostLoad::getAverageLoad()
-{
- if (theor_max_flops == 0) { // Avoid division by 0
- return 0;
- }
-
- return computed_flops / theor_max_flops;
-}
-
-double HostLoad::getComputedFlops()
-{
- return computed_flops;
+ return current_flops_ / static_cast<double>(host_->getSpeed() * host_->get_core_count());
}
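The normalization performed by get_current_load() is just a division by the host's peak flop rate; a standalone sketch with assumed names:

```cpp
#include <cassert>

// Sketch of the instantaneous-load computation above: the flop rate currently
// delivered, normalized by the host's peak rate (per-core speed times cores).
double current_load(double current_flops, double per_core_speed, int core_count)
{
  return current_flops / (per_core_speed * core_count);
}
```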
/*
*/
void HostLoad::reset()
{
- last_updated = surf_get_clock();
- last_reset = surf_get_clock();
- idle_time = 0;
- computed_flops = 0;
- theor_max_flops = 0;
- current_flops = host->pimpl_cpu->get_constraint()->get_usage();
- current_speed = host->getSpeed();
- was_prev_idle = (current_flops == 0);
+ last_updated_ = surf_get_clock();
+ last_reset_ = surf_get_clock();
+ idle_time_ = 0;
+ computed_flops_ = 0;
+ theor_max_flops_ = 0;
+ current_flops_ = host_->pimpl_cpu->get_constraint()->get_usage();
+ current_speed_ = host_->getSpeed();
+ was_prev_idle_ = (current_flops_ == 0);
}
} // namespace plugin
} // namespace simgrid
/* **************************** events callback *************************** */
/* This callback is fired either when the host changes its state (on/off) or its speed
* (because the user changed the pstate, or because of external trace events) */
-static void onHostChange(simgrid::s4u::Host& host)
+static void on_host_change(simgrid::s4u::Host& host)
{
if (dynamic_cast<simgrid::s4u::VirtualMachine*>(&host)) // Ignore virtual machines
return;
}
/* This callback is called when an action (computation, idle, ...) terminates */
-static void onActionStateChange(simgrid::surf::CpuAction* action)
+static void on_action_state_change(simgrid::surf::CpuAction* action)
{
for (simgrid::surf::Cpu* const& cpu : action->cpus()) {
simgrid::s4u::Host* host = cpu->get_host();
HostLoad::EXTENSION_ID = simgrid::s4u::Host::extension_create<HostLoad>();
+ if (simgrid::s4u::Engine::is_initialized()) { // If not yet initialized, this would create a new instance
+ // which would cause seg faults...
+ simgrid::s4u::Engine* e = simgrid::s4u::Engine::get_instance();
+ for (auto& host : e->get_all_hosts()) {
+ host->extension_set(new HostLoad(host));
+ }
+ }
+
+ /* When attaching a callback to a signal, you can use a lambda as follows, or a regular function as done below */
simgrid::s4u::Host::on_creation.connect([](simgrid::s4u::Host& host) {
host.extension_set(new HostLoad(&host));
});
- simgrid::surf::CpuAction::on_state_change.connect(&onActionStateChange);
- simgrid::s4u::Host::on_state_change.connect(&onHostChange);
- simgrid::s4u::Host::on_speed_change.connect(&onHostChange);
+ simgrid::surf::CpuAction::on_state_change.connect(&on_action_state_change);
+ simgrid::s4u::Host::on_state_change.connect(&on_host_change);
+ simgrid::s4u::Host::on_speed_change.connect(&on_host_change);
}
/** @brief Returns the current load of the host passed as argument
xbt_assert(HostLoad::EXTENSION_ID.valid(),
"The Load plugin is not active. Please call sg_host_load_plugin_init() during initialization.");
- return host->extension<HostLoad>()->getCurrentLoad();
+ return host->extension<HostLoad>()->get_current_load();
}
/** @brief Returns the current load of the host passed as argument
xbt_assert(HostLoad::EXTENSION_ID.valid(),
"The Load plugin is not active. Please call sg_host_load_plugin_init() during initialization.");
- return host->extension<HostLoad>()->getAverageLoad();
+ return host->extension<HostLoad>()->get_average_load();
}
/** @brief Returns the time this host was idle since the last reset
xbt_assert(HostLoad::EXTENSION_ID.valid(),
"The Load plugin is not active. Please call sg_host_load_plugin_init() during initialization.");
- return host->extension<HostLoad>()->getIdleTime();
+ return host->extension<HostLoad>()->get_idle_time();
}
double sg_host_get_computed_flops(sg_host_t host)
xbt_assert(HostLoad::EXTENSION_ID.valid(),
"The Load plugin is not active. Please call sg_host_load_plugin_init() during initialization.");
- return host->extension<HostLoad>()->getComputedFlops();
+ return host->extension<HostLoad>()->get_computed_flops();
}
void sg_host_load_reset(sg_host_t host)
public:
static simgrid::xbt::Extension<simgrid::s4u::Link, LinkEnergy> EXTENSION_ID;
- explicit LinkEnergy(simgrid::s4u::Link* ptr);
- ~LinkEnergy();
+ explicit LinkEnergy(simgrid::s4u::Link* ptr) : link_(ptr), last_updated_(surf_get_clock()) {}
+ ~LinkEnergy() = default;
- void initWattsRangeList();
- double getConsumedEnergy();
+ void init_watts_range_list();
+ double get_consumed_energy();
void update();
private:
- double getPower();
+ double get_power();
simgrid::s4u::Link* link_{};
double idle_{0.0};
double busy_{0.0};
- double totalEnergy_{0.0};
- double lastUpdated_{0.0}; /*< Timestamp of the last energy update event*/
+ double total_energy_{0.0};
+ double last_updated_{0.0}; /*< Timestamp of the last energy update event*/
};
simgrid::xbt::Extension<simgrid::s4u::Link, LinkEnergy> LinkEnergy::EXTENSION_ID;
-LinkEnergy::LinkEnergy(simgrid::s4u::Link* ptr) : link_(ptr), lastUpdated_(surf_get_clock()) {}
-
-LinkEnergy::~LinkEnergy() = default;
-
void LinkEnergy::update()
{
- double power = getPower();
+ double power = get_power();
double now = surf_get_clock();
- totalEnergy_ += power * (now - lastUpdated_);
- lastUpdated_ = now;
+ total_energy_ += power * (now - last_updated_);
+ last_updated_ = now;
}
-void LinkEnergy::initWattsRangeList()
+void LinkEnergy::init_watts_range_list()
{
if (inited_)
}
}
-double LinkEnergy::getPower()
+double LinkEnergy::get_power()
{
if (!inited_)
return idle_ + dynamic_power;
}
-double LinkEnergy::getConsumedEnergy()
+double LinkEnergy::get_consumed_energy()
{
- if (lastUpdated_ < surf_get_clock()) // We need to simcall this as it modifies the environment
+ if (last_updated_ < surf_get_clock()) // We need to simcall this as it modifies the environment
simgrid::simix::simcall(std::bind(&LinkEnergy::update, this));
- return this->totalEnergy_;
+ return this->total_energy_;
}
} // namespace plugin
} // namespace simgrid
using simgrid::plugin::LinkEnergy;
/* **************************** events callback *************************** */
-static void onCommunicate(simgrid::kernel::resource::NetworkAction* action, simgrid::s4u::Host* src,
- simgrid::s4u::Host* dst)
+static void on_communicate(simgrid::kernel::resource::NetworkAction* action, simgrid::s4u::Host*, simgrid::s4u::Host*)
{
- XBT_DEBUG("onCommunicate is called");
+ XBT_DEBUG("on_communicate is called");
for (simgrid::kernel::resource::LinkImpl* link : action->links()) {
XBT_DEBUG("Update link %s", link->get_cname());
LinkEnergy* link_energy = link->piface_.extension<LinkEnergy>();
- link_energy->initWattsRangeList();
+ link_energy->init_watts_range_list();
link_energy->update();
}
}
-static void onSimulationEnd()
+static void on_simulation_end()
{
std::vector<simgrid::s4u::Link*> links = simgrid::s4u::Engine::get_instance()->get_all_links();
double total_energy = 0.0; // Total dissipated energy (whole platform)
for (const auto link : links) {
- double link_energy = link->extension<LinkEnergy>()->getConsumedEnergy();
+ double link_energy = link->extension<LinkEnergy>()->get_consumed_energy();
total_energy += link_energy;
}
simgrid::s4u::Link::on_destruction.connect([](simgrid::s4u::Link& link) {
if (strcmp(link.get_cname(), "__loopback__"))
XBT_INFO("Energy consumption of link '%s': %f Joules", link.get_cname(),
- link.extension<LinkEnergy>()->getConsumedEnergy());
+ link.extension<LinkEnergy>()->get_consumed_energy());
});
simgrid::s4u::Link::on_communication_state_change.connect([](simgrid::kernel::resource::NetworkAction* action) {
}
});
- simgrid::s4u::Link::on_communicate.connect(&onCommunicate);
- simgrid::s4u::on_simulation_end.connect(&onSimulationEnd);
+ simgrid::s4u::Link::on_communicate.connect(&on_communicate);
+ simgrid::s4u::on_simulation_end.connect(&on_simulation_end);
}
/** @ingroup plugin_energy
*/
double sg_link_get_consumed_energy(sg_link_t link)
{
- return link->extension<LinkEnergy>()->getConsumedEnergy();
+ return link->extension<LinkEnergy>()->get_consumed_energy();
}
void Engine::run()
{
+ /* Clean IO before the run */
+ fflush(stdout);
+ fflush(stderr);
+
if (MC_is_active()) {
MC_run();
} else {
{
simgrid::s4u::Engine::get_instance()->load_deployment(file);
}
-
+void sg_engine_run()
+{
+ simgrid::s4u::Engine::get_instance()->run();
+}
void sg_engine_register_function(const char* name, int (*code)(int, char**))
{
simgrid::s4u::Engine::get_instance()->register_function(name, code);
}
} // namespace s4u
} // namespace simgrid
+
+/* **************************** Public C interface *************************** */
+/** \brief Set the mailbox to receive in asynchronous mode
+ *
+ * All messages sent to this mailbox will be transferred to the receiver without waiting for the receive call.
+ * The receive call will still be necessary to use the received data.
+ * If there is a need to receive some messages asynchronously, and some not, two different mailboxes should be used.
+ *
+ * \param alias The name of the mailbox
+ */
+void sg_mailbox_set_receiver(const char* alias)
+{
+ simgrid::s4u::Mailbox::by_name(alias)->set_receiver(simgrid::s4u::Actor::self());
+ XBT_VERB("%s mailbox set to receive eagerly for myself", alias);
+}
+
+/** \brief Check if there is a communication going on in a mailbox.
+ *
+ * \param alias the name of the mailbox to be considered
+ * \return Returns 1 if there is a communication, 0 otherwise
+ */
+int sg_mailbox_listen(const char* alias)
+{
+ return simgrid::s4u::Mailbox::by_name(alias)->listen() ? 1 : 0;
+}
default_privatization);
simgrid::config::alias("smpi/privatization", {"smpi/privatize_global_variables", "smpi/privatize-global-variables"});
+ simgrid::config::declare_flag<std::string>(
+     "smpi/privatize-libs",
+     "Add libraries (semicolon-separated) to privatize (libgfortran for example). You need to provide the full "
+     "names of the files (e.g. libgfortran.so.4) or their full paths",
+     "");
+
simgrid::config::declare_flag<bool>("smpi/grow-injected-times",
"Whether we want to make the injected time in MPI_Iprobe and MPI_Test grow, to "
"allow faster simulation. This can make simulation less precise, though.",
TRACE_smpi_comm_in(rank, __func__,
new simgrid::instr::VarCollTIData(
- "gatherV", root,
+ "gatherv", root,
sendtmptype->is_replayable() ? sendtmpcount : sendtmpcount * sendtmptype->size(), nullptr,
dt_size_recv, trace_recvcounts, simgrid::smpi::Datatype::encode(sendtmptype),
simgrid::smpi::Datatype::encode(recvtype)));
TRACE_smpi_comm_in(rank, __func__,
new simgrid::instr::CollTIData(
- "allGather", -1, -1.0, sendtype->is_replayable() ? sendcount : sendcount * sendtype->size(),
+ "allgather", -1, -1.0, sendtype->is_replayable() ? sendcount : sendcount * sendtype->size(),
recvtype->is_replayable() ? recvcount : recvcount * recvtype->size(),
simgrid::smpi::Datatype::encode(sendtype), simgrid::smpi::Datatype::encode(recvtype)));
TRACE_smpi_comm_in(rank, __func__,
new simgrid::instr::VarCollTIData(
- "allGatherV", -1, sendtype->is_replayable() ? sendcount : sendcount * sendtype->size(),
+ "allgatherv", -1, sendtype->is_replayable() ? sendcount : sendcount * sendtype->size(),
nullptr, dt_size_recv, trace_recvcounts, simgrid::smpi::Datatype::encode(sendtype),
simgrid::smpi::Datatype::encode(recvtype)));
TRACE_smpi_comm_in(rank, __func__,
new simgrid::instr::VarCollTIData(
- "scatterV", root, dt_size_send, trace_sendcounts,
+ "scatterv", root, dt_size_send, trace_sendcounts,
recvtype->is_replayable() ? recvcount : recvcount * recvtype->size(), nullptr,
simgrid::smpi::Datatype::encode(sendtype), simgrid::smpi::Datatype::encode(recvtype)));
int rank = simgrid::s4u::this_actor::get_pid();
TRACE_smpi_comm_in(rank, __func__,
- new simgrid::instr::CollTIData("allReduce", -1, 0,
+ new simgrid::instr::CollTIData("allreduce", -1, 0,
datatype->is_replayable() ? count : count * datatype->size(), -1,
simgrid::smpi::Datatype::encode(datatype), ""));
}
TRACE_smpi_comm_in(rank, __func__, new simgrid::instr::VarCollTIData(
- "reduceScatter", -1, dt_send_size, nullptr, -1, trace_recvcounts,
+ "reducescatter", -1, dt_send_size, nullptr, -1, trace_recvcounts,
simgrid::smpi::Datatype::encode(datatype), ""));
simgrid::smpi::Colls::reduce_scatter(sendtmpbuf, recvbuf, recvcounts, datatype, op, comm);
}
TRACE_smpi_comm_in(rank, __func__,
- new simgrid::instr::VarCollTIData("reduceScatter", -1, 0, nullptr, -1, trace_recvcounts,
+ new simgrid::instr::VarCollTIData("reducescatter", -1, 0, nullptr, -1, trace_recvcounts,
simgrid::smpi::Datatype::encode(datatype), ""));
int* recvcounts = new int[count];
TRACE_smpi_comm_in(rank, __func__,
new simgrid::instr::CollTIData(
- "allToAll", -1, -1.0,
+ "alltoall", -1, -1.0,
sendtmptype->is_replayable() ? sendtmpcount : sendtmpcount * sendtmptype->size(),
recvtype->is_replayable() ? recvcount : recvcount * recvtype->size(),
simgrid::smpi::Datatype::encode(sendtmptype), simgrid::smpi::Datatype::encode(recvtype)));
}
TRACE_smpi_comm_in(rank, __func__,
- new simgrid::instr::VarCollTIData("allToAllV", -1, send_size, trace_sendcounts, recv_size,
+ new simgrid::instr::VarCollTIData("alltoallv", -1, send_size, trace_sendcounts, recv_size,
trace_recvcounts, simgrid::smpi::Datatype::encode(sendtype),
simgrid::smpi::Datatype::encode(recvtype)));
int my_proc_id = simgrid::s4u::this_actor::get_pid();
TRACE_smpi_comm_in(my_proc_id, __func__,
- new simgrid::instr::Pt2PtTIData("Irecv", src,
+ new simgrid::instr::Pt2PtTIData("irecv", src,
datatype->is_replayable() ? count : count * datatype->size(),
tag, simgrid::smpi::Datatype::encode(datatype)));
int my_proc_id = simgrid::s4u::this_actor::get_pid();
int trace_dst = getPid(comm, dst);
TRACE_smpi_comm_in(my_proc_id, __func__,
- new simgrid::instr::Pt2PtTIData("Isend", dst,
+ new simgrid::instr::Pt2PtTIData("isend", dst,
datatype->is_replayable() ? count : count * datatype->size(),
tag, simgrid::smpi::Datatype::encode(datatype)));
int my_proc_id = (*request)->comm() != MPI_COMM_NULL
? simgrid::s4u::this_actor::get_pid()
: -1; // TODO: cheinrich: Check if this correct or if it should be MPI_UNDEFINED
- TRACE_smpi_comm_in(my_proc_id, __func__, new simgrid::instr::NoOpTIData("wait"));
+ TRACE_smpi_comm_in(my_proc_id, __func__,
+ new simgrid::instr::WaitTIData((*request)->src(), (*request)->dst(), (*request)->tag()));
simgrid::smpi::Request::wait(request, status);
retval = MPI_SUCCESS;
}
int rank_traced = simgrid::s4u::this_actor::get_pid(); // FIXME: In PMPI_Wait, we check if the comm is null?
- TRACE_smpi_comm_in(rank_traced, __func__, new simgrid::instr::CpuTIData("waitAll", static_cast<double>(count)));
+ TRACE_smpi_comm_in(rank_traced, __func__, new simgrid::instr::CpuTIData("waitall", static_cast<double>(count)));
int retval = simgrid::smpi::Request::waitall(count, requests, status);
#include "simgrid/msg.h" // msg_bar_t
#include "smpi/smpi.h"
+#include "smpi/smpi_helpers_internal.h"
#include "src/instr/instr_smpi.hpp"
#include "src/internal_config.h"
#include <unordered_map>
#include <vector>
-#include <sys/time.h>
-#if _POSIX_TIMERS
-#include <time.h>
-#endif
#define MPI_REQ_PERSISTENT 0x1
#define MPI_REQ_NON_PERSISTENT 0x2
void mpi_file_read_(int* fh, void* buf, int* count, int* datatype, MPI_Status* status, int* ierr);
void mpi_file_write_(int* fh, void* buf, int* count, int* datatype, MPI_Status* status, int* ierr);
-
-XBT_PUBLIC int smpi_usleep(useconds_t usecs);
-#if _POSIX_TIMERS > 0
-XBT_PUBLIC int smpi_nanosleep(const struct timespec* tp, struct timespec* t);
-XBT_PUBLIC int smpi_clock_gettime(clockid_t clk_id, struct timespec* tp);
-#endif
-XBT_PUBLIC unsigned int smpi_sleep(unsigned int secs);
-XBT_PUBLIC int smpi_gettimeofday(struct timeval* tv, struct timezone* tz);
-
-
-struct option;
-XBT_PUBLIC int smpi_getopt_long (int argc, char *const *argv, const char *options, const struct option *long_options, int *opt_index);
-XBT_PUBLIC int smpi_getopt (int argc, char *const *argv, const char *options);
-
} // extern "C"
struct s_smpi_privatization_region_t {
*/
std::vector<simgrid::s4u::ActorPtr> rank_to_actor_map_;
std::map<simgrid::s4u::ActorPtr, int> actor_to_rank_map_;
- std::vector<int> rank_to_index_map_;
std::vector<int> index_to_rank_map_;
int refcount_;
samples.clear();
}
+int smpi_getopt_long_only (int argc, char *const *argv, const char *options,
+ const struct option * long_options, int *opt_index)
+{
+ if (smpi_process())
+ optind = smpi_process()->get_optind();
+ int ret = getopt_long_only (argc, argv, options, long_options, opt_index);
+ if (smpi_process())
+ smpi_process()->set_optind(optind);
+ return ret;
+}
+
int smpi_getopt_long (int argc, char *const *argv, const char *options,
const struct option * long_options, int *opt_index)
{
#include <cfloat> /* DBL_MAX */
#include <dlfcn.h>
#include <fcntl.h>
+#if not defined(__APPLE__)
+#include <link.h>
+#endif
#include <fstream>
#if HAVE_SENDFILE
int smpi_universe_size = 0;
extern double smpi_total_benched_time;
xbt_os_timer_t global_timer;
+static std::vector<std::string> privatize_libs_paths;
/**
* Setting MPI_COMM_WORLD to MPI_COMM_UNINITIALIZED (it's a variable)
* is important because the implementation of MPI_Comm checks
smpi_comm_copy_data_callback = callback;
}
-static void print(std::vector<std::pair<size_t, size_t>> vec) {
- std::fprintf(stderr, "{");
- for (auto const& elt : vec) {
- std::fprintf(stderr, "(0x%zx, 0x%zx),", elt.first, elt.second);
- }
- std::fprintf(stderr, "}\n");
-}
static void memcpy_private(void* dest, const void* src, std::vector<std::pair<size_t, size_t>>& private_blocks)
{
for (auto const& block : private_blocks)
return 0;
}
+
// TODO, remove the number of functions involved here
static smpi_entry_point_type smpi_resolve_function(void* handle)
{
return smpi_entry_point_type();
}
+static void smpi_copy_file(std::string src, std::string target, off_t fdin_size)
+{
+ int fdin = open(src.c_str(), O_RDONLY);
+ xbt_assert(fdin >= 0, "Cannot read from %s. Please make sure that the file exists and is executable.", src.c_str());
+ int fdout = open(target.c_str(), O_CREAT | O_RDWR, S_IRWXU);
+ xbt_assert(fdout >= 0, "Cannot write into %s", target.c_str());
+
+ XBT_DEBUG("Copy %ld bytes into %s", static_cast<long>(fdin_size), target.c_str());
+#if HAVE_SENDFILE
+ ssize_t sent_size = sendfile(fdout, fdin, NULL, fdin_size);
+ xbt_assert(sent_size == fdin_size, "Error while copying %s: only %zd bytes copied instead of %ld (errno: %d -- %s)",
+ target.c_str(), sent_size, fdin_size, errno, strerror(errno));
+#else
+ const int bufsize = 1024 * 1024 * 4;
+ char buf[bufsize];
+ while (int got = read(fdin, buf, bufsize)) {
+ if (got == -1) {
+ xbt_assert(errno == EINTR, "Cannot read from %s", src.c_str());
+ } else {
+ char* p = buf;
+ int todo = got;
+ while (int done = write(fdout, p, todo)) {
+ if (done == -1) {
+ xbt_assert(errno == EINTR, "Cannot write into %s", target.c_str());
+ } else {
+ p += done;
+ todo -= done;
+ }
+ }
+ }
+ }
+#endif
+ close(fdin);
+ close(fdout);
+}
+
+#if not defined(__APPLE__)
+static int visit_libs(struct dl_phdr_info* info, size_t, void* data)
+{
+ char* libname = (char*)(data);
+ const char *path = info->dlpi_name;
+ if(strstr(path, libname)){
+ strncpy(libname, path, 512);
+ return 1;
+ }
+
+ return 0;
+}
+#endif
+
int smpi_main(const char* executable, int argc, char *argv[])
{
srand(SMPI_RAND_SEED);
stat(executable_copy.c_str(), &fdin_stat);
off_t fdin_size = fdin_stat.st_size;
static std::size_t rank = 0;
-
+
+
+ std::string libnames = simgrid::config::get_value<std::string>("smpi/privatize-libs");
+ if(not libnames.empty()){
+ // split the option value on ';'
+ std::vector<std::string> privatize_libs;
+ boost::split(privatize_libs,libnames, boost::is_any_of(";"));
+
+ for (auto const& libname : privatize_libs) {
+ //load the library once to add it to the local libs, to get the absolute path
+ void* libhandle = dlopen(libname.c_str(), RTLD_LAZY);
+ // buffer that visit_libs() overwrites with the library's absolute path
+ char fullpath[512] = {'\0'};
+ strncpy(fullpath, libname.c_str(), sizeof(fullpath) - 1);
+#if not defined(__APPLE__)
+ int ret = dl_iterate_phdr(visit_libs, fullpath);
+ if(ret==0)
+ xbt_die("Can't find a linked %s - check the setting you gave to smpi/privatize-libs", fullpath);
+ else
+ XBT_DEBUG("Extra lib to privatize found: %s", fullpath);
+#else
+ xbt_die("smpi/privatize-libs is not (yet) compatible with OSX");
+#endif
+ privatize_libs_paths.push_back(fullpath);
+ dlclose(libhandle);
+ }
+ }
+
simix_global->default_function = [executable_copy, fdin_size](std::vector<std::string> args) {
return std::function<void()>([executable_copy, fdin_size, args] {
// Copy the dynamic library:
std::string target_executable = executable_copy
+ "_" + std::to_string(getpid())
- + "_" + std::to_string(rank++) + ".so";
-
- int fdin = open(executable_copy.c_str(), O_RDONLY);
- xbt_assert(fdin >= 0, "Cannot read from %s. Please make sure that the file exists and is executable.",
- executable_copy.c_str());
- int fdout = open(target_executable.c_str(), O_CREAT | O_RDWR, S_IRWXU);
- xbt_assert(fdout >= 0, "Cannot write into %s", target_executable.c_str());
-
- XBT_DEBUG("Copy %ld bytes into %s", static_cast<long>(fdin_size), target_executable.c_str());
-#if HAVE_SENDFILE
- ssize_t sent_size = sendfile(fdout, fdin, NULL, fdin_size);
- xbt_assert(sent_size == fdin_size,
- "Error while copying %s: only %zd bytes copied instead of %ld (errno: %d -- %s)",
- target_executable.c_str(), sent_size, fdin_size, errno, strerror(errno));
-#else
- const int bufsize = 1024 * 1024 * 4;
- char buf[bufsize];
- while (int got = read(fdin, buf, bufsize)) {
- if (got == -1) {
- xbt_assert(errno == EINTR, "Cannot read from %s", executable_copy.c_str());
- } else {
- char* p = buf;
- int todo = got;
- while (int done = write(fdout, p, todo)) {
- if (done == -1) {
- xbt_assert(errno == EINTR, "Cannot write into %s", target_executable.c_str());
- } else {
- p += done;
- todo -= done;
- }
- }
+ + "_" + std::to_string(rank) + ".so";
+
+ smpi_copy_file(executable_copy, target_executable, fdin_size);
+ // if smpi/privatize-libs is set, copy the listed libraries and link each executable copy to its own copies
+ std::string target_lib;
+ for (auto const& libpath : privatize_libs_paths){
+ //if we were given a full path, strip it
+ size_t index = libpath.find_last_of("/\\");
+ std::string libname;
+ if(index!=std::string::npos)
+ libname=libpath.substr(index+1);
+
+ if(not libname.empty()){
+ // get the on-disk size of the library so it can be copied
+ struct stat fdin_stat2;
+ stat(libpath.c_str(), &fdin_stat2);
+ off_t fdin_size2 = fdin_stat2.st_size;
+
+ // Copy the dynamic library. The new name must be the same length as the old one
+ // (sed patches it in place below), so replace the first characters with a zero-padded rank.
+ unsigned int pad=7;
+ if(libname.length()<pad)
+ pad=libname.length();
+ target_lib = std::string(pad - std::to_string(rank).length(), '0')
+ +std::to_string(rank)+libname.substr(pad);
+ XBT_DEBUG("copy lib %s to %s, with size %lld", libpath.c_str(), target_lib.c_str(), (long long)fdin_size2);
+ smpi_copy_file(libpath, target_lib, fdin_size2);
+
+ std::string sedcommand = "sed -i -e 's/"+libname+"/"+target_lib+"/g' "+target_executable;
+ int ret = system(sedcommand.c_str());
+ if(ret!=0) xbt_die("error while applying sed command %s", sedcommand.c_str());
}
}
-#endif
- close(fdin);
- close(fdout);
+ rank++;
// Load the copy and resolve the entry point:
void* handle = dlopen(target_executable.c_str(), RTLD_LAZY | RTLD_LOCAL | RTLD_DEEPBIND);
int saved_errno = errno;
- if (simgrid::config::get_value<bool>("smpi/keep-temps") == false)
+ if (simgrid::config::get_value<bool>("smpi/keep-temps") == false){
unlink(target_executable.c_str());
+ if(not target_lib.empty())
+ unlink(target_lib.c_str());
+ }
if (handle == nullptr)
xbt_die("dlopen failed: %s (errno: %d -- %s)", dlerror(), saved_errno, strerror(saved_errno));
smpi_entry_point_type entry_point = smpi_resolve_function(handle);
if (not entry_point)
xbt_die("Could not resolve entry point");
-
smpi_run_entry_point(entry_point, args);
});
};
/* This program is free software; you can redistribute it and/or modify it
* under the terms of the license (GNU LGPL) which comes with this package. */
-#include "private.hpp"
#include "smpi_coll.hpp"
#include "smpi_comm.hpp"
#include "smpi_datatype.hpp"
#include "smpi_group.hpp"
-#include "smpi_process.hpp"
#include "smpi_request.hpp"
#include "xbt/replay.hpp"
#include <simgrid/smpi/replay.hpp>
+#include <src/smpi/include/private.hpp>
-#include <boost/algorithm/string/join.hpp>
#include <memory>
#include <numeric>
#include <unordered_map>
#include <vector>
#include <tuple>
+
+XBT_LOG_NEW_DEFAULT_SUBCATEGORY(smpi_replay, smpi, "Trace Replay with SMPI");
+
// From https://stackoverflow.com/questions/7110301/generic-hash-for-tuples-in-unordered-map-unordered-set
// This is all just to make std::unordered_map work with std::tuple. If we need this in other places,
// this could go into a header file.
};
}
-XBT_LOG_NEW_DEFAULT_SUBCATEGORY(smpi_replay,smpi,"Trace Replay with SMPI");
-
-typedef std::tuple</*sender*/ int, /* reciever */ int, /* tag */int> req_key_t;
+typedef std::tuple</*sender*/ int, /* receiver */ int, /* tag */ int> req_key_t;
typedef std::unordered_map<req_key_t, MPI_Request, hash_tuple::hash<std::tuple<int,int,int>>> req_storage_t;
-
-static void log_timed_action(simgrid::xbt::ReplayAction& action, double clock)
+void log_timed_action(simgrid::xbt::ReplayAction& action, double clock)
{
if (XBT_LOG_ISENABLED(smpi_replay, xbt_log_priority_verbose)){
std::string s = boost::algorithm::join(action, " ");
disps = std::vector<int>(comm_size, 0);
recvcounts = std::shared_ptr<std::vector<int>>(new std::vector<int>(comm_size));
- if (name == "gatherV") {
+ if (name == "gatherv") {
root = (action.size() > 3 + comm_size) ? std::stoi(action[3 + comm_size]) : 0;
if (action.size() > 4 + comm_size)
datatype1 = simgrid::smpi::Datatype::decode(action[4 + comm_size]);
void ReduceScatterArgParser::parse(simgrid::xbt::ReplayAction& action, std::string name)
{
/* The structure of the reducescatter action for the rank 0 (total 4 processes) is the following:
- 0 reduceScatter 275427 275427 275427 204020 11346849 0
+ 0 reducescatter 275427 275427 275427 204020 11346849 0
where:
1) The first four values after the name of the action declare the recvcounts array
2) The value 11346849 is the amount of instructions
void AllToAllVArgParser::parse(simgrid::xbt::ReplayAction& action, std::string name)
{
- /* The structure of the allToAllV action for the rank 0 (total 4 processes) is the following:
- 0 allToAllV 100 1 7 10 12 100 1 70 10 5
+ /* The structure of the alltoallv action for the rank 0 (total 4 processes) is the following:
+ 0 alltoallv 100 1 7 10 12 100 1 70 10 5
where:
1) 100 is the size of the send buffer *sizeof(int),
2) 1 7 10 12 is the sendcounts array
recv_size_sum = std::accumulate(recvcounts->begin(), recvcounts->end(), 0);
}
-template<class T>
-void ReplayAction<T>::execute(simgrid::xbt::ReplayAction& action)
-{
- // Needs to be re-initialized for every action, hence here
- double start_time = smpi_process()->simulated_elapsed();
- args.parse(action, name);
- kernel(action);
- if (name != "Init")
- log_timed_action(action, start_time);
-}
-
void WaitAction::kernel(simgrid::xbt::ReplayAction& action)
{
std::string s = boost::algorithm::join(action, " ");
// MPI_REQUEST_NULL by Request::wait!
bool is_wait_for_receive = (request->flags() & MPI_REQ_RECV);
// TODO: Here we take the rank while we normally take the process id (look for my_proc_id)
- TRACE_smpi_comm_in(rank, __func__, new simgrid::instr::NoOpTIData("wait"));
+ TRACE_smpi_comm_in(rank, __func__, new simgrid::instr::WaitTIData(args.src, args.dst, args.tag));
MPI_Status status;
Request::wait(&request, &status);
TRACE_smpi_comm_out(rank);
if (is_wait_for_receive)
TRACE_smpi_recv(args.src, args.dst, args.tag);
+}
+
+void SendAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ int dst_traced = MPI_COMM_WORLD->group()->actor(args.partner)->get_pid();
+
+ TRACE_smpi_comm_in(my_proc_id, __func__, new simgrid::instr::Pt2PtTIData(name, args.partner, args.size,
+ args.tag, Datatype::encode(args.datatype1)));
+ if (not TRACE_smpi_view_internals())
+ TRACE_smpi_send(my_proc_id, my_proc_id, dst_traced, args.tag, args.size * args.datatype1->size());
+
+ if (name == "send") {
+ Request::send(nullptr, args.size, args.datatype1, args.partner, args.tag, MPI_COMM_WORLD);
+ } else if (name == "isend") {
+ MPI_Request request = Request::isend(nullptr, args.size, args.datatype1, args.partner, args.tag, MPI_COMM_WORLD);
+ req_storage.add(request);
+ } else {
+ xbt_die("Don't know this action, %s", name.c_str());
}
- void SendAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- int dst_traced = MPI_COMM_WORLD->group()->actor(args.partner)->get_pid();
+ TRACE_smpi_comm_out(my_proc_id);
+}
- TRACE_smpi_comm_in(my_proc_id, __func__, new simgrid::instr::Pt2PtTIData(name, args.partner, args.size,
- args.tag, Datatype::encode(args.datatype1)));
- if (not TRACE_smpi_view_internals())
- TRACE_smpi_send(my_proc_id, my_proc_id, dst_traced, args.tag, args.size * args.datatype1->size());
+void RecvAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ int src_traced = MPI_COMM_WORLD->group()->actor(args.partner)->get_pid();
- if (name == "send") {
- Request::send(nullptr, args.size, args.datatype1, args.partner, args.tag, MPI_COMM_WORLD);
- } else if (name == "Isend") {
- MPI_Request request = Request::isend(nullptr, args.size, args.datatype1, args.partner, args.tag, MPI_COMM_WORLD);
- req_storage.add(request);
- } else {
- xbt_die("Don't know this action, %s", name.c_str());
- }
+ TRACE_smpi_comm_in(my_proc_id, __func__, new simgrid::instr::Pt2PtTIData(name, args.partner, args.size,
+ args.tag, Datatype::encode(args.datatype1)));
- TRACE_smpi_comm_out(my_proc_id);
+ MPI_Status status;
+ // unknown size from the receiver's point of view
+ if (args.size <= 0.0) {
+ Request::probe(args.partner, args.tag, MPI_COMM_WORLD, &status);
+ args.size = status.count;
}
- void RecvAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- int src_traced = MPI_COMM_WORLD->group()->actor(args.partner)->get_pid();
+ if (name == "recv") {
+ Request::recv(nullptr, args.size, args.datatype1, args.partner, args.tag, MPI_COMM_WORLD, &status);
+ } else if (name == "irecv") {
+ MPI_Request request = Request::irecv(nullptr, args.size, args.datatype1, args.partner, args.tag, MPI_COMM_WORLD);
+ req_storage.add(request);
+ }
- TRACE_smpi_comm_in(my_proc_id, __func__, new simgrid::instr::Pt2PtTIData(name, args.partner, args.size,
- args.tag, Datatype::encode(args.datatype1)));
+ TRACE_smpi_comm_out(my_proc_id);
+ // TODO: Check why this was only activated in the "recv" case and not in the "irecv" case
+ if (name == "recv" && not TRACE_smpi_view_internals()) {
+ TRACE_smpi_recv(src_traced, my_proc_id, args.tag);
+ }
+}
+
+void ComputeAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ TRACE_smpi_computing_in(my_proc_id, args.flops);
+ smpi_execute_flops(args.flops);
+ TRACE_smpi_computing_out(my_proc_id);
+}
+
+void TestAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ MPI_Request request = req_storage.find(args.src, args.dst, args.tag);
+ req_storage.remove(request);
+ // If the request is null here, a previous test may have already succeeded.
+ // Different timings in the traced application and the replayed version may lead to this;
+ // in that case, ignore the extra calls.
+ if (request != MPI_REQUEST_NULL) {
+ TRACE_smpi_comm_in(my_proc_id, __func__, new simgrid::instr::NoOpTIData("test"));
MPI_Status status;
- // unknown size from the receiver point of view
- if (args.size <= 0.0) {
- Request::probe(args.partner, args.tag, MPI_COMM_WORLD, &status);
- args.size = status.count;
- }
+ int flag = Request::test(&request, &status);
- if (name == "recv") {
- Request::recv(nullptr, args.size, args.datatype1, args.partner, args.tag, MPI_COMM_WORLD, &status);
- } else if (name == "Irecv") {
- MPI_Request request = Request::irecv(nullptr, args.size, args.datatype1, args.partner, args.tag, MPI_COMM_WORLD);
+ XBT_DEBUG("MPI_Test result: %d", flag);
+ /* Push the request back into the vector to be caught by a subsequent wait. If the test succeeded, the request
+  * is now nullptr. */
+ if (request == MPI_REQUEST_NULL)
+ req_storage.addNullRequest(args.src, args.dst, args.tag);
+ else
req_storage.add(request);
- }
TRACE_smpi_comm_out(my_proc_id);
- // TODO: Check why this was only activated in the "recv" case and not in the "Irecv" case
- if (name == "recv" && not TRACE_smpi_view_internals()) {
- TRACE_smpi_recv(src_traced, my_proc_id, args.tag);
- }
- }
-
- void ComputeAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- TRACE_smpi_computing_in(my_proc_id, args.flops);
- smpi_execute_flops(args.flops);
- TRACE_smpi_computing_out(my_proc_id);
- }
-
- void TestAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- MPI_Request request = req_storage.find(args.src, args.dst, args.tag);
- req_storage.remove(request);
- // if request is null here, this may mean that a previous test has succeeded
- // Different times in traced application and replayed version may lead to this
- // In this case, ignore the extra calls.
- if (request != MPI_REQUEST_NULL) {
- TRACE_smpi_comm_in(my_proc_id, __func__, new simgrid::instr::NoOpTIData("test"));
-
- MPI_Status status;
- int flag = Request::test(&request, &status);
-
- XBT_DEBUG("MPI_Test result: %d", flag);
- /* push back request in vector to be caught by a subsequent wait. if the test did succeed, the request is now
- * nullptr.*/
- if (request == MPI_REQUEST_NULL)
- req_storage.addNullRequest(args.src, args.dst, args.tag);
- else
- req_storage.add(request);
-
- TRACE_smpi_comm_out(my_proc_id);
- }
}
+}
- void InitAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- CHECK_ACTION_PARAMS(action, 0, 1)
+void InitAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ CHECK_ACTION_PARAMS(action, 0, 1)
MPI_DEFAULT_TYPE = (action.size() > 2) ? MPI_DOUBLE // default MPE datatype
- : MPI_BYTE; // default TAU datatype
+ : MPI_BYTE; // default TAU datatype
- /* start a simulated timer */
- smpi_process()->simulated_start();
- }
+ /* start a simulated timer */
+ smpi_process()->simulated_start();
+}
- void CommunicatorAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- /* nothing to do */
- }
+void CommunicatorAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ /* nothing to do */
+}
- void WaitAllAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- const unsigned int count_requests = req_storage.size();
-
- if (count_requests > 0) {
- TRACE_smpi_comm_in(my_proc_id, __func__, new simgrid::instr::Pt2PtTIData("waitAll", -1, count_requests, ""));
- std::vector<std::pair</*sender*/int,/*recv*/int>> sender_receiver;
- std::vector<MPI_Request> reqs;
- req_storage.get_requests(reqs);
- for (const auto& req : reqs) {
- if (req && (req->flags() & MPI_REQ_RECV)) {
- sender_receiver.push_back({req->src(), req->dst()});
- }
- }
- MPI_Status status[count_requests];
- Request::waitall(count_requests, &(reqs.data())[0], status);
- req_storage.get_store().clear();
+void WaitAllAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ const unsigned int count_requests = req_storage.size();
- for (auto& pair : sender_receiver) {
- TRACE_smpi_recv(pair.first, pair.second, 0);
+ if (count_requests > 0) {
+ TRACE_smpi_comm_in(my_proc_id, __func__, new simgrid::instr::Pt2PtTIData("waitall", -1, count_requests, ""));
+ std::vector<std::pair</*sender*/int,/*recv*/int>> sender_receiver;
+ std::vector<MPI_Request> reqs;
+ req_storage.get_requests(reqs);
+ for (const auto& req : reqs) {
+ if (req && (req->flags() & MPI_REQ_RECV)) {
+ sender_receiver.push_back({req->src(), req->dst()});
}
- TRACE_smpi_comm_out(my_proc_id);
}
- }
+ MPI_Status status[count_requests];
+  Request::waitall(count_requests, reqs.data(), status);
+ req_storage.get_store().clear();
- void BarrierAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- TRACE_smpi_comm_in(my_proc_id, __func__, new simgrid::instr::NoOpTIData("barrier"));
- Colls::barrier(MPI_COMM_WORLD);
+ for (auto& pair : sender_receiver) {
+ TRACE_smpi_recv(pair.first, pair.second, 0);
+ }
TRACE_smpi_comm_out(my_proc_id);
}
+}
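WaitAllAction records the (src, dst) endpoints of receive requests *before* calling waitall, because completion invalidates the request objects, and only emits `TRACE_smpi_recv` afterwards. The same two-phase pattern, sketched with a hypothetical `Req` type standing in for `MPI_Request`:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Hypothetical stand-in for MPI_Request, carrying just what the trace needs.
struct Req { bool is_recv; int src; int dst; };

// Phase 1: snapshot the receive endpoints while the requests are still valid.
// Phase 2: complete (here: simply drop) all requests.
// The caller then emits one trace event per recorded endpoint.
std::vector<std::pair<int, int>> complete_all(std::vector<Req>& reqs) {
  std::vector<std::pair<int, int>> recv_endpoints;
  for (const auto& r : reqs)
    if (r.is_recv)
      recv_endpoints.push_back({r.src, r.dst});
  reqs.clear(); // completion invalidates the requests
  return recv_endpoints;
}
```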
- void BcastAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- TRACE_smpi_comm_in(my_proc_id, "action_bcast",
- new simgrid::instr::CollTIData("bcast", MPI_COMM_WORLD->group()->actor(args.root)->get_pid(),
- -1.0, args.size, -1, Datatype::encode(args.datatype1), ""));
+void BarrierAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ TRACE_smpi_comm_in(my_proc_id, __func__, new simgrid::instr::NoOpTIData("barrier"));
+ Colls::barrier(MPI_COMM_WORLD);
+ TRACE_smpi_comm_out(my_proc_id);
+}
- Colls::bcast(send_buffer(args.size * args.datatype1->size()), args.size, args.datatype1, args.root, MPI_COMM_WORLD);
+void BcastAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ TRACE_smpi_comm_in(my_proc_id, "action_bcast",
+ new simgrid::instr::CollTIData("bcast", MPI_COMM_WORLD->group()->actor(args.root)->get_pid(),
+ -1.0, args.size, -1, Datatype::encode(args.datatype1), ""));
- TRACE_smpi_comm_out(my_proc_id);
- }
+ Colls::bcast(send_buffer(args.size * args.datatype1->size()), args.size, args.datatype1, args.root, MPI_COMM_WORLD);
- void ReduceAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- TRACE_smpi_comm_in(my_proc_id, "action_reduce",
- new simgrid::instr::CollTIData("reduce", MPI_COMM_WORLD->group()->actor(args.root)->get_pid(),
- args.comp_size, args.comm_size, -1,
- Datatype::encode(args.datatype1), ""));
+ TRACE_smpi_comm_out(my_proc_id);
+}
- Colls::reduce(send_buffer(args.comm_size * args.datatype1->size()),
- recv_buffer(args.comm_size * args.datatype1->size()), args.comm_size, args.datatype1, MPI_OP_NULL, args.root, MPI_COMM_WORLD);
- smpi_execute_flops(args.comp_size);
+void ReduceAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ TRACE_smpi_comm_in(my_proc_id, "action_reduce",
+ new simgrid::instr::CollTIData("reduce", MPI_COMM_WORLD->group()->actor(args.root)->get_pid(),
+ args.comp_size, args.comm_size, -1,
+ Datatype::encode(args.datatype1), ""));
- TRACE_smpi_comm_out(my_proc_id);
- }
+ Colls::reduce(send_buffer(args.comm_size * args.datatype1->size()),
+ recv_buffer(args.comm_size * args.datatype1->size()), args.comm_size, args.datatype1, MPI_OP_NULL, args.root, MPI_COMM_WORLD);
+ smpi_execute_flops(args.comp_size);
- void AllReduceAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- TRACE_smpi_comm_in(my_proc_id, "action_allReduce", new simgrid::instr::CollTIData("allReduce", -1, args.comp_size, args.comm_size, -1,
- Datatype::encode(args.datatype1), ""));
+ TRACE_smpi_comm_out(my_proc_id);
+}
- Colls::allreduce(send_buffer(args.comm_size * args.datatype1->size()),
- recv_buffer(args.comm_size * args.datatype1->size()), args.comm_size, args.datatype1, MPI_OP_NULL, MPI_COMM_WORLD);
- smpi_execute_flops(args.comp_size);
+void AllReduceAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ TRACE_smpi_comm_in(my_proc_id, "action_allreduce", new simgrid::instr::CollTIData("allreduce", -1, args.comp_size, args.comm_size, -1,
+ Datatype::encode(args.datatype1), ""));
- TRACE_smpi_comm_out(my_proc_id);
- }
+ Colls::allreduce(send_buffer(args.comm_size * args.datatype1->size()),
+ recv_buffer(args.comm_size * args.datatype1->size()), args.comm_size, args.datatype1, MPI_OP_NULL, MPI_COMM_WORLD);
+ smpi_execute_flops(args.comp_size);
- void AllToAllAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- TRACE_smpi_comm_in(my_proc_id, "action_allToAll",
- new simgrid::instr::CollTIData("allToAll", -1, -1.0, args.send_size, args.recv_size,
- Datatype::encode(args.datatype1),
- Datatype::encode(args.datatype2)));
+ TRACE_smpi_comm_out(my_proc_id);
+}
- Colls::alltoall(send_buffer(args.send_size * args.comm_size * args.datatype1->size()), args.send_size,
- args.datatype1, recv_buffer(args.recv_size * args.comm_size * args.datatype2->size()),
- args.recv_size, args.datatype2, MPI_COMM_WORLD);
+void AllToAllAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ TRACE_smpi_comm_in(my_proc_id, "action_alltoall",
+ new simgrid::instr::CollTIData("alltoall", -1, -1.0, args.send_size, args.recv_size,
+ Datatype::encode(args.datatype1),
+ Datatype::encode(args.datatype2)));
- TRACE_smpi_comm_out(my_proc_id);
- }
+ Colls::alltoall(send_buffer(args.send_size * args.comm_size * args.datatype1->size()), args.send_size,
+ args.datatype1, recv_buffer(args.recv_size * args.comm_size * args.datatype2->size()),
+ args.recv_size, args.datatype2, MPI_COMM_WORLD);
- void GatherAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- TRACE_smpi_comm_in(my_proc_id, name.c_str(), new simgrid::instr::CollTIData(name, (name == "gather") ? args.root : -1, -1.0, args.send_size, args.recv_size,
- Datatype::encode(args.datatype1), Datatype::encode(args.datatype2)));
+ TRACE_smpi_comm_out(my_proc_id);
+}
- if (name == "gather") {
- int rank = MPI_COMM_WORLD->rank();
- Colls::gather(send_buffer(args.send_size * args.datatype1->size()), args.send_size, args.datatype1,
- (rank == args.root) ? recv_buffer(args.recv_size * args.comm_size * args.datatype2->size()) : nullptr, args.recv_size, args.datatype2, args.root, MPI_COMM_WORLD);
- }
- else
- Colls::allgather(send_buffer(args.send_size * args.datatype1->size()), args.send_size, args.datatype1,
- recv_buffer(args.recv_size * args.datatype2->size()), args.recv_size, args.datatype2, MPI_COMM_WORLD);
+void GatherAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ TRACE_smpi_comm_in(my_proc_id, name.c_str(), new simgrid::instr::CollTIData(name, (name == "gather") ? args.root : -1, -1.0, args.send_size, args.recv_size,
+ Datatype::encode(args.datatype1), Datatype::encode(args.datatype2)));
- TRACE_smpi_comm_out(my_proc_id);
+ if (name == "gather") {
+ int rank = MPI_COMM_WORLD->rank();
+ Colls::gather(send_buffer(args.send_size * args.datatype1->size()), args.send_size, args.datatype1,
+ (rank == args.root) ? recv_buffer(args.recv_size * args.comm_size * args.datatype2->size()) : nullptr, args.recv_size, args.datatype2, args.root, MPI_COMM_WORLD);
}
+ else
+ Colls::allgather(send_buffer(args.send_size * args.datatype1->size()), args.send_size, args.datatype1,
+ recv_buffer(args.recv_size * args.datatype2->size()), args.recv_size, args.datatype2, MPI_COMM_WORLD);
- void GatherVAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- int rank = MPI_COMM_WORLD->rank();
+ TRACE_smpi_comm_out(my_proc_id);
+}
- TRACE_smpi_comm_in(my_proc_id, name.c_str(), new simgrid::instr::VarCollTIData(
- name, (name == "gatherV") ? args.root : -1, args.send_size, nullptr, -1, args.recvcounts,
- Datatype::encode(args.datatype1), Datatype::encode(args.datatype2)));
+void GatherVAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ int rank = MPI_COMM_WORLD->rank();
- if (name == "gatherV") {
- Colls::gatherv(send_buffer(args.send_size * args.datatype1->size()), args.send_size, args.datatype1,
- (rank == args.root) ? recv_buffer(args.recv_size_sum * args.datatype2->size()) : nullptr,
- args.recvcounts->data(), args.disps.data(), args.datatype2, args.root, MPI_COMM_WORLD);
- }
- else {
- Colls::allgatherv(send_buffer(args.send_size * args.datatype1->size()), args.send_size, args.datatype1,
- recv_buffer(args.recv_size_sum * args.datatype2->size()), args.recvcounts->data(),
- args.disps.data(), args.datatype2, MPI_COMM_WORLD);
- }
+ TRACE_smpi_comm_in(my_proc_id, name.c_str(), new simgrid::instr::VarCollTIData(
+ name, (name == "gatherv") ? args.root : -1, args.send_size, nullptr, -1, args.recvcounts,
+ Datatype::encode(args.datatype1), Datatype::encode(args.datatype2)));
- TRACE_smpi_comm_out(my_proc_id);
+ if (name == "gatherv") {
+ Colls::gatherv(send_buffer(args.send_size * args.datatype1->size()), args.send_size, args.datatype1,
+ (rank == args.root) ? recv_buffer(args.recv_size_sum * args.datatype2->size()) : nullptr,
+ args.recvcounts->data(), args.disps.data(), args.datatype2, args.root, MPI_COMM_WORLD);
+ }
+ else {
+ Colls::allgatherv(send_buffer(args.send_size * args.datatype1->size()), args.send_size, args.datatype1,
+ recv_buffer(args.recv_size_sum * args.datatype2->size()), args.recvcounts->data(),
+ args.disps.data(), args.datatype2, MPI_COMM_WORLD);
}
- void ScatterAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- int rank = MPI_COMM_WORLD->rank();
- TRACE_smpi_comm_in(my_proc_id, "action_scatter", new simgrid::instr::CollTIData(name, args.root, -1.0, args.send_size, args.recv_size,
- Datatype::encode(args.datatype1),
- Datatype::encode(args.datatype2)));
+ TRACE_smpi_comm_out(my_proc_id);
+}
- Colls::scatter(send_buffer(args.send_size * args.datatype1->size()), args.send_size, args.datatype1,
- (rank == args.root) ? recv_buffer(args.recv_size * args.datatype2->size()) : nullptr, args.recv_size, args.datatype2, args.root, MPI_COMM_WORLD);
+void ScatterAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ int rank = MPI_COMM_WORLD->rank();
+ TRACE_smpi_comm_in(my_proc_id, "action_scatter", new simgrid::instr::CollTIData(name, args.root, -1.0, args.send_size, args.recv_size,
+ Datatype::encode(args.datatype1),
+ Datatype::encode(args.datatype2)));
- TRACE_smpi_comm_out(my_proc_id);
- }
+ Colls::scatter(send_buffer(args.send_size * args.datatype1->size()), args.send_size, args.datatype1,
+ (rank == args.root) ? recv_buffer(args.recv_size * args.datatype2->size()) : nullptr, args.recv_size, args.datatype2, args.root, MPI_COMM_WORLD);
- void ScatterVAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- int rank = MPI_COMM_WORLD->rank();
- TRACE_smpi_comm_in(my_proc_id, "action_scatterv", new simgrid::instr::VarCollTIData(name, args.root, -1, args.sendcounts, args.recv_size,
- nullptr, Datatype::encode(args.datatype1),
- Datatype::encode(args.datatype2)));
+ TRACE_smpi_comm_out(my_proc_id);
+}
- Colls::scatterv((rank == args.root) ? send_buffer(args.send_size_sum * args.datatype1->size()) : nullptr,
- args.sendcounts->data(), args.disps.data(), args.datatype1,
- recv_buffer(args.recv_size * args.datatype2->size()), args.recv_size, args.datatype2, args.root,
- MPI_COMM_WORLD);
+void ScatterVAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ int rank = MPI_COMM_WORLD->rank();
+ TRACE_smpi_comm_in(my_proc_id, "action_scatterv", new simgrid::instr::VarCollTIData(name, args.root, -1, args.sendcounts, args.recv_size,
+ nullptr, Datatype::encode(args.datatype1),
+ Datatype::encode(args.datatype2)));
- TRACE_smpi_comm_out(my_proc_id);
- }
+ Colls::scatterv((rank == args.root) ? send_buffer(args.send_size_sum * args.datatype1->size()) : nullptr,
+ args.sendcounts->data(), args.disps.data(), args.datatype1,
+ recv_buffer(args.recv_size * args.datatype2->size()), args.recv_size, args.datatype2, args.root,
+ MPI_COMM_WORLD);
- void ReduceScatterAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- TRACE_smpi_comm_in(my_proc_id, "action_reducescatter",
- new simgrid::instr::VarCollTIData("reduceScatter", -1, 0, nullptr, -1, args.recvcounts,
- std::to_string(args.comp_size), /* ugly hack to print comp_size */
- Datatype::encode(args.datatype1)));
+ TRACE_smpi_comm_out(my_proc_id);
+}
- Colls::reduce_scatter(send_buffer(args.recv_size_sum * args.datatype1->size()),
- recv_buffer(args.recv_size_sum * args.datatype1->size()), args.recvcounts->data(),
- args.datatype1, MPI_OP_NULL, MPI_COMM_WORLD);
+void ReduceScatterAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ TRACE_smpi_comm_in(my_proc_id, "action_reducescatter",
+ new simgrid::instr::VarCollTIData("reducescatter", -1, 0, nullptr, -1, args.recvcounts,
+ std::to_string(args.comp_size), /* ugly hack to print comp_size */
+ Datatype::encode(args.datatype1)));
- smpi_execute_flops(args.comp_size);
- TRACE_smpi_comm_out(my_proc_id);
- }
+ Colls::reduce_scatter(send_buffer(args.recv_size_sum * args.datatype1->size()),
+ recv_buffer(args.recv_size_sum * args.datatype1->size()), args.recvcounts->data(),
+ args.datatype1, MPI_OP_NULL, MPI_COMM_WORLD);
- void AllToAllVAction::kernel(simgrid::xbt::ReplayAction& action)
- {
- TRACE_smpi_comm_in(my_proc_id, __func__,
- new simgrid::instr::VarCollTIData(
- "allToAllV", -1, args.send_size_sum, args.sendcounts, args.recv_size_sum, args.recvcounts,
- Datatype::encode(args.datatype1), Datatype::encode(args.datatype2)));
+ smpi_execute_flops(args.comp_size);
+ TRACE_smpi_comm_out(my_proc_id);
+}
- Colls::alltoallv(send_buffer(args.send_buf_size * args.datatype1->size()), args.sendcounts->data(), args.senddisps.data(), args.datatype1,
- recv_buffer(args.recv_buf_size * args.datatype2->size()), args.recvcounts->data(), args.recvdisps.data(), args.datatype2, MPI_COMM_WORLD);
+void AllToAllVAction::kernel(simgrid::xbt::ReplayAction& action)
+{
+ TRACE_smpi_comm_in(my_proc_id, __func__,
+ new simgrid::instr::VarCollTIData(
+ "alltoallv", -1, args.send_size_sum, args.sendcounts, args.recv_size_sum, args.recvcounts,
+ Datatype::encode(args.datatype1), Datatype::encode(args.datatype2)));
- TRACE_smpi_comm_out(my_proc_id);
- }
+ Colls::alltoallv(send_buffer(args.send_buf_size * args.datatype1->size()), args.sendcounts->data(), args.senddisps.data(), args.datatype1,
+ recv_buffer(args.recv_buf_size * args.datatype2->size()), args.recvcounts->data(), args.recvdisps.data(), args.datatype2, MPI_COMM_WORLD);
+
+ TRACE_smpi_comm_out(my_proc_id);
+}
} // Replay Namespace
}} // namespace simgrid::smpi
-std::vector<simgrid::smpi::replay::RequestStorage> storage;
+static std::vector<simgrid::smpi::replay::RequestStorage> storage;
/** @brief Only initialize the replay, don't do it for real */
void smpi_replay_init(int* argc, char*** argv)
{
xbt_replay_action_register("comm_split",[](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::CommunicatorAction().execute(action); });
xbt_replay_action_register("comm_dup", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::CommunicatorAction().execute(action); });
xbt_replay_action_register("send", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::SendAction("send", storage[simgrid::s4u::this_actor::get_pid()-1]).execute(action); });
- xbt_replay_action_register("Isend", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::SendAction("Isend", storage[simgrid::s4u::this_actor::get_pid()-1]).execute(action); });
+ xbt_replay_action_register("isend", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::SendAction("isend", storage[simgrid::s4u::this_actor::get_pid()-1]).execute(action); });
xbt_replay_action_register("recv", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::RecvAction("recv", storage[simgrid::s4u::this_actor::get_pid()-1]).execute(action); });
- xbt_replay_action_register("Irecv", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::RecvAction("Irecv", storage[simgrid::s4u::this_actor::get_pid()-1]).execute(action); });
+ xbt_replay_action_register("irecv", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::RecvAction("irecv", storage[simgrid::s4u::this_actor::get_pid()-1]).execute(action); });
xbt_replay_action_register("test", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::TestAction(storage[simgrid::s4u::this_actor::get_pid()-1]).execute(action); });
xbt_replay_action_register("wait", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::WaitAction(storage[simgrid::s4u::this_actor::get_pid()-1]).execute(action); });
- xbt_replay_action_register("waitAll", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::WaitAllAction(storage[simgrid::s4u::this_actor::get_pid()-1]).execute(action); });
+ xbt_replay_action_register("waitall", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::WaitAllAction(storage[simgrid::s4u::this_actor::get_pid()-1]).execute(action); });
xbt_replay_action_register("barrier", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::BarrierAction().execute(action); });
xbt_replay_action_register("bcast", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::BcastAction().execute(action); });
xbt_replay_action_register("reduce", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::ReduceAction().execute(action); });
- xbt_replay_action_register("allReduce", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::AllReduceAction().execute(action); });
- xbt_replay_action_register("allToAll", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::AllToAllAction().execute(action); });
- xbt_replay_action_register("allToAllV", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::AllToAllVAction().execute(action); });
+ xbt_replay_action_register("allreduce", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::AllReduceAction().execute(action); });
+ xbt_replay_action_register("alltoall", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::AllToAllAction().execute(action); });
+ xbt_replay_action_register("alltoallv", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::AllToAllVAction().execute(action); });
xbt_replay_action_register("gather", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::GatherAction("gather").execute(action); });
xbt_replay_action_register("scatter", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::ScatterAction().execute(action); });
- xbt_replay_action_register("gatherV", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::GatherVAction("gatherV").execute(action); });
- xbt_replay_action_register("scatterV", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::ScatterVAction().execute(action); });
- xbt_replay_action_register("allGather", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::GatherAction("allGather").execute(action); });
- xbt_replay_action_register("allGatherV", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::GatherVAction("allGatherV").execute(action); });
- xbt_replay_action_register("reduceScatter", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::ReduceScatterAction().execute(action); });
+ xbt_replay_action_register("gatherv", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::GatherVAction("gatherv").execute(action); });
+ xbt_replay_action_register("scatterv", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::ScatterVAction().execute(action); });
+ xbt_replay_action_register("allgather", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::GatherAction("allgather").execute(action); });
+ xbt_replay_action_register("allgatherv", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::GatherVAction("allgatherv").execute(action); });
+ xbt_replay_action_register("reducescatter", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::ReduceScatterAction().execute(action); });
xbt_replay_action_register("compute", [](simgrid::xbt::ReplayAction& action) { simgrid::smpi::replay::ComputeAction().execute(action); });
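With this hunk every action name is registered in lowercase, so a trace file produced by an older tool that still writes "allReduce" or "waitAll" will no longer match. A registry that normalizes lookups would absorb both spellings; this is purely an illustration (`register_action`/`dispatch` are hypothetical helpers, and the real `xbt_replay_action_register` matches names exactly as shown above):

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <functional>
#include <map>
#include <string>

// Registry keyed by lowercase names; lookups are normalized so that a
// legacy "allReduce" still resolves to the handler registered as "allreduce".
static std::map<std::string, std::function<void()>> registry;

static std::string lower(std::string s) {
  std::transform(s.begin(), s.end(), s.begin(),
                 [](unsigned char c) { return std::tolower(c); });
  return s;
}

void register_action(const std::string& name, std::function<void()> fn) {
  registry[lower(name)] = std::move(fn);
}

bool dispatch(const std::string& name) {
  auto it = registry.find(lower(name));
  if (it == registry.end())
    return false;
  it->second();
  return true;
}
```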
//if we have a delayed start, sleep here.
refcount_ = 1; /* refcount_: start > 0 so that this group never gets freed */
}
-Group::Group(int n) : size_(n), rank_to_actor_map_(size_, nullptr), rank_to_index_map_(size_, MPI_UNDEFINED), index_to_rank_map_(size_, MPI_UNDEFINED)
+Group::Group(int n) : size_(n), rank_to_actor_map_(size_, nullptr), index_to_rank_map_(size_, MPI_UNDEFINED)
{
refcount_ = 1;
}
if (origin != MPI_GROUP_NULL && origin != MPI_GROUP_EMPTY) {
size_ = origin->size();
refcount_ = 1;
- rank_to_index_map_ = origin->rank_to_index_map_;
+ // FIXME: cheinrich: There is no such thing as an index any more; the two maps should be removed
index_to_rank_map_ = origin->index_to_rank_map_;
rank_to_actor_map_ = origin->rank_to_actor_map_;
actor_to_rank_map_ = origin->actor_to_rank_map_;
{
if (0 <= rank && rank < size_) {
int index = actor->get_pid();
- rank_to_index_map_[rank] = index;
if (index != MPI_UNDEFINED) {
if ((unsigned)index >= index_to_rank_map_.size())
index_to_rank_map_.resize(index + 1, MPI_UNDEFINED);
list_set CXXFLAGS "-std=gnu++11"
list_set LINKARGS "-std=gnu++11"
if [ "@WIN32@" != "1" ]; then
- # list_add CXXFLAGS "-Dmain=smpi_simulated_main_"
+ # list_add CXXFLAGS "-include" "@includedir@/smpi/smpi_helpers.h"
list_add CXXFLAGS "-fpic"
if [ "x${SMPI_PRETEND_CC}" = "x" ]; then
list_add LINKARGS "-shared"
speedPerPstate.push_back(peer->speed);
simgrid::s4u::Host* host = as->create_host(peer->id.c_str(), &speedPerPstate, 1, nullptr);
- as->setPeerLink(host->pimpl_netpoint, peer->bw_in, peer->bw_out, peer->coord);
+ as->set_peer_link(host->pimpl_netpoint, peer->bw_in, peer->bw_out, peer->coord);
/* Change from the defaults */
if (peer->state_trace)
#include "xbt/automaton.h"
#include <stdio.h> /* printf */
#include <xbt/log.h>
+#include <xbt/sysdep.h>
XBT_LOG_NEW_DEFAULT_SUBCATEGORY(xbt_automaton, xbt, "Automaton");
# include <unistd.h> /* isatty */
#endif
#include <xbt/log.h>
+#include <xbt/sysdep.h>
XBT_LOG_EXTERNAL_DEFAULT_CATEGORY(xbt_automaton);
if (size <= BLOCKSIZE / 2) { // Full block -> Fragment; no need to optimize for time
result = mmalloc(mdp, size);
- if (result != NULL) { // useless (mmalloc never returns NULL), but harmless
- memcpy(result, ptr, requested_size);
- mfree(mdp, ptr);
- return (result);
- }
+ memcpy(result, ptr, requested_size);
+ mfree(mdp, ptr);
+ return (result);
}
/* Full blocks -> Full blocks; see if we can hold it in place. */
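The mrealloc hunk drops a dead NULL check because mmalloc aborts rather than returning NULL on exhaustion. The shrink path itself (allocate a smaller block, copy the surviving payload, free the old block) can be sketched with plain `malloc`, where the check *is* still required; `BLOCKSIZE` here is a stand-in constant, not mmalloc's actual value:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

constexpr std::size_t BLOCKSIZE = 4096; // stand-in for mmalloc's block size

// Shrink an allocation: when the new size fits in a fragment, move the data
// to a freshly allocated smaller block and release the old full block.
void* shrink_realloc(void* ptr, std::size_t new_size) {
  if (new_size <= BLOCKSIZE / 2) {
    void* result = std::malloc(new_size);
    assert(result != nullptr);          // plain malloc can fail; mmalloc aborts
    std::memcpy(result, ptr, new_size); // shrinking: copy only what survives
    std::free(ptr);
    return result;
  }
  return ptr; // still a full block: keep it in place (resize logic elided)
}
```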
add_executable (${x} ${x}/${x}.c)
target_link_libraries(${x} simgrid)
set_target_properties(${x} PROPERTIES RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/${x})
-
- set(tesh_files ${tesh_files} ${CMAKE_CURRENT_SOURCE_DIR}/${x}/${x}.tesh)
- set(teshsuite_src ${teshsuite_src} ${CMAKE_CURRENT_SOURCE_DIR}/${x}/${x}.c)
endforeach()
if(NOT WIN32)
add_executable (${x} ${x}/${x}.c)
target_link_libraries(${x} simgrid)
set_target_properties(${x} PROPERTIES RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/${x})
-
- set(tesh_files ${tesh_files} ${CMAKE_CURRENT_SOURCE_DIR}/${x}/${x}.tesh)
- set(teshsuite_src ${teshsuite_src} ${CMAKE_CURRENT_SOURCE_DIR}/${x}/${x}.c)
endforeach()
endif()
endif()
+foreach(x coll-allgather coll-allgatherv coll-allreduce coll-alltoall coll-alltoallv coll-barrier coll-bcast
+ coll-gather coll-reduce coll-reduce-scatter coll-scatter macro-sample pt2pt-dsend pt2pt-pingpong
+ type-hvector type-indexed type-struct type-vector bug-17132 timers privatization
+ macro-shared macro-partial-shared macro-partial-shared-communication)
+ set(tesh_files ${tesh_files} ${CMAKE_CURRENT_SOURCE_DIR}/${x}/${x}.tesh)
+ set(teshsuite_src ${teshsuite_src} ${CMAKE_CURRENT_SOURCE_DIR}/${x}/${x}.c)
+endforeach()
+
set (teshsuite_src ${teshsuite_src} PARENT_SCOPE)
set(tesh_files ${tesh_files} ${CMAKE_CURRENT_SOURCE_DIR}/coll-allreduce/coll-allreduce-large.tesh
${CMAKE_CURRENT_SOURCE_DIR}/coll-allreduce/coll-allreduce-automatic.tesh
add_executable(${file} ${file}.c)
target_link_libraries(${file} simgrid mtest_c)
endforeach()
-endif()
-if (enable_smpi_MPICH3_testsuite AND HAVE_RAW_CONTEXTS)
- ADD_TEST(test-smpi-mpich3-pt2pt-raw ${CMAKE_COMMAND} -E chdir ${CMAKE_BINARY_DIR}/teshsuite/smpi/mpich3-test/pt2pt ${PERL_EXECUTABLE} ${CMAKE_HOME_DIRECTORY}/teshsuite/smpi/mpich3-test/runtests ${TESH_OPTION} -mpiexec=${CMAKE_BINARY_DIR}/smpi_script/bin/smpirun -srcdir=${CMAKE_HOME_DIRECTORY}/teshsuite/smpi/mpich3-test/pt2pt -tests=testlist -execarg=--cfg=contexts/factory:raw)
- SET_TESTS_PROPERTIES(test-smpi-mpich3-pt2pt-raw PROPERTIES PASS_REGULAR_EXPRESSION "tests passed!")
+ if(HAVE_RAW_CONTEXTS AND (NOT enable_memcheck) AND (NOT enable_address_sanitizer) AND (NOT enable_undefined_sanitizer) AND (NOT enable_thread_sanitizer))
+ set(facto "--cfg=contexts/factory:raw")
+ set(name raw)
+ else()
+ set(facto "--cfg=contexts/factory:thread")
+ set(name thread)
+ endif()
+ ADD_TEST(test-smpi-mpich3-pt2pt-${name} ${CMAKE_COMMAND} -E chdir ${CMAKE_BINARY_DIR}/teshsuite/smpi/mpich3-test/pt2pt ${PERL_EXECUTABLE} ${CMAKE_HOME_DIRECTORY}/teshsuite/smpi/mpich3-test/runtests ${TESH_OPTION} -mpiexec=${CMAKE_BINARY_DIR}/smpi_script/bin/smpirun -srcdir=${CMAKE_HOME_DIRECTORY}/teshsuite/smpi/mpich3-test/pt2pt -tests=testlist -execarg=${facto} )
+ SET_TESTS_PROPERTIES(test-smpi-mpich3-pt2pt-${name} PROPERTIES PASS_REGULAR_EXPRESSION "tests passed!")
+ unset(facto)
+ unset(name)
endif()
foreach(file anyall bottom eagerdt huge_anysrc huge_underflow inactivereq isendself isendirecv isendselfprobe issendselfcancel pingping probenull
src/msg/msg_global.cpp
src/msg/msg_gos.cpp
src/msg/msg_legacy.cpp
- src/msg/msg_mailbox.cpp
src/msg/msg_process.cpp
src/msg/msg_synchro.cpp
src/msg/msg_task.cpp
include/simgrid/plugins/load_balancer.h
include/simgrid/smpi/replay.hpp
include/simgrid/instr.h
+ include/simgrid/mailbox.h
include/simgrid/msg.h
include/simgrid/simdag.h
include/simgrid/modelchecker.h
include/smpi/smpi.h
include/smpi/smpi_main.h
include/smpi/smpi_helpers.h
+ include/smpi/smpi_helpers_internal.h
include/smpi/smpi_extended_traces.h
include/smpi/smpi_extended_traces_fortran.h
include/smpi/forward.hpp