-SimGrid (3.23) NOT RELEASED (Release Target: June 21. 2019, 15:54 UTC)
+----------------------------------------------------------------------------
+
+SimGrid (3.23.3) NOT RELEASED YET (v3.24 expected September 23. 2019, 7:50 UTC)
+
+SMPI:
+ - Fortran bindings for DVFS have been removed.
+
+----------------------------------------------------------------------------
+
+SimGrid (3.23.2) July 8. 2019
+
+Documentation:
+ - Nicer introduction page.
+ - Migrate the "Deploy your application" page to the new doc.
+ - Move the Java documentation to a subtree of MSG.
+
+General:
+ - Rename simgrid::TimeoutError to simgrid::TimeoutException.
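+
+   As a hedged migration sketch (the mailbox name and timeout value are
+   illustrative, and the 3.23-era Mailbox::get(timeout) signature is
+   assumed), only the caught type changes after this rename:
+
+     #include <simgrid/Exception.hpp>
+     #include <simgrid/s4u.hpp>
+
+     // Before this release, the catch clause named simgrid::TimeoutError.
+     static void receive_with_timeout()
+     {
+       auto* mbox = simgrid::s4u::Mailbox::by_name("box"); // hypothetical mailbox
+       try {
+         void* payload = mbox->get(10.0); // wait at most 10 simulated seconds
+         (void)payload; /* ... use and free the payload ... */
+       } catch (const simgrid::TimeoutException&) {
+         /* the receive did not complete within the timeout */
+       }
+     }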
+
+XBT:
+ - Drop xbt_dynar_sort_strings().
+
+Bugs:
+ - Really fix FG#26: Turning off a link should raise NetworkFailureException
+ - FG#27: Wrong exception thrown to wait_any when a link is turned off (see the sketch below)
+ - GH#328: Java: Canceling multiple tasks in a single vm/host
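+
+   To illustrate the two link-failure fixes, here is a hedged S4U sketch
+   (host "Tremblay", mailbox "box", link "link1" and the payload are
+   made-up names for an assumed platform file; the 3.23-era API is used):
+
+     #include <simgrid/s4u.hpp>
+     #include <string>
+
+     XBT_LOG_NEW_DEFAULT_CATEGORY(fg26_sketch, "Messages of this sketch");
+
+     static void sender()
+     {
+       auto* mbox    = simgrid::s4u::Mailbox::by_name("box");
+       auto* payload = new std::string("data");
+       simgrid::s4u::CommPtr comm = mbox->put_async(payload, 1e6 /* bytes */);
+       simgrid::s4u::Link::by_name("link1")->turn_off(); // cut the route mid-transfer
+       try {
+         comm->wait(); // FG#26: raises NetworkFailureException; the same
+                       // exception now also reaches Comm::wait_any (FG#27)
+       } catch (const simgrid::NetworkFailureException&) {
+         XBT_INFO("Communication failed: a link on the route was turned off");
+         delete payload;
+       }
+     }
+
+     int main(int argc, char* argv[])
+     {
+       simgrid::s4u::Engine e(&argc, argv);
+       e.load_platform(argv[1]); // a platform defining "Tremblay" and "link1"
+       simgrid::s4u::Actor::create("sender", simgrid::s4u::Host::by_name("Tremblay"), sender);
+       e.run();
+       return 0;
+     }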
+
+----------------------------------------------------------------------------
+
+SimGrid (3.23) June 25. 2019
+
+The Exotic Solstice Release.
General:
+ - SunOS and Haiku OS support. Because exotic platforms are fun.
- Stop setting random seed with srand48() at initialization.
- Use addr2line as a fallback for stacktraces when backtrace is not available.
- Build option -Denable_documentation is now OFF by default.
+ - The network model 'NS3' was renamed to 'ns-3'.
+
+Python:
+ - SimGrid can now hopefully be installed with pip.
+
+S4U:
+ - wait_any can now be used for asynchronous executions too.
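+
+   A hedged sketch of that new capability, assuming the 3.23-era
+   signature Exec::wait_any(std::vector<ExecPtr>*), which returns the
+   index of a terminated execution (flop counts and host name made up):
+
+     #include <simgrid/s4u.hpp>
+     #include <vector>
+
+     XBT_LOG_NEW_DEFAULT_CATEGORY(waitany_sketch, "Messages of this sketch");
+
+     static void worker()
+     {
+       std::vector<simgrid::s4u::ExecPtr> pending;
+       for (double flops : {1e8, 2e8, 3e8}) // arbitrary amounts of work
+         pending.push_back(simgrid::s4u::this_actor::exec_async(flops));
+
+       while (not pending.empty()) {
+         int changed = simgrid::s4u::Exec::wait_any(&pending); // new in 3.23
+         XBT_INFO("Execution %d is over", changed);
+         pending.erase(pending.begin() + changed); // drop the finished one
+       }
+     }
+
+     int main(int argc, char* argv[])
+     {
+       simgrid::s4u::Engine e(&argc, argv);
+       e.load_platform(argv[1]);
+       simgrid::s4u::Actor::create("worker", simgrid::s4u::Host::by_name("Tremblay"), worker);
+       e.run();
+       return 0;
+     }
+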
XBT:
- New log appenders: stdout and stderr. Use stdout for xbt_help.

SMPI:
- SMPI now reports support of MPI 3.1. This does not mean that SMPI supports every MPI 3 call, but the same was already true of MPI 2.2.
- MPI/IO is now supported over the Storage API (no real files are read or written; storage accesses are simulated). All synchronous calls are supported.
- The MPI interface is now const-correct for input parameters.
+ - MPI_Ireduce, MPI_Iallreduce, MPI_Iscan, MPI_Iexscan, MPI_Ireduce_scatter, and MPI_Ireduce_scatter_block support (see the sketch below).
+ - Fortran bindings for async collectives.
+ - MPI_Comm_get_name, MPI_Comm_set_name, MPI_Count support.
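+
+   A hedged usage sketch for the newly supported nonblocking reductions
+   (compile with smpicxx and run under smpirun; values are arbitrary):
+
+     #include <cstdio>
+     #include <mpi.h>
+
+     int main(int argc, char* argv[])
+     {
+       MPI_Init(&argc, &argv);
+       int rank = 0;
+       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+
+       int local = rank; // arbitrary per-rank contribution
+       int sum   = 0;
+       MPI_Request req;
+
+       MPI_Ireduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD, &req);
+       /* ... independent computation may overlap the collective here ... */
+       MPI_Wait(&req, MPI_STATUS_IGNORE);
+
+       if (rank == 0)
+         std::printf("sum of ranks: %d\n", sum);
+       MPI_Finalize();
+       return 0;
+     }
+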
Model-checker:
- Remove option 'model-check/record': paths are now recorded in all cases.

Models:
- Remove the Lagrange-based models (Reno/Reno2/Vegas). The regular
  models proved to be more accurate than these old experiments.
-Fixed bugs (FG=FramaGit; GH=GitHub):
+Fixed bugs (FG=FramaGit; GH=GitHub -- please prefer FramaGit for new bugs):
- FG#1: Broken link in error messages
- FG#2: missing installation documentation
- FG#3: missing documentation in smpirun
+ - FG#6: Python bindings not available on PyPI
- FG#7: simple cmake call requires doxygen
- FG#8: make python bindings an optional dependency
- FG#10: Can not use MSG_process_set_data from SMPI any more
- FG#13: Installs unstripped file 'bin/graphicator'
- FG#14: Installs the empty directory 'doc/simgrid/html'
- FG#15: Setting -Denable_python=OFF doesn't disable the search for pybind11
+ - FG#17: Dead link in doc (pls_ns3)
+ - FG#20: 'tesh --help' should return 0
+ - FG#21: Documentation link on http://simgrid.org/ broken
+ - FG#22: Debian installation instructions are broken
+ - FG#26: Turning off a link should raise NetworkFailureException
- GH#133: Java: a process can run on a VM even if its host is off
- GH#320: Stacktrace: Avoid the backtrace variant of Boost.Stacktrace?
- GH#326: Valgrind-detected error for join() when energy plugin is activated
- MPI_Alltoallw support
- Partial MPI nonblocking collectives implementation: MPI_Ibcast, MPI_Ibarrier,
MPI_Iallgather, MPI_Iallgatherv, MPI_Ialltoall, MPI_Ialltoallv, MPI_Igather,
- MPI_Igatherv, MPI_Iscatter, MPI_Iscatterv, MPI_Ialltoallw, MPI_Ireduce,
- MPI_Iallreduce, MPI_Iscan, MPI_Iexscan, MPI_Ireduce_scatter,
- MPI_Ireduce_scatter_block, with fortran bindings.
+ MPI_Igatherv, MPI_Iscatter, MPI_Iscatterv, MPI_Ialltoallw.
- MPI_Request_get_status, MPI_Status_set_cancelled, MPI_Status_set_elements
- support, MPI_Comm_get_name, MPI_Comm_set_name
+ support
- Basic implementation of generalized requests (SMPI doesn't
  allow MPI_THREAD_MULTIPLE): MPI_Grequest_complete, MPI_Grequest_start
  (see the sketch below)
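+
+   Generalized requests being uncommon, a hedged sketch of their flow
+   (the no-op callbacks are illustrative; note how the request must be
+   completed from the same thread, as MPI_THREAD_MULTIPLE is unavailable):
+
+     #include <mpi.h>
+
+     // Callbacks invoked by the MPI library; these minimal versions
+     // report an empty, non-cancelled operation.
+     static int query_fn(void* /*extra_state*/, MPI_Status* status)
+     {
+       MPI_Status_set_elements(status, MPI_BYTE, 0);
+       MPI_Status_set_cancelled(status, 0);
+       return MPI_SUCCESS;
+     }
+     static int free_fn(void* /*extra_state*/) { return MPI_SUCCESS; }
+     static int cancel_fn(void* /*extra_state*/, int /*complete*/) { return MPI_SUCCESS; }
+
+     int main(int argc, char* argv[])
+     {
+       MPI_Init(&argc, &argv);
+
+       MPI_Request req;
+       MPI_Grequest_start(query_fn, free_fn, cancel_fn, nullptr, &req);
+
+       // A real code would trigger some background activity here, then:
+       MPI_Grequest_complete(req); // mark the operation as done
+       MPI_Wait(&req, MPI_STATUS_IGNORE);
+
+       MPI_Finalize();
+       return 0;
+     }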