/*! @page uhood_switch Process Synchronizations and Context Switching

@section uhood_switch_DES SimGrid as an Operating System
SimGrid is a discrete event simulator of distributed systems: it does
not simulate the world by small fixed-size steps. Instead, it determines
the date of the next event (such as the end of a communication or of a
computation) and jumps directly to this date.
A number of actors executing user-provided code run on top of the
simulation kernel. The interactions between these actors and the
simulation kernel are very similar to the ones between system
processes and the Operating System (except that the actors and the
simulation kernel share the same address space in a single OS
process).
When an actor needs to interact with the outer world (e.g. to start a
communication), it issues a <i>simcall</i> (simulation call), just
like a system process issues a <i>syscall</i> to interact with its
environment through the Operating System. Any <i>simcall</i> freezes
the actor until it is woken up by the simulation kernel (e.g. when the
communication is finished).
Mimicking the OS behavior may seem over-engineered here, but it is
mandatory for the model checker: the simcalls, representing the actors'
actions, are the transitions of the formal system, and verifying the
system requires manipulating these transitions explicitly. This design
also allows one to run the actors safely in parallel, even if that
feature is less commonly used by our users.
So, the key ideas here are:

- The simulator is a discrete event simulator (event-driven).

- An actor can issue a blocking simcall and will be suspended until
  it is woken up by the simulation kernel (when the operation is
  completed).

- In order to move forward in (simulated) time, the simulation kernel
  needs to know which actions the actors want to do.

- The simulated time will only move forward when all the actors are
  blocked, waiting on a simcall.
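The event-driven core described above can be sketched with a priority queue of dated events: the kernel repeatedly pops the earliest event, jumps the clock to its date, and runs the associated action. This is a toy illustration only; all names (`ToyKernel`, `Event`, `schedule`) are made up for the example and are not SimGrid's API.

```cpp
#include <functional>
#include <queue>
#include <vector>

// A toy discrete event simulator: instead of advancing time by fixed
// steps, we always jump straight to the date of the next event.
struct Event {
  double date;
  std::function<void()> action;
  bool operator>(Event const& other) const { return date > other.date; }
};

class ToyKernel {
  // Min-heap ordered by event date:
  std::priority_queue<Event, std::vector<Event>, std::greater<Event>> events_;
  double now_ = 0.0;
public:
  double now() const { return now_; }
  void schedule(double date, std::function<void()> action) {
    events_.push({date, std::move(action)});
  }
  void run() {
    while (!events_.empty()) {
      Event ev = events_.top();
      events_.pop();
      now_ = ev.date; // jump directly to the date of the next event
      ev.action();    // wake up whatever was waiting for it
    }
  }
};
```

Scheduling events at dates 3 and 42 and calling `run()` executes both actions in date order, with the clock ending at 42 regardless of insertion order.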
This leads to some very important consequences:

- An actor cannot synchronize with another actor using OS-level primitives
  such as `pthread_mutex_lock()` or `std::mutex`. The simulation kernel
  would wait for the actor to issue a simcall and would deadlock. Instead,
  actors must use simulation-level synchronization primitives
  (such as `simcall_mutex_lock()`).

- Similarly, an actor cannot sleep using
  `std::this_thread::sleep_for()`, which waits in the real world. It
  must instead wait in the simulation with
  `simgrid::s4u::this_actor::sleep_for()`, which waits in the
  simulated time.

- The simulation kernel cannot block.
  Only the actors can block (using simulation primitives).
@section uhood_switch_futures Futures and Promises

@subsection uhood_switch_futures_what What is a future?
Futures are a nice classical programming abstraction, present in many
languages. Wikipedia defines a
[future](https://en.wikipedia.org/wiki/Futures_and_promises) as an
object that acts as a proxy for a result that is initially unknown,
usually because the computation of its value is not yet complete. This
concept is thus perfectly adapted to represent in the kernel the
asynchronous operations corresponding to the actors' simcalls.
Futures can be manipulated using two kinds of APIs:

- a <b>blocking API</b> where we wait for the result to be available
  (`result = future.get()`);

- a <b>continuation-based API</b> where we say what should be done
  with the result when the operation completes
  (`future.then(something_to_do_with_the_result)`). This is heavily
  used in ECMAScript, which exhibits the same kind of never-blocking
  asynchronous model as our discrete event simulator.
C++11 includes a generic class (`std::future<T>`) which implements a
blocking API. The continuation-based API is not available in the
standard (yet) but is [already
described](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0159r0.html#futures.unique_future.6)
in the Concurrency Technical Specification.
`Promise`s are the counterparts of `Future`s: `std::future<T>` is used
<em>by the consumer</em> of the result. On the other hand,
`std::promise<T>` is used <em>by the producer</em> of the result. The
producer calls `promise.set_value(42)` or `promise.set_exception(e)`
in order to <em>set the result</em>, which will be made available to
the consumer by `future.get()`.
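The producer/consumer split can be seen with the standard classes directly. This small demo (plain standard C++, independent of SimGrid) shows both the value path and the exception path through a promise/future pair:

```cpp
#include <future>
#include <stdexcept>

// Producer/consumer roles of std::promise and std::future: the
// producer sets the result (or an exception), the consumer retrieves
// it with the blocking get().
int produce_and_consume() {
  std::promise<int> promise;                      // producer side
  std::future<int> future = promise.get_future(); // consumer side
  promise.set_value(42);                          // producer sets the result
  return future.get();                            // consumer retrieves it
}

bool exception_is_propagated() {
  std::promise<int> promise;
  std::future<int> future = promise.get_future();
  promise.set_exception(std::make_exception_ptr(std::runtime_error("boom")));
  try {
    future.get(); // rethrows the stored exception
    return false;
  } catch (std::runtime_error const&) {
    return true;
  }
}
```

Note that `future.get()` rethrows whatever exception the producer stored, which is exactly the error-propagation behavior the kernel futures described below reproduce.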
@subsection uhood_switch_futures_needs Which future do we need?

The blocking API provided by the standard C++11 futures does not suit
our needs since the simulation kernel <em>cannot</em> block, and since
we want to explicitly schedule the actors. Instead, we need to
reimplement a continuation-based API to be used in our event-driven
simulation kernel.

Our futures are based on the C++ Concurrency Technical Specification
API, with a few differences:
- The simulation kernel is single-threaded, so we do not need
  inter-thread synchronization for our futures.

- As the simulation kernel cannot block, `f.wait()` is not meaningful
  in this context.

- Similarly, `future.get()` does an implicit wait. Calling this method in the
  simulation kernel only makes sense if the future is already ready. If the
  future is not ready, this would deadlock the simulator, so an error is
  raised instead.

- We always call the continuations in the simulation loop (and not
  inside the `future.then()` or `promise.set_value()` calls). That
  way, we don't have to fear problems like invariants not being
  restored when the callbacks are called :fearful: or stack overflows
  triggered by deeply nested continuation chains :cold_sweat:. The
  continuations are all called in a nice and predictable place in the
  simulator with a nice and predictable state :relieved:.

- Some features of the standard (such as shared futures) are not
  needed in our context and are thus not considered here.
@subsection uhood_switch_futures_implem Implementing `Future` and `Promise`

The `simgrid::kernel::Future` and `simgrid::kernel::Promise` use a
shared state defined as follows (abridged, elisions marked with
`// [...]`):

```cpp
enum class FutureStatus {
  not_ready,
  ready,
  done,
};

class FutureStateBase : private boost::noncopyable {
public:
  void schedule(simgrid::xbt::Task<void()>&& job);
  void set_exception(std::exception_ptr exception);
  void set_continuation(simgrid::xbt::Task<void()>&& continuation);
  FutureStatus get_status() const;
  bool is_ready() const;
  // [...]
private:
  FutureStatus status_ = FutureStatus::not_ready;
  std::exception_ptr exception_;
  simgrid::xbt::Task<void()> continuation_;
};

// Shared state with a result of type T:
template<class T>
class FutureState : public FutureStateBase {
public:
  void set_value(T value);
  T get();
private:
  boost::optional<T> value_;
};

// Specialization for references:
template<class T>
class FutureState<T&> : public FutureStateBase {
  // [...]
};

// Specialization for void:
template<>
class FutureState<void> : public FutureStateBase {
  // [...]
};
```
Both `Future` and `Promise` have a reference to the shared state:

```cpp
template<class T>
class Future {
  // [...]
private:
  std::shared_ptr<FutureState<T>> state_;
};

template<class T>
class Promise {
  // [...]
private:
  std::shared_ptr<FutureState<T>> state_;
  bool future_get_ = false;
};
```
The crux of `future.then()` is:

```cpp
template<class T>
template<class F>
auto simgrid::kernel::Future<T>::then_no_unwrap(F continuation)
-> Future<decltype(continuation(std::move(*this)))>
{
  typedef decltype(continuation(std::move(*this))) R;

  if (state_ == nullptr)
    throw std::future_error(std::future_errc::no_state);

  auto state = std::move(state_);
  // Create a new future...
  Promise<R> promise;
  Future<R> future = promise.get_future();
  // ...and when the current future is ready...
  state->set_continuation(simgrid::xbt::makeTask(
    [](Promise<R> promise, std::shared_ptr<FutureState<T>> state,
        F continuation) {
      // ...set the new future value by running the continuation.
      Future<T> future(std::move(state));
      simgrid::xbt::fulfillPromise(promise,[&]{
        return continuation(std::move(future));
      });
    },
    std::move(promise), state, std::move(continuation)));
  return std::move(future);
}
```
We added a (much simpler) `future.then_()` method which does not
create a new future:

```cpp
template<class T>
template<class F>
void simgrid::kernel::Future<T>::then_(F continuation)
{
  if (state_ == nullptr)
    throw std::future_error(std::future_errc::no_state);
  // Give shared-ownership to the continuation:
  auto state = std::move(state_);
  state->set_continuation(simgrid::xbt::makeTask(
    std::move(continuation), state));
}
```
The `.get()` delegates to the shared state. As we mentioned previously, an
error is raised if the future is not ready:

```cpp
template<class T>
T simgrid::kernel::Future<T>::get()
{
  if (state_ == nullptr)
    throw std::future_error(std::future_errc::no_state);
  std::shared_ptr<FutureState<T>> state = std::move(state_);
  return state->get();
}

template<class T>
T simgrid::kernel::FutureState<T>::get()
{
  if (status_ != FutureStatus::ready)
    xbt_die("Deadlock: this future is not ready");
  status_ = FutureStatus::done;
  if (exception_) {
    std::exception_ptr exception = std::move(exception_);
    std::rethrow_exception(std::move(exception));
  }
  xbt_assert(this->value_);
  auto result = std::move(this->value_.get());
  this->value_ = boost::optional<T>();
  return std::move(result);
}
```
@section uhood_switch_simcalls Implementing the simcalls

So a simcall is a way for the actor to push a request to the
simulation kernel and yield control until the request is
fulfilled. The performance requirements are very high because
the actors usually do an inordinate amount of simcalls during the
simulation.

As for real syscalls, the basic idea is to write the wanted call and
its arguments in a memory area that is specific to the actor, and to
yield control to the simulation kernel. Once in kernel mode, the
simcalls of each demanding actor are evaluated sequentially in a
strictly reproducible order. This makes the whole simulation
reproducible.
@subsection uhood_switch_simcalls_v2 The historical way

In the very first implementation, everything was written by hand and
highly optimized, making our software very hard to maintain and
evolve. We decided to sacrifice some performance for
maintainability. In a second try (that is still in use in SimGrid
v3.13), a lot of boilerplate code is generated by a Python script,
taking the [list of simcalls](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/simcalls.in)
as input. It looks like this:

```cpp
# This looks like C++ but it is a basic IDL-like language
# (one definition per line) parsed by a python script:

void process_kill(smx_actor_t process);
void process_killall(int reset_pid);
void process_cleanup(smx_actor_t process) [[nohandler]];
void process_suspend(smx_actor_t process) [[block]];
void process_resume(smx_actor_t process);
void process_set_host(smx_actor_t process, sg_host_t dest);
int  process_is_suspended(smx_actor_t process) [[nohandler]];
int  process_join(smx_actor_t process, double timeout) [[block]];
int  process_sleep(double duration) [[block]];

smx_mutex_t mutex_init();
void mutex_lock(smx_mutex_t mutex) [[block]];
int  mutex_trylock(smx_mutex_t mutex);
void mutex_unlock(smx_mutex_t mutex);
```
At runtime, a simcall is represented by a structure containing a simcall
number and its arguments (among other things):

```cpp
struct s_smx_simcall {
  // Simcall number:
  e_smx_simcall_t call;
  // [...]
  // Arguments of the simcall:
  union u_smx_scalar args[11];
  // Result of the simcall:
  union u_smx_scalar result;
  // Some additional stuff:
  // [...]
};
```

with the scalar union type (abridged):

```cpp
union u_smx_scalar {
  // [...]
  unsigned long long ull;
  double             d;
  void*              dp;
  // [...]
};
```
When manually calling the relevant [Python
script](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/simcalls.py),
this generates a bunch of C++ files:

* an enum of all the [simcall numbers](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/popping_enum.h#L19);

* [user-side wrappers](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/popping_bodies.cpp)
  responsible for wrapping the parameters in the `struct s_smx_simcall`
  and for unwrapping the result;

* [accessors](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/popping_accessors.hpp)
  to get/set values of `struct s_smx_simcall`;

* a simulation-kernel-side [big switch](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/popping_generated.cpp#L106)
  handling all the simcall numbers.
Then one has to write the code of the kernel-side handler for the simcall
and the code of the simcall itself (which calls the generated
marshaling/unmarshaling code).
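The marshaling scheme can be illustrated with a self-contained miniature: a call number plus a union of scalar arguments, a user-side wrapper that fills them in, and a kernel-side "big switch" that dispatches. All names here (`ToySimcall`, `handle_simcall`, `simcall_sleep`) are invented for the sketch; the real SimGrid code is generated from `simcalls.in`.

```cpp
// A toy version of the simcall marshalling scheme.
enum class SimcallNum { none, sleep, is_suspended };

// Scalar union: one slot can hold any supported argument type.
union ScalarArg {
  int i;
  double d;
  void* dp;
};

struct ToySimcall {
  SimcallNum call = SimcallNum::none;
  ScalarArg args[2];
  ScalarArg result;
};

// Kernel-side handler: one big switch over all simcall numbers.
void handle_simcall(ToySimcall& s) {
  switch (s.call) {
    case SimcallNum::sleep:
      s.result.d = s.args[0].d; // pretend we slept for that long
      break;
    case SimcallNum::is_suspended:
      s.result.i = 0;           // pretend the actor is not suspended
      break;
    default:
      break;
  }
}

// User-side wrapper: marshal the arguments, trap into the kernel,
// unmarshal the result.
double simcall_sleep(double duration) {
  ToySimcall s;
  s.call = SimcallNum::sleep;
  s.args[0].d = duration;
  handle_simcall(s); // in SimGrid this would yield to the kernel instead
  return s.result.d;
}
```

The direct call to `handle_simcall()` stands in for the context switch: in the real implementation, the actor yields and the kernel later evaluates the pending request.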
In order to simplify this process, we added two generic simcalls which can be
used to execute a function in the simulation kernel:

```cpp
# This one should really be called run_immediate:
void run_kernel(std::function<void()> const* code) [[nohandler]];
void run_blocking(std::function<void()> const* code) [[block,nohandler]];
```
### Immediate simcall

The first one (`simcall_run_kernel()`) executes a function in the simulation
kernel context and returns immediately (without blocking the actor):

```cpp
void simcall_run_kernel(std::function<void()> const& code)
{
  simcall_BODY_run_kernel(&code);
}

template<class F> inline
void simcall_run_kernel(F& f)
{
  simcall_run_kernel(std::function<void()>(std::ref(f)));
}
```
On top of this, we add a wrapper which can be used to return a value of any
type and properly handles exceptions:

```cpp
template<class F>
typename std::result_of<F()>::type kernelImmediate(F&& code)
{
  // If we are in the simulation kernel, we take the fast path and
  // execute the code directly without simcall
  // marshalling/unmarshalling/dispatch:
  if (SIMIX_is_maestro())
    return std::forward<F>(code)();

  // If we are in the application, pass the code to the simulation
  // kernel which executes it for us and reports the result:
  typedef typename std::result_of<F()>::type R;
  simgrid::xbt::Result<R> result;
  simcall_run_kernel([&]{
    xbt_assert(SIMIX_is_maestro(), "Not in maestro");
    simgrid::xbt::fulfillPromise(result, std::forward<F>(code));
  });
  return result.get();
}
```

where [`Result<R>`](#result) can store either a `R` or an exception.
An example of usage:

```cpp
xbt_dict_t Host::properties() {
  return simgrid::simix::kernelImmediate([&] {
    simgrid::surf::HostImpl* surf_host =
      this->extension<simgrid::surf::HostImpl>();
    return surf_host->getProperties();
  });
}
```
### Blocking simcall {#uhood_switch_v2_blocking}

The second generic simcall (`simcall_run_blocking()`) executes a function in
the SimGrid simulation kernel immediately but does not wake up the calling
actor automatically:

```cpp
void simcall_run_blocking(std::function<void()> const& code);

template<class F>
void simcall_run_blocking(F& f)
{
  simcall_run_blocking(std::function<void()>(std::ref(f)));
}
```
The `f` function is expected to set up some callbacks in the simulation
kernel which will wake up the actor (with
`simgrid::simix::unblock(actor)`) when the operation is completed.
This is wrapped in a higher-level primitive as well. The
`kernel_sync()` function expects a function-object which is executed
immediately in the simulation kernel and returns a `Future<T>`. The
simulator blocks the actor and resumes it when the `Future<T>` becomes
ready with its result:

```cpp
template<class F>
auto kernel_sync(F code) -> decltype(code().get())
{
  typedef decltype(code().get()) T;
  if (SIMIX_is_maestro())
    xbt_die("Can't execute blocking call in kernel mode");

  smx_actor_t self = SIMIX_process_self();
  simgrid::xbt::Result<T> result;

  simcall_run_blocking([&result, self, &code]{
    try {
      auto future = code();
      future.then_([&result, self](simgrid::kernel::Future<T> value) {
        // Propagate the result from the future
        // to the simgrid::xbt::Result:
        simgrid::xbt::setPromise(result, value);
        simgrid::simix::unblock(self);
      });
    }
    catch (...) {
      // The code failed immediately. We can wake up the actor
      // immediately with the exception:
      result.set_exception(std::current_exception());
      simgrid::simix::unblock(self);
    }
  });

  // Get the result of the operation (which might be an exception):
  return result.get();
}
```
A contrived example of this would be:

```cpp
int res = simgrid::simix::kernel_sync([&] {
  return kernel_wait_until(30).then(
    [](simgrid::kernel::Future<void> future) {
      return 42;
    }
  );
});
```
### Asynchronous operations {#uhood_switch_v2_async}

We can write the related `kernel_async()` which wakes up the actor immediately
and returns a future to the actor. As this future is used in the actor context,
it is a different future
(`simgrid::simix::Future` instead of `simgrid::kernel::Future`)
which implements a C++11 `std::future` wait-based API:

```cpp
template<class T>
class Future {
public:
  Future() {}
  Future(simgrid::kernel::Future<T> future) : future_(std::move(future)) {}

  bool valid() const { return future_.valid(); }
  T get();
  bool is_ready() const;
  void wait();
private:
  // We wrap an event-based kernel future:
  simgrid::kernel::Future<T> future_;
};
```
The `future.get()` method is implemented as[^getcompared]:

```cpp
template<class T>
T simgrid::simix::Future<T>::get()
{
  if (!valid())
    throw std::future_error(std::future_errc::no_state);
  smx_actor_t self = SIMIX_process_self();
  simgrid::xbt::Result<T> result;
  simcall_run_blocking([this, &result, self]{
    try {
      // When the kernel future is ready...
      this->future_.then_(
        [this, &result, self](simgrid::kernel::Future<T> value) {
          // ... wake up the process with the result of the kernel future.
          simgrid::xbt::setPromise(result, value);
          simgrid::simix::unblock(self);
        });
    }
    catch (...) {
      result.set_exception(std::current_exception());
      simgrid::simix::unblock(self);
    }
  });
  return result.get();
}
```
`kernel_async()` simply :wink: calls `kernelImmediate()` and wraps the
`simgrid::kernel::Future` into a `simgrid::simix::Future`:

```cpp
template<class F>
auto kernel_async(F code)
  -> Future<decltype(code().get())>
{
  typedef decltype(code().get()) T;

  // Execute the code in the simulation kernel and get the kernel future:
  simgrid::kernel::Future<T> future =
    simgrid::simix::kernelImmediate(std::move(code));

  // Wrap the kernel future in a user future:
  return simgrid::simix::Future<T>(std::move(future));
}
```
A contrived example of this would be:

```cpp
simgrid::simix::Future<int> future = simgrid::simix::kernel_async([&] {
  return kernel_wait_until(30).then(
    [](simgrid::kernel::Future<void> future) {
      return 42;
    }
  );
});
// [...]
int res = future.get();
```
`kernel_sync()` could be rewritten as:

```cpp
template<class F>
auto kernel_sync(F code) -> decltype(code().get())
{
  return kernel_async(std::move(code)).get();
}
```

The semantics are equivalent, but this form would require two simcalls
instead of one to do the same job (one in `kernel_async()` and one in
`future.get()`).
## Mutexes and condition variables

### Condition Variables

Similarly, SimGrid already had simulation-level condition variables
which can be exposed using the same API as `std::condition_variable`:
```cpp
class ConditionVariable {
private:
  // [...]
  smx_cond_t cond_;
  ConditionVariable(smx_cond_t cond) : cond_(cond) {}
public:
  ConditionVariable(ConditionVariable const&) = delete;
  ConditionVariable& operator=(ConditionVariable const&) = delete;

  friend void intrusive_ptr_add_ref(ConditionVariable* cond);
  friend void intrusive_ptr_release(ConditionVariable* cond);
  using Ptr = boost::intrusive_ptr<ConditionVariable>;
  static Ptr createConditionVariable();

  void wait(std::unique_lock<Mutex>& lock);
  template<class P>
  void wait(std::unique_lock<Mutex>& lock, P pred);

  // Wait functions taking a plain double as time:

  std::cv_status wait_until(std::unique_lock<Mutex>& lock,
    double timeout_time);
  std::cv_status wait_for(
    std::unique_lock<Mutex>& lock, double duration);
  template<class P>
  bool wait_until(std::unique_lock<Mutex>& lock,
    double timeout_time, P pred);
  template<class P>
  bool wait_for(std::unique_lock<Mutex>& lock,
    double duration, P pred);

  // Wait functions taking a std::chrono time:

  template<class Rep, class Period, class P>
  bool wait_for(std::unique_lock<Mutex>& lock,
    std::chrono::duration<Rep, Period> duration, P pred);
  template<class Rep, class Period>
  std::cv_status wait_for(std::unique_lock<Mutex>& lock,
    std::chrono::duration<Rep, Period> duration);
  template<class Duration>
  std::cv_status wait_until(std::unique_lock<Mutex>& lock,
    const SimulationTimePoint<Duration>& timeout_time);
  template<class Duration, class P>
  bool wait_until(std::unique_lock<Mutex>& lock,
    const SimulationTimePoint<Duration>& timeout_time, P pred);
};
```
We currently accept both `double` (for simplicity and consistency with
the current codebase) and `std::chrono` types (for compatibility with
C++ code) as durations and timepoints. One important thing to notice here is
that `cond.wait_for()` and `cond.wait_until()` work in the simulated time,
not in the real time.
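Supporting both kinds of durations mostly amounts to converting the `std::chrono` type to fractional seconds and delegating to the `double`-based overload. This is a standalone sketch of that conversion; `wait_for_seconds` is a hypothetical stand-in for the simcall-based wait, not SimGrid's actual code:

```cpp
#include <chrono>

// Stand-in for the double-based, simcall-backed wait: here it just
// reports the duration it was asked to wait for.
double wait_for_seconds(double duration) {
  return duration;
}

// chrono-based overload: convert to fractional seconds and delegate.
template<class Rep, class Period>
double wait_for_seconds(std::chrono::duration<Rep, Period> duration) {
  double seconds =
    std::chrono::duration_cast<std::chrono::duration<double>>(duration)
      .count();
  return wait_for_seconds(seconds);
}
```

With this pattern, `wait_for_seconds(std::chrono::milliseconds(1500))` and `wait_for_seconds(1.5)` take the same code path.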
The simple `cond.wait()` and `cond.wait_for()` delegate to
pre-existing simcalls:

```cpp
void ConditionVariable::wait(std::unique_lock<Mutex>& lock)
{
  simcall_cond_wait(cond_, lock.mutex()->mutex_);
}

std::cv_status ConditionVariable::wait_for(
  std::unique_lock<Mutex>& lock, double timeout)
{
  // The simcall uses -1 for "any timeout" but we don't want this:
  if (timeout < 0)
    timeout = 0.0;
  try {
    simcall_cond_wait_timeout(cond_, lock.mutex()->mutex_, timeout);
    return std::cv_status::no_timeout;
  }
  catch (const simgrid::TimeoutException& e) {
    // If the exception was a timeout, we have to take the lock again:
    lock.mutex()->lock();
    return std::cv_status::timeout;
  }
}
```
Other methods are simple wrappers around those two:

```cpp
template<class P>
void ConditionVariable::wait(std::unique_lock<Mutex>& lock, P pred)
{
  while (!pred())
    this->wait(lock);
}

template<class P>
bool ConditionVariable::wait_until(std::unique_lock<Mutex>& lock,
  double timeout_time, P pred)
{
  while (!pred())
    if (this->wait_until(lock, timeout_time) == std::cv_status::timeout)
      return pred();
  return true;
}

template<class P>
bool ConditionVariable::wait_for(std::unique_lock<Mutex>& lock,
  double duration, P pred)
{
  return this->wait_until(lock,
    SIMIX_get_clock() + duration, std::move(pred));
}
```
## Conclusion

We wrote two future implementations based on the `std::future` API:

* the first one is a non-blocking, event-based (`future.then(stuff)`)
  future used inside our (non-blocking, event-based) simulation kernel;

* the second one is a wait-based (`future.get()`) future used in the actors,
  which waits using a simcall.

These futures are used to implement `kernel_sync()` and `kernel_async()`, which
expose asynchronous operations in the simulation kernel to the actors.
In addition, we wrote variations of some other C++ standard library
classes (`SimulationClock`, `Mutex`, `ConditionVariable`) which work in
the simulated world:

* using simulated time;

* using simcalls for synchronization.

Reusing the same API as the C++ standard library is very useful because:

* we use a proven API with clearly defined semantics;

* people already familiar with those APIs can use ours easily;

* users can rely on documentation, examples and tutorials made by other
  people;

* we can reuse generic code with our types (`std::unique_lock`,
  `std::lock_guard`, etc.).
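The last point is easy to demonstrate: any type exposing `lock()`/`unlock()` models `BasicLockable`, so the standard RAII helpers work with it unchanged. Here is a toy mutex (not SimGrid's, which would issue simcalls instead of counting) used with the stock `std::lock_guard`:

```cpp
#include <mutex>

// A toy BasicLockable type: lock()/unlock() just maintain a counter so
// we can observe when the generic std:: helpers call them.
class ToyMutex {
  int lock_count_ = 0;
public:
  void lock()   { ++lock_count_; }
  void unlock() { --lock_count_; }
  int lock_count() const { return lock_count_; }
};

// std::lock_guard is generic code from the standard library, reused
// as-is with our own mutex type:
int locked_during_guard(ToyMutex& m) {
  std::lock_guard<ToyMutex> guard(m);
  return m.lock_count(); // 1 while the guard holds the lock
}
```

After `locked_during_guard()` returns, the guard's destructor has released the lock, so the counter is back to zero without any explicit `unlock()` call.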
This type of approach might be useful for other libraries which define
their own contexts. An example of this is
[Mordor](https://github.com/mozy/mordor), an I/O library using fibers
(cooperative scheduling): it implements cooperative/fiber
[mutex](https://github.com/mozy/mordor/blob/4803b6343aee531bfc3588ffc26a0d0fdf14b274/mordor/fibersynchronization.h#L70)
and [recursive
mutex](https://github.com/mozy/mordor/blob/4803b6343aee531bfc3588ffc26a0d0fdf14b274/mordor/fibersynchronization.h#L105)
classes which are compatible with the
[`BasicLockable`](http://en.cppreference.com/w/cpp/concept/BasicLockable)
concept (see
[`[thread.req.lockable.basic]`](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf#page=1175)
in the C++14 standard).
## Appendix: useful helpers

### Result {#result}

`Result` is like a mix of `std::future` and `std::promise` in a
single object, without shared state and synchronization:

```cpp
template<class T>
class Result {
public:
  bool is_valid() const;
  void set_exception(std::exception_ptr e);
  void set_value(T&& value);
  void set_value(T const& value);
  T get();
private:
  boost::variant<boost::blank, T, std::exception_ptr> value_;
};
```
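The same idea can be sketched in plain C++17 with `std::variant` instead of `boost::variant` (a toy reimplementation for illustration, not SimGrid's actual class): the object is either empty, a value, or a stored exception, and `get()` either returns or rethrows.

```cpp
#include <stdexcept>
#include <utility>
#include <variant>

// A minimal Result-like type: empty, a value, or an exception, with no
// shared state and no synchronization.
template<class T>
class ToyResult {
  std::variant<std::monostate, T, std::exception_ptr> value_;
public:
  bool is_valid() const { return value_.index() != 0; }
  void set_value(T value) { value_ = std::move(value); }
  void set_exception(std::exception_ptr e) { value_ = e; }
  T get() {
    if (auto* e = std::get_if<std::exception_ptr>(&value_))
      std::rethrow_exception(*e);
    // Throws std::bad_variant_access if no value was ever set:
    return std::get<T>(std::move(value_));
  }
};
```

Because there is no shared state, moving a `ToyResult` around is cheap and needs no reference counting, which is exactly why it fits the single-threaded kernel.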
These helpers are useful for dealing with generic future-based code:

```cpp
// Fulfill a promise by executing the given code:
template<class R, class F>
auto fulfillPromise(R& promise, F&& code)
-> decltype(promise.set_value(code()))
{
  try {
    promise.set_value(std::forward<F>(code)());
  }
  catch(...) {
    promise.set_exception(std::current_exception());
  }
}

// Overload for a promise of void:
template<class P, class F>
auto fulfillPromise(P& promise, F&& code)
-> decltype(promise.set_value())
{
  try {
    std::forward<F>(code)();
    promise.set_value();
  }
  catch(...) {
    promise.set_exception(std::current_exception());
  }
}

// Set a promise from the result of a future:
template<class P, class F>
void setPromise(P& promise, F&& future)
{
  fulfillPromise(promise, [&]{ return std::forward<F>(future).get(); });
}
```
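The fulfill-a-promise pattern works with any promise-like type; here it is exercised against the standard `std::promise` (a standalone demo with a hypothetical `toy_fulfill_promise` helper, mirroring the non-void overload above):

```cpp
#include <future>
#include <stdexcept>
#include <utility>

// Run `code` and store either its result or the in-flight exception
// into the promise, so the consumer sees exactly what happened.
template<class P, class F>
void toy_fulfill_promise(P& promise, F&& code) {
  try {
    promise.set_value(std::forward<F>(code)());
  } catch (...) {
    promise.set_exception(std::current_exception());
  }
}
```

On the value path the consumer's `future.get()` returns the computed result; on the exception path it rethrows the captured exception instead.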
### Task

`Task<R(F...)>` is a type-erased callable object similar to
`std::function<R(F...)>` but working for move-only types. It is similar to
`std::packaged_task<R(F...)>` but does not wrap the result in a `std::future<R>`
(it is not <i>packaged</i>).

| |`std::function` |`std::packaged_task`|`simgrid::xbt::Task`
|---------------|----------------|--------------------|--------------------
|Copyable | Yes | No | No
|Movable | Yes | Yes | Yes
|Call | `const` | non-`const` | non-`const`
|Callable | multiple times | once | once
|Sets a promise | No | Yes | No
It could be implemented as:

```cpp
template<class T>
class Task {
private:
  std::packaged_task<T> task_;
public:
  template<class F>
  Task(F&& f) :
    task_(std::forward<F>(f))
  {}
  template<class... ArgTypes>
  auto operator()(ArgTypes... args)
  -> decltype(task_.get_future().get())
  {
    task_(std::forward<ArgTypes>(args)...);
    return task_.get_future().get();
  }
};
```

but we don't need a shared state.
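The table's "movable, call-once" column can be demonstrated with `std::packaged_task` alone: unlike `std::function`, it can hold a move-only callable, such as a lambda capturing a `std::unique_ptr` (standard C++ demo; `run_once` is a name invented for this example):

```cpp
#include <future>
#include <memory>

// std::packaged_task as a move-only, call-once wrapper: it accepts a
// move-only callable, which std::function cannot store at all.
int run_once() {
  auto data = std::make_unique<int>(42);
  std::packaged_task<int()> task(
    [data = std::move(data)] { return *data; }); // move-only lambda
  std::future<int> future = task.get_future();
  task();              // may be invoked only once
  return future.get(); // the result travels through the shared state
}
```

The detour through `get_future()` to recover the return value is exactly the shared-state overhead that `simgrid::xbt::Task` avoids.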
This is useful in order to bind move-only type arguments:

```cpp
template<class F, class... Args>
class TaskImpl {
private:
  F code_;
  std::tuple<Args...> args_;
  typedef decltype(simgrid::xbt::apply(
    std::move(code_), std::move(args_))) result_type;
public:
  TaskImpl(F code, std::tuple<Args...> args) :
    code_(std::move(code)),
    args_(std::move(args))
  {}
  result_type operator()()
  {
    // simgrid::xbt::apply is C++17 std::apply:
    return simgrid::xbt::apply(std::move(code_), std::move(args_));
  }
};

template<class F, class... Args>
auto makeTask(F code, Args... args)
-> Task< decltype(code(std::move(args)...))() >
{
  TaskImpl<F, Args...> task(
    std::move(code), std::make_tuple(std::move(args)...));
  return std::move(task);
}
```
[^getcompared]: You might want to compare this method with
  `simgrid::kernel::Future::get()` we showed previously: the method of
  the kernel future does not block and raises an error if the future
  is not ready; the method of the actor future blocks after having set
  a continuation to wake the actor when the future is ready.

`std::lock()` might kinda work too, but it may not be such a good idea to
use it, as it may use a [<q>deadlock avoidance algorithm such as
try-and-back-off</q>](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf#page=1199).
A back-off would probably uselessly wait in real time instead of simulated
time, and the deadlock avoidance algorithm might add non-determinism
to the simulation, which we would like to avoid.
`std::try_lock()` should be safe to use, though.

*/