/*! @page uhood_switch Process Synchronizations and Context Switching

@section uhood_switch_DES SimGrid as an Operating System
SimGrid is a discrete-event simulator of distributed systems: it does
not simulate the world in small fixed-size steps but instead determines
the date of the next event (such as the end of a communication or the
end of a computation) and jumps directly to this date.
A number of actors executing user-provided code run on top of the
simulation kernel. The interactions between these actors and the
simulation kernel are very similar to the ones between system
processes and the Operating System (except that the actors and the
simulation kernel share the same address space in a single OS
process).
When an actor needs to interact with the outer world (e.g. to start a
communication), it issues a <i>simcall</i> (simulation call), just
like a system process issues a <i>syscall</i> to interact with its
environment through the Operating System. Any <i>simcall</i> freezes
the actor until it is woken up by the simulation kernel (e.g. when the
communication is finished).
Mimicking the OS behavior may seem over-engineered here, but it is
mandatory for the model checker. The simcalls, which represent the
actors' actions, are the transitions of the formal system. Verifying
the system requires manipulating these transitions explicitly. This
also allows one to run the actors safely in parallel, even if this is
less commonly used by our users.
So, the key ideas here are:

- The simulator is a discrete-event simulator (event-driven).

- An actor can issue a blocking simcall and will be suspended until
  it is woken up by the simulation kernel (when the operation it is
  waiting for is completed).

- In order to move forward in (simulated) time, the simulation kernel
  needs to know which actions the actors want to do.

- The simulated time will only move forward when all the actors are
  blocked, waiting on a simcall.
This leads to some very important consequences:

- An actor cannot synchronize with another actor using OS-level primitives
  such as `pthread_mutex_lock()` or `std::mutex`. The simulation kernel
  would wait for the actor to issue a simcall and would deadlock. Instead, it
  must use simulation-level synchronization primitives
  (such as `simcall_mutex_lock()`).

- Similarly, an actor cannot sleep using
  `std::this_thread::sleep_for()`, which waits in the real world. It
  must instead wait in the simulation with
  `simgrid::s4u::this_actor::sleep_for()`, which waits in the
  simulated time.

- The simulation kernel cannot block.
  Only the actors can block (using simulation primitives).
@section uhood_switch_futures Futures and Promises

@subsection uhood_switch_futures_what What is a future?
Futures are a nice, classical programming abstraction, present in many
languages. Wikipedia defines a
[future](https://en.wikipedia.org/wiki/Futures_and_promises) as an
object that acts as a proxy for a result that is initially unknown,
usually because the computation of its value is not yet complete. This
concept is thus perfectly suited to represent, in the kernel, the
asynchronous operations corresponding to the actors' simcalls.
Futures can be manipulated using two kinds of APIs:

- a <b>blocking API</b> where we wait for the result to be available
  (`result = future.get()`);

- a <b>continuation-based API</b> where we say what should be done
  with the result when the operation completes
  (`future.then(something_to_do_with_the_result)`). This is heavily
  used in ECMAScript, which exhibits the same kind of never-blocking
  asynchronous model as our discrete-event simulator.
C++11 includes a generic class (`std::future<T>`) which implements the
blocking API. The continuation-based API is not available in the
standard (yet) but is [already
described](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0159r0.html#futures.unique_future.6)
in the Concurrency Technical Specification.
`Promise`s are the counterparts of `Future`s: `std::future<T>` is used
<em>by the consumer</em> of the result, while `std::promise<T>` is
used <em>by the producer</em> of the result. The producer calls
`promise.set_value(42)` or `promise.set_exception(e)` in order to
<em>set the result</em>, which will be made available to the consumer
by `future.get()`.
@subsection uhood_switch_futures_needs Which future do we need?
The blocking API provided by the standard C++11 futures does not suit
our needs, since the simulation kernel <em>cannot</em> block and since
we want to schedule the actors explicitly. Instead, we need to
reimplement a continuation-based API to be used in our event-driven
simulation kernel.

Our futures are based on the C++ Concurrency Technical Specification
API, with a few differences:
- The simulation kernel is single-threaded, so we do not need
  inter-thread synchronization for our futures.

- As the simulation kernel cannot block, `f.wait()` is not meaningful
  in this context.

- Similarly, `future.get()` does an implicit wait. Calling this method in the
  simulation kernel only makes sense if the future is already ready. If the
  future is not ready, this would deadlock the simulator, so an error is
  raised instead.

- We always call the continuations in the simulation loop (and not
  inside the `future.then()` or `promise.set_value()` calls). That
  way, we don't have to fear problems like invariants not being
  restored when the callbacks are called :fearful: or stack overflows
  triggered by deeply nested continuation chains :cold_sweat:. The
  continuations are all called in a nice and predictable place in the
  simulator, with a nice and predictable state :relieved:.

- Some features of the standard (such as shared futures) are not
  needed in our context and are thus not considered here.
@subsection uhood_switch_futures_implem Implementing `Future` and `Promise`
The `simgrid::kernel::Future` and `simgrid::kernel::Promise` classes
use a shared state defined as follows:

@code{.cpp}
enum class FutureStatus {
  not_ready,
  ready,
  done,
};

class FutureStateBase : private boost::noncopyable {
public:
  void schedule(simgrid::xbt::Task<void()>&& job);
  void set_exception(std::exception_ptr exception);
  void set_continuation(simgrid::xbt::Task<void()>&& continuation);
  FutureStatus get_status() const;
  bool is_ready() const;
  // [...]
private:
  FutureStatus status_ = FutureStatus::not_ready;
  std::exception_ptr exception_;
  simgrid::xbt::Task<void()> continuation_;
};

template<class T>
class FutureState : public FutureStateBase {
public:
  void set_value(T value);
  T get();
private:
  boost::optional<T> value_;
};

template<class T>
class FutureState<T&> : public FutureStateBase {
  // [...]
};

template<>
class FutureState<void> : public FutureStateBase {
  // [...]
};
@endcode
Both `Future` and `Promise` have a reference to the shared state:

@code{.cpp}
template<class T>
class Future {
  // [...]
private:
  std::shared_ptr<FutureState<T>> state_;
};

template<class T>
class Promise {
  // [...]
private:
  std::shared_ptr<FutureState<T>> state_;
  bool future_get_ = false;
};
@endcode
The crux of `future.then()` is:

@code{.cpp}
template<class T>
template<class F>
auto simgrid::kernel::Future<T>::then_no_unwrap(F continuation)
  -> Future<decltype(continuation(std::move(*this)))>
{
  typedef decltype(continuation(std::move(*this))) R;
  if (state_ == nullptr)
    throw std::future_error(std::future_errc::no_state);
  auto state = std::move(state_);
  // Create a new future...
  Promise<R> promise;
  Future<R> future = promise.get_future();
  // ...and when the current future is ready...
  state->set_continuation(simgrid::xbt::makeTask(
    [](Promise<R> promise, std::shared_ptr<FutureState<T>> state,
        F continuation) {
      // ...set the new future value by running the continuation.
      Future<T> future(std::move(state));
      simgrid::xbt::fulfillPromise(promise, [&]{
        return continuation(std::move(future));
      });
    },
    std::move(promise), state, std::move(continuation)));
  return future;
}
@endcode
We added a (much simpler) `future.then_()` method which does not
create a new future:

@code{.cpp}
template<class T>
template<class F>
void simgrid::kernel::Future<T>::then_(F continuation)
{
  if (state_ == nullptr)
    throw std::future_error(std::future_errc::no_state);
  // Give shared-ownership to the continuation:
  auto state = std::move(state_);
  state->set_continuation(simgrid::xbt::makeTask(
    std::move(continuation), state));
}
@endcode
The `.get()` method delegates to the shared state. As we mentioned
previously, an error is raised if the future is not ready:

@code{.cpp}
template<class T>
T simgrid::kernel::Future<T>::get()
{
  if (state_ == nullptr)
    throw std::future_error(std::future_errc::no_state);
  std::shared_ptr<FutureState<T>> state = std::move(state_);
  return state->get();
}

template<class T>
T simgrid::kernel::FutureState<T>::get()
{
  xbt_assert(status_ == FutureStatus::ready, "Deadlock: this future is not ready");
  status_ = FutureStatus::done;
  if (exception_) {
    std::exception_ptr exception = std::move(exception_);
    std::rethrow_exception(std::move(exception));
  }
  xbt_assert(this->value_);
  auto result = std::move(this->value_.get());
  this->value_ = boost::optional<T>();
  return result;
}
@endcode
@section uhood_switch_simcalls Implementing the simcalls
So a simcall is a way for the actor to push a request to the
simulation kernel and yield control until the request is
fulfilled. The performance requirements are very high because
the actors usually do an inordinate number of simcalls during the
simulation.

As with real syscalls, the basic idea is to write the wanted call and
its arguments in a memory area that is specific to the actor, and to
yield control to the simulation kernel. Once in kernel mode, the
simcalls of each demanding actor are evaluated sequentially in a
strictly reproducible order. This makes the whole simulation
reproducible.
@subsection uhood_switch_simcalls_v2 The historical way
In the very first implementation, everything was written by hand and
highly optimized, making our software very hard to maintain and
evolve. We decided to sacrifice some performance for
maintainability. In a second try (which is still in use in SimGrid
v3.13), a lot of boilerplate code is generated by a Python script,
taking the [list of simcalls](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/simcalls.in)
as input. It looks like this:
@code
# This looks like C++ but it is a basic IDL-like language
# (one definition per line) parsed by a python script:

void process_kill(smx_actor_t process);
void process_killall(int reset_pid);
void process_cleanup(smx_actor_t process) [[nohandler]];
void process_suspend(smx_actor_t process) [[block]];
void process_resume(smx_actor_t process);
void process_set_host(smx_actor_t process, sg_host_t dest);
int  process_is_suspended(smx_actor_t process) [[nohandler]];
int  process_join(smx_actor_t process, double timeout) [[block]];
int  process_sleep(double duration) [[block]];

smx_mutex_t mutex_init();
void mutex_lock(smx_mutex_t mutex) [[block]];
int  mutex_trylock(smx_mutex_t mutex);
void mutex_unlock(smx_mutex_t mutex);
@endcode
At runtime, a simcall is represented by a structure containing the
simcall number and its arguments (among some other things):

@code{.cpp}
struct s_smx_simcall {
  // [...]
  // Arguments of the simcall:
  union u_smx_scalar args[11];
  // Result of the simcall:
  union u_smx_scalar result;
  // Some additional stuff:
  // [...]
};
@endcode

with a scalar union type holding one member per supported scalar type:

@code{.cpp}
union u_smx_scalar {
  // [...]
  unsigned long long ull;
  // [...]
};
@endcode
When manually calling the relevant [Python
script](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/simcalls.py),
this generates a bunch of C++ files:

* an enum of all the [simcall numbers](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/popping_enum.h#L19);

* [user-side wrappers](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/popping_bodies.cpp)
  responsible for wrapping the parameters in the `struct s_smx_simcall`
  and for unwrapping the result from it;

* [accessors](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/popping_accessors.hpp)
  to get/set the values of `struct s_smx_simcall`;

* a simulation-kernel-side [big switch](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/popping_generated.cpp#L106)
  handling all the simcall numbers.

Then one has to write the kernel-side handler for the simcall
and the code of the simcall itself (which calls the generated
marshaling/unmarshaling code).
In order to simplify this process, we added two generic simcalls which can be
used to execute a function in the simulation kernel:

@code
# This one should really be called run_immediate:
void run_kernel(std::function<void()> const* code) [[nohandler]];
void run_blocking(std::function<void()> const* code) [[block,nohandler]];
@endcode
### Immediate simcall
The first one (`simcall_run_kernel()`) executes a function in the simulation
kernel context and returns immediately (without blocking the actor):

@code{.cpp}
void simcall_run_kernel(std::function<void()> const& code)
{
  simcall_BODY_run_kernel(&code);
}

template<class F> inline
void simcall_run_kernel(F& f)
{
  simcall_run_kernel(std::function<void()>(std::ref(f)));
}
@endcode
On top of this, we add a wrapper which can return a value of any
type and properly handles exceptions:

@code{.cpp}
template<class F>
typename std::result_of<F()>::type kernelImmediate(F&& code)
{
  // If we are in the simulation kernel, we take the fast path and
  // execute the code directly without simcall
  // marshalling/unmarshalling/dispatch:
  if (SIMIX_is_maestro())
    return std::forward<F>(code)();

  // If we are in the application, pass the code to the simulation
  // kernel which executes it for us and reports the result:
  typedef typename std::result_of<F()>::type R;
  simgrid::xbt::Result<R> result;
  simcall_run_kernel([&]{
    xbt_assert(SIMIX_is_maestro(), "Not in maestro");
    simgrid::xbt::fulfillPromise(result, std::forward<F>(code));
  });
  return result.get();
}
@endcode

where [`Result<R>`](#result) can store either a `R` or an exception.
For example:

@code{.cpp}
xbt_dict_t Host::properties() {
  return simgrid::simix::kernelImmediate([&] {
    simgrid::surf::HostImpl* surf_host =
      this->extension<simgrid::surf::HostImpl>();
    return surf_host->getProperties();
  });
}
@endcode
### Blocking simcall {#uhood_switch_v2_blocking}
The second generic simcall (`simcall_run_blocking()`) executes a function in
the SimGrid simulation kernel immediately but does not wake up the calling
actor automatically:

@code{.cpp}
void simcall_run_blocking(std::function<void()> const& code);

template<class F> inline
void simcall_run_blocking(F& f)
{
  simcall_run_blocking(std::function<void()>(std::ref(f)));
}
@endcode

The `f` function is expected to set up some callbacks in the simulation
kernel which will wake up the actor (with
`simgrid::simix::unblock(actor)`) when the operation is completed.
This is wrapped in a higher-level primitive as well. The
`kernel_sync()` function expects a function object which is executed
immediately in the simulation kernel and returns a `Future<T>`. The
simulation kernel blocks the actor and resumes it when the `Future<T>`
becomes ready, with its result:

@code{.cpp}
template<class F>
auto kernel_sync(F code) -> decltype(code().get())
{
  typedef decltype(code().get()) T;
  xbt_assert(not SIMIX_is_maestro(), "Can't execute blocking call in kernel mode");

  smx_actor_t self = SIMIX_process_self();
  simgrid::xbt::Result<T> result;

  simcall_run_blocking([&result, self, &code]{
    try {
      auto future = code();
      future.then_([&result, self](simgrid::kernel::Future<T> value) {
        // Propagate the result from the future
        // to the simgrid::xbt::Result:
        simgrid::xbt::setPromise(result, value);
        simgrid::simix::unblock(self);
      });
    }
    catch (...) {
      // The code failed immediately. We can wake up the actor
      // immediately with the exception:
      result.set_exception(std::current_exception());
      simgrid::simix::unblock(self);
    }
  });

  // Get the result of the operation (which might be an exception):
  return result.get();
}
@endcode
A contrived example of this would be:

@code{.cpp}
int res = simgrid::simix::kernel_sync([&] {
  return kernel_wait_until(30).then(
    [](simgrid::kernel::Future<void> future) {
      return 42;
    });
});
@endcode
### Asynchronous operations {#uhood_switch_v2_async}
We can write the related `kernel_async()` which wakes up the actor immediately
and returns a future to the actor. As this future is used in the actor context,
it is a different future
(`simgrid::simix::Future` instead of `simgrid::kernel::Future`)
which implements a C++11 `std::future`-like wait-based API:

@code{.cpp}
template<class T>
class Future {
public:
  Future() {}
  Future(simgrid::kernel::Future<T> future) : future_(std::move(future)) {}
  bool valid() const { return future_.valid(); }
  T get();
  bool is_ready() const;
  void wait();
private:
  // We wrap an event-based kernel future:
  simgrid::kernel::Future<T> future_;
};
@endcode
The `future.get()` method is implemented as[^getcompared]:

@code{.cpp}
template<class T>
T simgrid::simix::Future<T>::get()
{
  if (!valid())
    throw std::future_error(std::future_errc::no_state);
  smx_actor_t self = SIMIX_process_self();
  simgrid::xbt::Result<T> result;
  simcall_run_blocking([this, &result, self]{
    try {
      // When the kernel future is ready...
      this->future_.then_(
        [&result, self](simgrid::kernel::Future<T> value) {
          // ...wake up the actor with the result of the kernel future.
          simgrid::xbt::setPromise(result, value);
          simgrid::simix::unblock(self);
        });
    }
    catch (...) {
      result.set_exception(std::current_exception());
      simgrid::simix::unblock(self);
    }
  });
  return result.get();
}
@endcode
`kernel_async()` simply :wink: calls `kernelImmediate()` and wraps the
`simgrid::kernel::Future` into a `simgrid::simix::Future`:

@code{.cpp}
template<class F>
auto kernel_async(F code)
  -> Future<decltype(code().get())>
{
  typedef decltype(code().get()) T;

  // Execute the code in the simulation kernel and get the kernel future:
  simgrid::kernel::Future<T> future =
    simgrid::simix::kernelImmediate(std::move(code));

  // Wrap the kernel future in a user future:
  return simgrid::simix::Future<T>(std::move(future));
}
@endcode
A contrived example of this would be:

@code{.cpp}
simgrid::simix::Future<int> future = simgrid::simix::kernel_async([&] {
  return kernel_wait_until(30).then(
    [](simgrid::kernel::Future<void> future) {
      return 42;
    });
});
// [...]
int res = future.get();
@endcode
`kernel_sync()` could be rewritten as:

@code{.cpp}
template<class F>
auto kernel_sync(F code) -> decltype(code().get())
{
  return kernel_async(std::move(code)).get();
}
@endcode

The semantics are equivalent, but this form would require two simcalls
instead of one to do the same job (one in `kernel_async()` and one in
`future.get()`).
## Mutexes and condition variables

### Condition Variables
Similarly, SimGrid already had simulation-level condition variables,
which can be exposed using the same API as `std::condition_variable`:

@code{.cpp}
class ConditionVariable {
private:
  smx_cond_t cond_;
  ConditionVariable(smx_cond_t cond) : cond_(cond) {}
public:
  ConditionVariable(ConditionVariable const&) = delete;
  ConditionVariable& operator=(ConditionVariable const&) = delete;

  friend void intrusive_ptr_add_ref(ConditionVariable* cond);
  friend void intrusive_ptr_release(ConditionVariable* cond);
  using Ptr = boost::intrusive_ptr<ConditionVariable>;
  static Ptr createConditionVariable();

  void wait(std::unique_lock<Mutex>& lock);
  template<class P>
  void wait(std::unique_lock<Mutex>& lock, P pred);

  // Wait functions taking a plain double as time:

  std::cv_status wait_until(std::unique_lock<Mutex>& lock,
    double timeout_time);
  std::cv_status wait_for(
    std::unique_lock<Mutex>& lock, double duration);
  template<class P>
  bool wait_until(std::unique_lock<Mutex>& lock,
    double timeout_time, P pred);
  template<class P>
  bool wait_for(std::unique_lock<Mutex>& lock,
    double duration, P pred);

  // Wait functions taking a std::chrono time:

  template<class Rep, class Period, class P>
  bool wait_for(std::unique_lock<Mutex>& lock,
    std::chrono::duration<Rep, Period> duration, P pred);
  template<class Rep, class Period>
  std::cv_status wait_for(std::unique_lock<Mutex>& lock,
    std::chrono::duration<Rep, Period> duration);
  template<class Duration>
  std::cv_status wait_until(std::unique_lock<Mutex>& lock,
    const SimulationTimePoint<Duration>& timeout_time);
  template<class Duration, class P>
  bool wait_until(std::unique_lock<Mutex>& lock,
    const SimulationTimePoint<Duration>& timeout_time, P pred);
};
@endcode
We currently accept both `double` (for simplicity and consistency with
the current codebase) and `std::chrono` types (for compatibility with
C++ code) as durations and timepoints. One important thing to notice here is
that `cond.wait_for()` and `cond.wait_until()` work in simulated time,
not in real time.

The simple `cond.wait()` and `cond.wait_for()` delegate to
pre-existing simcalls:
@code{.cpp}
void ConditionVariable::wait(std::unique_lock<Mutex>& lock)
{
  simcall_cond_wait(cond_, lock.mutex()->mutex_);
}

std::cv_status ConditionVariable::wait_for(
  std::unique_lock<Mutex>& lock, double timeout)
{
  // The simcall uses -1 for "any timeout" but we don't want this:
  if (timeout < 0)
    timeout = 0.0;
  try {
    simcall_cond_wait_timeout(cond_, lock.mutex()->mutex_, timeout);
    return std::cv_status::no_timeout;
  }
  catch (const simgrid::TimeoutException& e) {
    // If the exception was a timeout, we have to take the lock again:
    lock.mutex()->lock();
    return std::cv_status::timeout;
  }
}
@endcode
Other methods are simple wrappers around these two:

@code{.cpp}
template<class P>
void ConditionVariable::wait(std::unique_lock<Mutex>& lock, P pred)
{
  while (!pred())
    this->wait(lock);
}

template<class P>
bool ConditionVariable::wait_until(std::unique_lock<Mutex>& lock,
  double timeout_time, P pred)
{
  while (!pred())
    if (this->wait_until(lock, timeout_time) == std::cv_status::timeout)
      return pred();
  return true;
}

template<class P>
bool ConditionVariable::wait_for(std::unique_lock<Mutex>& lock,
  double duration, P pred)
{
  return this->wait_until(lock,
    simgrid::s4u::Engine::get_clock() + duration, std::move(pred));
}
@endcode
## Conclusion

We wrote two future implementations based on the `std::future` API:

* the first one is a non-blocking, event-based (`future.then(stuff)`)
  future used inside our (non-blocking, event-based) simulation kernel;

* the second one is a wait-based (`future.get()`) future used in the actors,
  which waits using a simcall.

These futures are used to implement `kernel_sync()` and `kernel_async()`, which
expose asynchronous operations of the simulation kernel to the actors.

In addition, we wrote variations of some other C++ standard library
classes (`SimulationClock`, `Mutex`, `ConditionVariable`) which work in
the simulated world:

* using simulated time;

* using simcalls for synchronization.
Reusing the same API as the C++ standard library is very useful because:

* we use a proven API with clearly defined semantics;

* people already familiar with those APIs can use ours easily;

* users can rely on documentation, examples and tutorials made by other
  people;

* we can reuse generic code with our types (`std::unique_lock`,
  `std::lock_guard`, etc.).
This type of approach might be useful for other libraries which define
their own execution contexts. An example of this is
[Mordor](https://github.com/mozy/mordor), an I/O library using fibers
(cooperative scheduling): it implements a cooperative/fiber
[mutex](https://github.com/mozy/mordor/blob/4803b6343aee531bfc3588ffc26a0d0fdf14b274/mordor/fibersynchronization.h#L70)
and [recursive
mutex](https://github.com/mozy/mordor/blob/4803b6343aee531bfc3588ffc26a0d0fdf14b274/mordor/fibersynchronization.h#L105)
which are compatible with the
[`BasicLockable`](http://en.cppreference.com/w/cpp/concept/BasicLockable)
requirements (see
[`[thread.req.lockable.basic]`](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf#page=1175)
in the C++14 standard).
## Appendix: useful helpers
### Result {#result}

`Result` is like a mix of `std::future` and `std::promise` in a
single object, without shared state and synchronization:

@code{.cpp}
template<class T>
class Result {
public:
  bool is_valid() const;
  void set_exception(std::exception_ptr e);
  void set_value(T&& value);
  void set_value(T const& value);
  T get();
private:
  boost::variant<boost::blank, T, std::exception_ptr> value_;
};
@endcode
These helpers are useful for dealing with generic future-based code:

@code{.cpp}
template<class R, class F>
auto fulfillPromise(R& promise, F&& code)
  -> decltype(promise.set_value(code()))
{
  try {
    promise.set_value(std::forward<F>(code)());
  }
  catch (...) {
    promise.set_exception(std::current_exception());
  }
}

template<class P, class F>
auto fulfillPromise(P& promise, F&& code)
  -> decltype(promise.set_value())
{
  try {
    std::forward<F>(code)();
    promise.set_value();
  }
  catch (...) {
    promise.set_exception(std::current_exception());
  }
}

template<class P, class F>
void setPromise(P& promise, F&& future)
{
  fulfillPromise(promise, [&]{ return std::forward<F>(future).get(); });
}
@endcode
### Task

`Task<R(F...)>` is a type-erased callable object similar to
`std::function<R(F...)>`, but it works for move-only types. It is also
similar to `std::packaged_task<R(F...)>` but does not wrap the result
in a `std::future<R>` (it is not <i>packaged</i>).

| |`std::function` |`std::packaged_task`|`simgrid::xbt::Task`
|---------------|----------------|--------------------|--------------------------
|Copyable | Yes | No | No
|Movable | Yes | Yes | Yes
|Call | `const` | non-`const` | non-`const`
|Callable | multiple times | once | once
|Sets a promise | No | Yes | No
It could be implemented as:

@code{.cpp}
template<class T>
class Task {
private:
  std::packaged_task<T> task_;
public:
  template<class F>
  Task(F f) :
    task_(std::forward<F>(f))
  {}

  template<class... ArgTypes>
  auto operator()(ArgTypes... args)
    -> decltype(task_.get_future().get())
  {
    task_(std::forward<ArgTypes>(args)...);
    return task_.get_future().get();
  }
};
@endcode

but we don't need a shared state.
This is useful in order to bind move-only type arguments:

@code{.cpp}
template<class F, class... Args>
class TaskImpl {
private:
  F code_;
  std::tuple<Args...> args_;
  typedef decltype(simgrid::xbt::apply(
    std::move(code_), std::move(args_))) result_type;
public:
  TaskImpl(F code, std::tuple<Args...> args) :
    code_(std::move(code)),
    args_(std::move(args))
  {}
  result_type operator()()
  {
    // simgrid::xbt::apply is C++17 std::apply:
    return simgrid::xbt::apply(std::move(code_), std::move(args_));
  }
};

template<class F, class... Args>
auto makeTask(F code, Args... args)
  -> Task< decltype(code(std::move(args)...))() >
{
  TaskImpl<F, Args...> task(
    std::move(code), std::make_tuple(std::move(args)...));
  return std::move(task);
}
@endcode
[^getcompared]: You might want to compare this method with
`simgrid::kernel::Future::get()` shown previously: the method of the
kernel future does not block and raises an error if the future is not
ready; the method of the actor future blocks after having set a
continuation to wake up the actor when the future is ready.
`std::lock()` might kind of work too, but it may not be such a good idea to
use it, as it may use a [<q>deadlock avoidance algorithm such as
try-and-back-off</q>](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf#page=1199).
A backoff would probably uselessly wait in real time instead of simulated
time, and the deadlock avoidance algorithm might add non-determinism to
the simulation, which we would like to avoid.
`std::try_lock()` should be safe to use, though.

*/