NAME

     epoch, epoch_context, epoch_alloc, epoch_free, epoch_enter, epoch_exit, epoch_wait, epoch_call, in_epoch —
     kernel epoch based reclamation

SYNOPSIS

     #include <sys/param.h>
     #include <sys/proc.h>
     #include <sys/epoch.h>

     epoch_t
     epoch_alloc(int flags);

     void
     epoch_free(epoch_t epoch);

     void
     epoch_enter(epoch_t epoch);

     void
     epoch_enter_preempt(epoch_t epoch, epoch_tracker_t et);

     void
     epoch_exit(epoch_t epoch);

     void
     epoch_exit_preempt(epoch_t epoch, epoch_tracker_t et);

     void
     epoch_wait(epoch_t epoch);

     void
     epoch_wait_preempt(epoch_t epoch);

     void
     epoch_call(epoch_t epoch, epoch_context_t ctx, void (*callback)(epoch_context_t));

     int
     in_epoch(epoch_t epoch);

DESCRIPTION

     Epochs are used to guarantee liveness and immutability of data by deferring reclamation and mutation until
     a grace period has elapsed.  Epochs do not have any lock ordering issues.  Entering and leaving an epoch
     section will never block.

     Epochs are allocated with epoch_alloc() and freed with epoch_free().  The flags passed to epoch_alloc()
     determine whether preemption is allowed during a section, as specified by EPOCH_PREEMPT, or not (the
     default).  Threads indicate the start of an epoch critical section by calling epoch_enter(), and the end of
     a critical section by calling epoch_exit().  The _preempt variants can be used around code which requires
     preemption.  A thread can wait until a grace period has elapsed since any thread entered the epoch by
     calling epoch_wait() or epoch_wait_preempt(), depending on the epoch type.  Using the default epoch type
     allows one to use epoch_wait(), which is guaranteed to have much shorter completion times, since no thread
     in a section of such an epoch can be preempted before completing it.  If the thread cannot sleep, or is
     otherwise in a performance sensitive path, it can instead ensure that a grace period has elapsed by calling
     epoch_call() with a callback that performs any work which must wait for the grace period.  Only
     non-sleepable locks can be acquired during a section protected by epoch_enter_preempt() and
     epoch_exit_preempt().  With INVARIANTS one can assert that a thread is in an epoch by using in_epoch().
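
     For instance, a minimal sketch of default (non-preemptible) epoch usage might look as follows; the epoch,
     the list, and the lookup routine here are hypothetical and not part of the API:

     struct foo {
         CK_STAILQ_ENTRY(foo)    f_link;
         struct epoch_context    f_epoch_ctx;
         int                     f_value;
     };

     static CK_STAILQ_HEAD(, foo) foo_head = CK_STAILQ_HEAD_INITIALIZER(foo_head);
     static epoch_t foo_epoch;              /* set up once: foo_epoch = epoch_alloc(0); */

     int
     foo_lookup(int value)
     {
         struct foo *f;
         int found = 0;

         epoch_enter(foo_epoch);
         MPASS(in_epoch(foo_epoch));        /* INVARIANTS-only sanity check */
         CK_STAILQ_FOREACH(f, &foo_head, f_link) {
             if (f->f_value == value) {
                 found = 1;
                 break;
             }
         }
         epoch_exit(foo_epoch);
         return (found);
     }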

     The epoch API currently does not support sleeping in epoch_preempt sections.  A caller must never call
     epoch_wait() from within a section of the same epoch, as this will lead to a deadlock.
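
     For instance, a synchronous free must unlink the item, call epoch_wait() from outside any section of that
     epoch, and only then free the memory.  A sketch, reusing the hypothetical structures above (FOO_WLOCK() and
     M_FOO are likewise illustrative):

     void
     foo_remove_sync(struct foo *f)
     {

         FOO_WLOCK();                       /* hypothetical writer lock */
         CK_STAILQ_REMOVE(&foo_head, f, foo, f_link);
         FOO_WUNLOCK();
         /* May sleep; must not be called from within a foo_epoch section. */
         epoch_wait(foo_epoch);
         free(f, M_FOO);
     }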

     By default mutexes cannot be held across epoch_wait_preempt().  To permit this, the epoch must be allocated
     with EPOCH_LOCKED.  When doing so, one must be cautious not to create a situation where a deadlock is
     possible.  Note that epochs are not a straight replacement for read locks.  Callers must use safe list and
     tailq traversal routines in an epoch (see ck_queue).  When modifying a list referenced from an epoch
     section, safe removal routines must be used, and the caller can no longer modify a list entry in place.  An
     item to be modified must be handled with copy-on-write, and frees must be deferred until after a grace
     period has elapsed.
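
     A copy-on-write update therefore allocates a new copy, publishes it with a safe insertion routine, unlinks
     the old version, and defers its free until a grace period has passed.  A sketch, again with the
     hypothetical names used above:

     static void
     foo_destroy(epoch_context_t ctx)
     {
         struct foo *f;

         f = __containerof(ctx, struct foo, f_epoch_ctx);
         free(f, M_FOO);
     }

     void
     foo_update(struct foo *old, int value)
     {
         struct foo *copy;

         copy = malloc(sizeof(*copy), M_FOO, M_WAITOK);
         *copy = *old;                      /* copy, then modify the copy */
         copy->f_value = value;

         FOO_WLOCK();                       /* serialize writers */
         CK_STAILQ_INSERT_HEAD(&foo_head, copy, f_link);
         CK_STAILQ_REMOVE(&foo_head, old, foo, f_link);
         FOO_WUNLOCK();
         /* Readers may still hold 'old'; free it only after a grace period. */
         epoch_call(foo_epoch, &old->f_epoch_ctx, foo_destroy);
     }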

RETURN VALUES

     in_epoch(curepoch) will return 1 if curthread is in curepoch, 0 otherwise.

CAVEATS

     One must be cautious when using epoch_wait_preempt(): threads are pinned during epoch sections, so if a
     thread in a section is preempted by a higher priority compute-bound thread on that CPU, it can be prevented
     from leaving the section.  Thus the wait time for the waiter is potentially unbounded.

EXAMPLES

     Async free example:

     Thread 1:

     int
     in_pcbladdr(struct inpcb *inp, struct in_addr *faddr, struct in_addr *laddr,
         struct ucred *cred)
     {
         /* ... */
         epoch_enter(net_epoch);
         CK_STAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) {
             sa = ifa->ifa_addr;
             if (sa->sa_family != AF_INET)
                 continue;
             sin = (struct sockaddr_in *)sa;
             if (prison_check_ip4(cred, &sin->sin_addr) == 0) {
                 ia = (struct in_ifaddr *)ifa;
                 break;
             }
         }
         epoch_exit(net_epoch);
         /* ... */
     }
     Thread 2:

     void
     ifa_free(struct ifaddr *ifa)
     {

         if (refcount_release(&ifa->ifa_refcnt))
             epoch_call(net_epoch, &ifa->ifa_epoch_ctx, ifa_destroy);
     }

     void
     if_purgeaddrs(struct ifnet *ifp)
     {

         /* ... */
         IF_ADDR_WLOCK(ifp);
         CK_STAILQ_REMOVE(&ifp->if_addrhead, ifa, ifaddr, ifa_link);
         IF_ADDR_WUNLOCK(ifp);
         ifa_free(ifa);
     }

     Thread 1 traverses the ifaddr list in an epoch.  Thread 2 unlinks it with the corresponding epoch-safe
     macro, marks it as logically free, and then defers deletion.  More general mutation or a synchronous free
     would have to follow a call to epoch_wait().
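
     The _preempt variants take a caller-supplied tracker, conventionally allocated on the stack.  A sketch of a
     preemptible section, assuming an epoch allocated with epoch_alloc(EPOCH_PREEMPT) and a hypothetical
     per-item handler:

     void
     foo_walk_preempt(void)
     {
         struct epoch_tracker et;
         struct foo *f;

         epoch_enter_preempt(foo_preempt_epoch, &et);
         CK_STAILQ_FOREACH(f, &foo_head, f_link) {
             /*
              * Preemption is allowed here, but only non-sleepable
              * locks may be acquired.
              */
             foo_examine(f);
         }
         epoch_exit_preempt(foo_preempt_epoch, &et);
     }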

ERRORS

     None.

SEE ALSO

     locking(9), mtx_pool(9), mutex(9), rwlock(9), sema(9), sleep(9), sx(9), timeout(9)