Buffer Sharing and Synchronization¶
The dma-buf subsystem provides the framework for sharing buffers for hardware (DMA) access across multiple device drivers and subsystems, and for synchronizing asynchronous hardware access.
This is used, for example, by drm “prime” multi-GPU support, but is of course not limited to GPU use cases.
The three main components of this are: (1) dma-buf, representing an sg_table and exposed to userspace as a file descriptor to allow passing between devices, (2) fence, which provides a mechanism to signal when one device has finished access, and (3) reservation, which manages the shared or exclusive fence(s) associated with the buffer.
Reservation Objects¶
The reservation object provides a mechanism to manage shared and exclusive fences associated with a buffer. A reservation object can have one exclusive fence (normally associated with write operations) or N shared fences (read operations) attached. The RCU mechanism is used to protect read access to fences from locked write-side updates.
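For illustration, the typical write-side update looks roughly like the following sketch (the example_ helper is hypothetical and error handling is kept minimal): lock the object, reserve a shared slot, attach the fence, unlock.

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>

/* Hypothetical helper: attach a driver fence to a buffer's reservation
 * object as a shared (read) fence. */
static int example_attach_read_fence(struct dma_resv *resv,
                                     struct dma_fence *fence)
{
        int ret;

        ret = dma_resv_lock(resv, NULL);        /* no ww context, single object */
        if (ret)
                return ret;

        ret = dma_resv_reserve_shared(resv, 1); /* room for one shared fence */
        if (!ret)
                dma_resv_add_shared_fence(resv, fence);

        dma_resv_unlock(resv);
        return ret;
}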
-
void dma_resv_init(struct dma_resv *obj)¶
initialize a reservation object
Parameters
struct dma_resv * obj
the reservation object
-
void dma_resv_fini(struct dma_resv *obj)¶
destroys a reservation object
Parameters
struct dma_resv * obj
the reservation object
-
int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences)¶
Reserve space to add shared fences to a dma_resv.
Parameters
struct dma_resv * obj
reservation object
unsigned int num_fences
number of fences we want to add
Description
Should be called before dma_resv_add_shared_fence(). Must be called with obj->lock held.
RETURNS Zero for success, or -errno
-
void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)¶
Add a fence to a shared slot
Parameters
struct dma_resv * obj
the reservation object
struct dma_fence * fence
the shared fence to add
Description
Add a fence to a shared slot. obj->lock must be held, and dma_resv_reserve_shared() must have been called.
-
void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)¶
Add an exclusive fence.
Parameters
struct dma_resv * obj
the reservation object
struct dma_fence * fence
the exclusive fence to add
Description
Add a fence to the exclusive slot. The obj->lock must be held.
-
int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)¶
Copy all fences from src to dst.
Parameters
struct dma_resv * dst
the destination reservation object
struct dma_resv * src
the source reservation object
Description
Copy all fences from src to dst. dst->lock must be held.
-
int dma_resv_get_fences_rcu(struct dma_resv *obj, struct dma_fence **pfence_excl, unsigned *pshared_count, struct dma_fence ***pshared)¶
Get an object’s shared and exclusive fences without update side lock held
Parameters
struct dma_resv * obj
the reservation object
struct dma_fence ** pfence_excl
the returned exclusive fence (or NULL)
unsigned * pshared_count
the number of shared fences returned
struct dma_fence *** pshared
the array of shared fence ptrs returned (array is krealloc’d to the required size, and must be freed by caller)
Description
Retrieve all fences from the reservation object. If the pointer for the exclusive fence is not specified the fence is put into the array of the shared fences as well. Returns either zero or -ENOMEM.
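As a hedged usage sketch (the example_ function is hypothetical), note that the caller owns a reference on every returned fence as well as the shared array itself:

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>
#include <linux/slab.h>

static void example_dump_fences(struct dma_resv *resv)
{
        struct dma_fence *excl;
        struct dma_fence **shared;
        unsigned int count, i;

        if (dma_resv_get_fences_rcu(resv, &excl, &count, &shared))
                return;         /* -ENOMEM */

        for (i = 0; i < count; i++) {
                pr_info("shared fence: context %llu, seqno %llu\n",
                        shared[i]->context, shared[i]->seqno);
                dma_fence_put(shared[i]);       /* drop the snapshot's reference */
        }
        kfree(shared);                          /* array belongs to the caller */

        if (excl) {
                pr_info("exclusive fence: context %llu\n", excl->context);
                dma_fence_put(excl);
        }
}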
-
long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr, unsigned long timeout)¶
Wait on reservation’s objects shared and/or exclusive fences.
Parameters
struct dma_resv * obj
the reservation object
bool wait_all
if true, wait on all fences, else wait on just exclusive fence
bool intr
if true, do interruptible wait
unsigned long timeout
timeout value in jiffies or zero to return immediately
Description
RETURNS -ERESTARTSYS if interrupted, 0 if the wait timed out, or greater than zero on success.
-
bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)¶
Test if a reservation object’s fences have been signaled.
Parameters
struct dma_resv * obj
the reservation object
bool test_all
if true, test all fences, otherwise only test the exclusive fence
Description
RETURNS true if all fences signaled, else false
- struct dma_resv_list
a list of shared fences
Definition
struct dma_resv_list {
struct rcu_head rcu;
u32 shared_count, shared_max;
struct dma_fence __rcu *shared[];
};
Members
rcu
for internal use
shared_count
number of fences in the shared table
shared_max
for growing shared fence table
shared
shared fence table
- struct dma_resv
a reservation object manages fences for a buffer
Definition
struct dma_resv {
struct ww_mutex lock;
seqcount_t seq;
struct dma_fence __rcu *fence_excl;
struct dma_resv_list __rcu *fence;
};
Members
lock
update side lock
seq
sequence count for managing RCU read-side synchronization
fence_excl
the exclusive fence, if there is one currently
fence
list of current shared fences
-
struct dma_resv_list *dma_resv_get_list(struct dma_resv *obj)¶
get the reservation object’s shared fence list, with update-side lock held
Parameters
struct dma_resv * obj
the reservation object
Description
Returns the shared fence list. Does NOT take references to the fence. The obj->lock must be held.
-
int dma_resv_lock(struct dma_resv *obj, struct ww_acquire_ctx *ctx)¶
lock the reservation object
Parameters
struct dma_resv * obj
the reservation object
struct ww_acquire_ctx * ctx
the locking context
Description
Locks the reservation object for exclusive access and modification. Note that the lock is only against other writers; readers will run concurrently with a writer under RCU. The seqlock is used to notify readers if they overlap with a writer.
As the reservation object may be locked by multiple parties in an undefined order, a ww_acquire_ctx is passed to unwind if a cycle is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation object may be locked by itself by passing NULL as ctx.
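The back-off dance is easiest to see in code. Below is a hedged sketch (the example_ function is hypothetical, loosely following the ww_mutex pattern) that locks two reservation objects in an undefined order and retries through dma_resv_lock_slow() when -EDEADLK is returned:

#include <linux/dma-resv.h>
#include <linux/kernel.h>

static int example_lock_both(struct dma_resv *a, struct dma_resv *b)
{
        struct dma_resv *objs[] = { a, b };
        struct dma_resv *contended = NULL;
        struct ww_acquire_ctx ctx;
        int i, ret;

        ww_acquire_init(&ctx, &reservation_ww_class);
retry:
        for (i = 0; i < ARRAY_SIZE(objs); i++) {
                if (objs[i] == contended) {
                        contended = NULL;       /* already locked on the slowpath */
                        continue;
                }

                ret = dma_resv_lock(objs[i], &ctx);
                if (ret == -EDEADLK) {
                        struct dma_resv *busy = objs[i];

                        /* back off: drop every lock we currently hold ... */
                        while (i--)
                                dma_resv_unlock(objs[i]);
                        if (contended)
                                dma_resv_unlock(contended);

                        /* ... sleep until the contended object is free, then retry */
                        dma_resv_lock_slow(busy, &ctx);
                        contended = busy;
                        goto retry;
                }
        }
        ww_acquire_done(&ctx);

        /* ... add fences to both objects here ... */

        for (i = 0; i < ARRAY_SIZE(objs); i++)
                dma_resv_unlock(objs[i]);
        ww_acquire_fini(&ctx);

        return 0;
}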
-
int dma_resv_lock_interruptible(struct dma_resv *obj, struct ww_acquire_ctx *ctx)¶
lock the reservation object
Parameters
struct dma_resv * obj
the reservation object
struct ww_acquire_ctx * ctx
the locking context
Description
Locks the reservation object interruptibly for exclusive access and modification. Note that the lock is only against other writers; readers will run concurrently with a writer under RCU. The seqlock is used to notify readers if they overlap with a writer.
As the reservation object may be locked by multiple parties in an undefined order, a ww_acquire_ctx is passed to unwind if a cycle is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation object may be locked by itself by passing NULL as ctx.
-
void dma_resv_lock_slow(struct dma_resv *obj, struct ww_acquire_ctx *ctx)¶
slowpath lock the reservation object
Parameters
struct dma_resv * obj
the reservation object
struct ww_acquire_ctx * ctx
the locking context
Description
Acquires the reservation object after a die case. This function will sleep until the lock becomes available. See dma_resv_lock() as well.
-
int dma_resv_lock_slow_interruptible(struct dma_resv *obj, struct ww_acquire_ctx *ctx)¶
slowpath lock the reservation object, interruptible
Parameters
struct dma_resv * obj
the reservation object
struct ww_acquire_ctx * ctx
the locking context
Description
Acquires the reservation object interruptibly after a die case. This function will sleep until the lock becomes available. See dma_resv_lock_interruptible() as well.
-
bool dma_resv_trylock(struct dma_resv *obj)¶
trylock the reservation object
Parameters
struct dma_resv * obj
the reservation object
Description
Tries to lock the reservation object for exclusive access and modification. Note that the lock is only against other writers; readers will run concurrently with a writer under RCU. The seqlock is used to notify readers if they overlap with a writer.
Also note that since no context is provided, no deadlock protection is possible.
Returns true if the lock was acquired, false otherwise.
-
bool dma_resv_is_locked(struct dma_resv *obj)¶
is the reservation object locked
Parameters
struct dma_resv * obj
the reservation object
Description
Returns true if the mutex is locked, false if unlocked.
-
struct ww_acquire_ctx *dma_resv_locking_ctx(struct dma_resv *obj)¶
returns the context used to lock the object
Parameters
struct dma_resv * obj
the reservation object
Description
Returns the context used to lock a reservation object or NULL if no context was used or the object is not locked at all.
-
void dma_resv_unlock(struct dma_resv *obj)¶
unlock the reservation object
Parameters
struct dma_resv * obj
the reservation object
Description
Unlocks the reservation object following exclusive access.
-
struct dma_fence *dma_resv_get_excl(struct dma_resv *obj)¶
get the reservation object’s exclusive fence, with update-side lock held
Parameters
struct dma_resv * obj
the reservation object
Description
Returns the exclusive fence (if any). Does NOT take a reference. Writers must hold obj->lock, readers may only hold a RCU read side lock.
RETURNS The exclusive fence or NULL
-
struct dma_fence *dma_resv_get_excl_rcu(struct dma_resv *obj)¶
get the reservation object’s exclusive fence, without lock held.
Parameters
struct dma_resv * obj
the reservation object
Description
If there is an exclusive fence, this atomically increments its reference count and returns it.
RETURNS The exclusive fence or NULL if none
DMA Fences¶
DMA fences, represented by struct dma_fence, are the kernel internal synchronization primitive for DMA operations like GPU rendering, video encoding/decoding, or displaying buffers on a screen.
A fence is initialized using dma_fence_init() and completed using dma_fence_signal(). Fences are associated with a context, allocated through dma_fence_context_alloc(), and all fences on the same context are fully ordered.
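A minimal sketch of that lifecycle, assuming a hypothetical driver fence (all example_ names are illustrative, not an in-tree API):

#include <linux/dma-fence.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical driver fence; base must be the first member so the default
 * release (dma_fence_free()) frees the whole allocation. */
struct example_fence {
        struct dma_fence base;
        spinlock_t lock;
};

static const char *example_get_driver_name(struct dma_fence *fence)
{
        return "example";
}

static const char *example_get_timeline_name(struct dma_fence *fence)
{
        return "example-timeline";
}

static const struct dma_fence_ops example_fence_ops = {
        .get_driver_name = example_get_driver_name,
        .get_timeline_name = example_get_timeline_name,
};

static struct dma_fence *example_fence_create(u64 context, u64 seqno)
{
        struct example_fence *f = kzalloc(sizeof(*f), GFP_KERNEL);

        if (!f)
                return NULL;

        spin_lock_init(&f->lock);
        dma_fence_init(&f->base, &example_fence_ops, &f->lock, context, seqno);
        return &f->base;
}

static void example_lifecycle(void)
{
        u64 ctx = dma_fence_context_alloc(1);   /* one context per timeline */
        struct dma_fence *fence = example_fence_create(ctx, 1);

        if (!fence)
                return;

        /* ... publish the fence and kick off the asynchronous work ... */

        dma_fence_signal(fence);        /* on completion of the work */
        dma_fence_put(fence);           /* drop the creator's reference */
}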
Since the purpose of fences is to facilitate cross-device and cross-application synchronization, there are multiple ways to use one:
- Individual fences can be exposed as a sync_file, accessed as a file descriptor from userspace, created by calling sync_file_create(). This is called explicit fencing, since userspace passes around explicit synchronization points.
- Some subsystems also have their own explicit fencing primitives, like drm_syncobj. Compared to sync_file, a drm_syncobj allows the underlying fence to be updated.
- Then there's also implicit fencing, where the synchronization points are implicitly passed around as part of shared dma_buf instances. Such implicit fences are stored in struct dma_resv through the dma_buf.resv pointer.
DMA Fences Functions Reference¶
-
struct dma_fence *dma_fence_get_stub(void)¶
return a signaled fence
Parameters
void
no arguments
Description
Return a stub fence which is already signaled.
-
u64 dma_fence_context_alloc(unsigned num)¶
allocate an array of fence contexts
Parameters
unsigned num
amount of contexts to allocate
Description
This function will return the first index of the number of fence contexts allocated. The fence context is used for setting dma_fence.context to a unique number by passing the context to dma_fence_init().
-
int dma_fence_signal_locked(struct dma_fence *fence)¶
signal completion of a fence
Parameters
struct dma_fence * fence
the fence to signal
Description
Signal completion for software callbacks on a fence, this will unblock dma_fence_wait() calls and run all the callbacks added with dma_fence_add_callback(). Can be called multiple times, but since a fence can only go from the unsignaled to the signaled state and not back, it will only be effective the first time.
Unlike dma_fence_signal(), this function must be called with dma_fence.lock held.
Returns 0 on success and a negative error value when fence has been signalled already.
-
int dma_fence_signal(struct dma_fence *fence)¶
signal completion of a fence
Parameters
struct dma_fence * fence
the fence to signal
Description
Signal completion for software callbacks on a fence, this will unblock dma_fence_wait() calls and run all the callbacks added with dma_fence_add_callback(). Can be called multiple times, but since a fence can only go from the unsignaled to the signaled state and not back, it will only be effective the first time.
Returns 0 on success and a negative error value when fence has been signalled already.
-
signed long dma_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)¶
sleep until the fence gets signaled or until timeout elapses
Parameters
struct dma_fence * fence
the fence to wait on
bool intr
if true, do an interruptible wait
signed long timeout
timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
Description
Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the remaining timeout in jiffies on success. Other error values may be returned on custom implementations.
Performs a synchronous wait on this fence. It is assumed the caller directly or indirectly (buf-mgr between reservation and committing) holds a reference to the fence, otherwise the fence might be freed before return, resulting in undefined behavior.
See also dma_fence_wait() and dma_fence_wait_any_timeout().
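A short, hedged usage sketch mapping those return values (the example_ name is hypothetical):

#include <linux/dma-fence.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

static int example_wait_one_second(struct dma_fence *fence)
{
        signed long ret;

        ret = dma_fence_wait_timeout(fence, true, msecs_to_jiffies(1000));
        if (ret == 0)
                return -ETIMEDOUT;      /* timed out */
        if (ret < 0)
                return ret;             /* -ERESTARTSYS or a custom error */
        return 0;                       /* signaled; ret holds the remaining jiffies */
}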
-
void dma_fence_release(struct kref *kref)¶
clean up a fence
Parameters
struct kref * kref
dma_fence.refcount
Description
This is the default release function for dma_fence. Drivers shouldn't call this directly, but instead call dma_fence_put().
-
void dma_fence_free(struct dma_fence *fence)¶
default release function for dma_fence
Parameters
struct dma_fence * fence
fence to release
Description
This is the default implementation for dma_fence_ops.release. It calls kfree_rcu() on fence.
-
void dma_fence_enable_sw_signaling(struct dma_fence *fence)¶
enable signaling on fence
Parameters
struct dma_fence * fence
the fence to enable
Description
This will request for sw signaling to be enabled, to make the fence complete as soon as possible. This calls dma_fence_ops.enable_signaling internally.
-
int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb, dma_fence_func_t func)¶
add a callback to be called when the fence is signaled
Parameters
struct dma_fence * fence
the fence to wait on
struct dma_fence_cb * cb
the callback to register
dma_fence_func_t func
the function to call
Description
cb will be initialized by dma_fence_add_callback(), no initialization by the caller is required. Any number of callbacks can be registered to a fence, but a callback can only be registered to one fence at a time.
Note that the callback can be called from an atomic context. If fence is already signaled, this function will return -ENOENT (and not call the callback).
Add a software callback to the fence. The same refcount restrictions apply as for dma_fence_wait(); however, the caller doesn't need to keep a refcount to fence after dma_fence_add_callback() has returned: when software access is enabled, the creator of the fence is required to keep the fence alive until after it signals with dma_fence_signal(). The callback itself can be called from irq context.
Returns 0 in case of success, -ENOENT if the fence is already signaled and -EINVAL in case of error.
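As a hedged sketch (example_ names are hypothetical), a callback that merely wakes a sleeping task could look like the following; since the callback may run in irq context, only irq-safe primitives such as complete() are used:

#include <linux/dma-fence.h>
#include <linux/completion.h>
#include <linux/kernel.h>

struct example_waiter {
        struct dma_fence_cb cb;
        struct completion done;
};

/* Runs from the fence's signalling path, possibly in irq context. */
static void example_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
{
        struct example_waiter *w = container_of(cb, struct example_waiter, cb);

        complete(&w->done);
}

static void example_wait_via_callback(struct dma_fence *fence)
{
        struct example_waiter w;

        init_completion(&w.done);

        if (dma_fence_add_callback(fence, &w.cb, example_fence_cb))
                return;         /* -ENOENT: already signaled, callback not queued */

        wait_for_completion(&w.done);
}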
-
int dma_fence_get_status(struct dma_fence *fence)¶
returns the status upon completion
Parameters
struct dma_fence * fence
the dma_fence to query
Description
This wraps dma_fence_get_status_locked() to return the error status condition on a signaled fence. See dma_fence_get_status_locked() for more details.
Returns 0 if the fence has not yet been signaled, 1 if the fence has been signaled without an error condition, or a negative error code if the fence has been completed with an error.
-
bool dma_fence_remove_callback(struct dma_fence *fence, struct dma_fence_cb *cb)¶
remove a callback from the signaling list
Parameters
struct dma_fence * fence
the fence to wait on
struct dma_fence_cb * cb
the callback to remove
Description
Remove a previously queued callback from the fence. This function returns true if the callback is successfully removed, or false if the fence has already been signaled.
WARNING: Cancelling a callback should only be done if you really know what you’re doing, since deadlocks and race conditions could occur all too easily. For this reason, it should only ever be done on hardware lockup recovery, with a reference held to the fence.
Behaviour is undefined if cb has not been added to fence using dma_fence_add_callback() beforehand.
-
signed long dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)¶
default sleep until the fence gets signaled or until timeout elapses
Parameters
struct dma_fence * fence
the fence to wait on
bool intr
if true, do an interruptible wait
signed long timeout
timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
Description
Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the remaining timeout in jiffies on success. If timeout is zero the value one is returned if the fence is already signaled for consistency with other functions taking a jiffies timeout.
-
signed long dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count, bool intr, signed long timeout, uint32_t *idx)¶
sleep until any fence gets signaled or until timeout elapses
Parameters
struct dma_fence ** fences
array of fences to wait on
uint32_t count
number of fences to wait on
bool intr
if true, do an interruptible wait
signed long timeout
timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
uint32_t * idx
used to store the first signaled fence index, meaningful only on positive return
Description
Returns -EINVAL on custom fence wait implementation, -ERESTARTSYS if interrupted, 0 if the wait timed out, or the remaining timeout in jiffies on success.
Synchronous waits for the first fence in the array to be signaled. The caller needs to hold a reference to all fences in the array, otherwise a fence might be freed before return, resulting in undefined behavior.
See also dma_fence_wait() and dma_fence_wait_timeout().
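A hedged usage sketch (the example_ name is hypothetical) waiting for whichever of two fences signals first:

#include <linux/dma-fence.h>
#include <linux/sched.h>
#include <linux/errno.h>

static int example_wait_first(struct dma_fence *a, struct dma_fence *b)
{
        struct dma_fence *fences[] = { a, b };
        uint32_t first;
        signed long ret;

        ret = dma_fence_wait_any_timeout(fences, 2, true,
                                         MAX_SCHEDULE_TIMEOUT, &first);
        if (ret < 0)
                return ret;             /* -ERESTARTSYS or -EINVAL */
        if (ret == 0)
                return -ETIMEDOUT;      /* not reachable with MAX_SCHEDULE_TIMEOUT */

        pr_info("fence %u signaled first\n", first);
        return 0;
}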
-
void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops, spinlock_t *lock, u64 context, u64 seqno)¶
Initialize a custom fence.
Parameters
struct dma_fence * fence
the fence to initialize
const struct dma_fence_ops * ops
the dma_fence_ops for operations on this fence
spinlock_t * lock
the irqsafe spinlock to use for locking this fence
u64 context
the execution context this fence is run on
u64 seqno
a linear increasing sequence number for this context
Description
Initializes an allocated fence, the caller doesn't have to keep its refcount after committing with this fence, but it will need to hold a refcount again if dma_fence_ops.enable_signaling gets called.
context and seqno are used for easy comparison between fences, allowing to check which fence is later by simply using dma_fence_later().
- struct dma_fence
software synchronization primitive
Definition
struct dma_fence {
spinlock_t *lock;
const struct dma_fence_ops *ops;
union {
struct list_head cb_list;
ktime_t timestamp;
struct rcu_head rcu;
};
u64 context;
u64 seqno;
unsigned long flags;
struct kref refcount;
int error;
};
Members
lock
spin_lock_irqsave used for locking
ops
dma_fence_ops associated with this fence
{unnamed_union}
anonymous
cb_list
list of all callbacks to call
timestamp
Timestamp when the fence was signaled.
rcu
used for releasing fence with kfree_rcu
context
execution context this fence belongs to, returned by dma_fence_context_alloc()
seqno
the sequence number of this fence inside the execution context, can be compared to decide which fence would be signaled later.
flags
A mask of DMA_FENCE_FLAG_* defined below
refcount
refcount for this fence
error
Optional, only valid if < 0, must be set before calling dma_fence_signal, indicates that the fence has completed with an error.
Description
the flags member must be manipulated and read using the appropriate atomic ops (bit_*), so taking the spinlock will not be needed most of the time.
DMA_FENCE_FLAG_SIGNALED_BIT - fence is already signaled
DMA_FENCE_FLAG_TIMESTAMP_BIT - timestamp recorded for fence signaling
DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called
DMA_FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the implementer of the fence for its own purposes. Can be used in different ways by different fence implementers, so do not rely on this.
Since atomic bitops are used, this is not guaranteed to be the case. Particularly, if the bit was set, but dma_fence_signal was called right before this bit was set, it would have been able to set the DMA_FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called. Adding a check for DMA_FENCE_FLAG_SIGNALED_BIT after setting DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that after dma_fence_signal was called, any enable_signaling call will have either been completed, or never called at all.
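The race-closing check described above lives in the dma-fence core rather than in drivers; purely as an illustration of the bitops pattern, it amounts to something like this (the example_ function is hypothetical):

#include <linux/dma-fence.h>

/* Illustration only: enable signaling exactly once, without missing a
 * dma_fence_signal() that races with setting the enable bit. */
static void example_enable_signaling_once(struct dma_fence *fence)
{
        if (test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags))
                return;         /* someone else already enabled signaling */

        if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
                return;         /* signaled before/while the enable bit was set */

        /* ... safe to invoke dma_fence_ops.enable_signaling here ... */
}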
- struct dma_fence_cb
callback for dma_fence_add_callback()
Definition
struct dma_fence_cb {
struct list_head node;
dma_fence_func_t func;
};
Members
node
used by dma_fence_add_callback() to append this struct to fence::cb_list
func
dma_fence_func_t to call
Description
This struct will be initialized by dma_fence_add_callback(), additional data can be passed along by embedding dma_fence_cb in another struct.
- struct dma_fence_ops
operations implemented for fence
Definition
struct dma_fence_ops {
bool use_64bit_seqno;
const char * (*get_driver_name)(struct dma_fence *fence);
const char * (*get_timeline_name)(struct dma_fence *fence);
bool (*enable_signaling)(struct dma_fence *fence);
bool (*signaled)(struct dma_fence *fence);
signed long (*wait)(struct dma_fence *fence, bool intr, signed long timeout);
void (*release)(struct dma_fence *fence);
void (*fence_value_str)(struct dma_fence *fence, char *str, int size);
void (*timeline_value_str)(struct dma_fence *fence, char *str, int size);
};
Members
use_64bit_seqno
True if this dma_fence implementation uses 64bit seqno, false otherwise.
get_driver_name
Returns the driver name. This is a callback to allow drivers to compute the name at runtime, without having to store it permanently for each fence, or build a cache of some sort.
This callback is mandatory.
get_timeline_name
Return the name of the context this fence belongs to. This is a callback to allow drivers to compute the name at runtime, without having to store it permanently for each fence, or build a cache of some sort.
This callback is mandatory.
enable_signaling
Enable software signaling of fence.
For fence implementations that have the capability for hw->hw signaling, they can implement this op to enable the necessary interrupts, or insert commands into cmdstream, etc, to avoid these costly operations for the common case where only hw->hw synchronization is required. This is called in the first dma_fence_wait() or dma_fence_add_callback() path to let the fence implementation know that there is another driver waiting on the signal (ie. hw->sw case).
This function can be called from atomic context, but not from irq context, so normal spinlocks can be used.
A return value of false indicates the fence already passed, or some failure occurred that made it impossible to enable signaling. True indicates successful enabling.
dma_fence.error may be set in enable_signaling, but only when false is returned.
Since many implementations can call dma_fence_signal() even before enable_signaling has been called, there is a race window where the dma_fence_signal() might result in the final fence reference being released and its memory freed. To avoid this, implementations of this callback should grab their own reference using dma_fence_get(), to be released when the fence is signalled (through e.g. the interrupt handler); a sketch of this pattern follows the dma_fence_ops reference below.
This callback is optional. If this callback is not present, then the driver must always have signaling enabled.
signaled
Peek whether the fence is signaled, as a fastpath optimization for e.g. dma_fence_wait() or dma_fence_add_callback(). Note that this callback does not need to make any guarantees beyond that a fence once indicated as signalled must always return true from this callback. This callback may return false even if the fence has completed already, in this case information hasn't propagated through the system yet. See also dma_fence_is_signaled().
May set dma_fence.error if returning true.
This callback is optional.
wait
Custom wait implementation, defaults to dma_fence_default_wait() if not set.
The dma_fence_default_wait implementation should work for any fence, as long as enable_signaling works correctly. This hook allows drivers to have an optimized version for the case where a process context is already available, e.g. if enable_signaling for the general case needs to set up a worker thread.
Must return -ERESTARTSYS if intr = true and the wait was interrupted, the remaining jiffies if the fence has signaled, or 0 if the wait timed out. Can also return other error values on custom implementations, which should be treated as if the fence is signaled. For example a hardware lockup could be reported like that.
This callback is optional.
release
Called on destruction of fence to release additional resources. Can be called from irq context. This callback is optional. If it is NULL, then dma_fence_free() is instead called as the default implementation.
fence_value_str
Callback to fill in free-form debug info specific to this fence, like the sequence number.
This callback is optional.
timeline_value_str
Fills in the current value of the timeline as a string, like the sequence number. Note that the specific fence passed to this function should not matter, drivers should only use it to look up the corresponding timeline structures.
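To make the enable_signaling rules above concrete, here is a hedged sketch: the example_ structure and the hardware helpers (declared but not implemented) are hypothetical; the point is the dma_fence_get()/dma_fence_put() pairing between enable_signaling and the interrupt handler:

#include <linux/dma-fence.h>
#include <linux/kernel.h>
#include <linux/spinlock.h>

struct example_irq_fence {
        struct dma_fence base;
        spinlock_t lock;
};

/* Hypothetical hardware helpers, not implemented here. */
static bool example_hw_done(struct example_irq_fence *f);
static void example_hw_arm_irq(struct example_irq_fence *f);

static bool example_enable_signaling(struct dma_fence *fence)
{
        struct example_irq_fence *f =
                container_of(fence, struct example_irq_fence, base);

        if (example_hw_done(f))
                return false;           /* fence already passed */

        dma_fence_get(fence);           /* dropped again from the irq handler */
        example_hw_arm_irq(f);
        return true;
}

/* Called from the (hypothetical) completion interrupt. */
static void example_irq_handler(struct example_irq_fence *f)
{
        dma_fence_signal(&f->base);
        dma_fence_put(&f->base);        /* pairs with the get in enable_signaling */
}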
-
void dma_fence_put(struct dma_fence *fence)¶
decreases refcount of the fence
Parameters
struct dma_fence * fence
fence to reduce refcount of
-
struct dma_fence *dma_fence_get(struct dma_fence *fence)¶
increases refcount of the fence
Parameters
struct dma_fence * fence
fence to increase refcount of
Description
Returns the same fence, with refcount increased by 1.
-
struct dma_fence *dma_fence_get_rcu(struct dma_fence *fence)¶
get a fence from a dma_resv_list with rcu read lock
Parameters
struct dma_fence * fence
fence to increase refcount of
Description
Function returns NULL if no refcount could be obtained, or the fence.
-
struct dma_fence *dma_fence_get_rcu_safe(struct dma_fence __rcu **fencep)¶
acquire a reference to an RCU tracked fence
Parameters
struct dma_fence __rcu ** fencep
pointer to fence to increase refcount of
Description
Function returns NULL if no refcount could be obtained, or the fence. This function handles acquiring a reference to a fence that may be reallocated within the RCU grace period (such as with SLAB_TYPESAFE_BY_RCU), so long as the caller is using RCU on the pointer to the fence.
An alternative mechanism is to employ a seqlock to protect a bunch of fences, such as used by struct dma_resv. When using a seqlock, the seqlock must be taken before and checked after a reference to the fence is acquired (as shown here).
The caller is required to hold the RCU read lock.
-
bool dma_fence_is_signaled_locked(struct dma_fence *fence)¶
Return an indication if the fence is signaled yet.
Parameters
struct dma_fence * fence
the fence to check
Description
Returns true if the fence was already signaled, false if not. Since this function doesn't enable signaling, it is not guaranteed to ever return true if dma_fence_add_callback(), dma_fence_wait() or dma_fence_enable_sw_signaling() haven't been called before.
This function requires dma_fence.lock to be held.
See also dma_fence_is_signaled().
-
bool dma_fence_is_signaled(struct dma_fence *fence)¶
Return an indication if the fence is signaled yet.
Parameters
struct dma_fence * fence
the fence to check
Description
Returns true if the fence was already signaled, false if not. Since this function doesn't enable signaling, it is not guaranteed to ever return true if dma_fence_add_callback(), dma_fence_wait() or dma_fence_enable_sw_signaling() haven't been called before.
It's recommended for seqno fences to call dma_fence_signal when the operation is complete, as this makes it possible to prevent issues from wraparound between time of issue and time of use by checking the return value of this function before calling hardware-specific wait instructions.
See also dma_fence_is_signaled_locked().
-
bool __dma_fence_is_later(u64 f1, u64 f2, const struct dma_fence_ops *ops)¶
return if f1 is chronologically later than f2
Parameters
u64 f1
the first fence’s seqno
u64 f2
the second fence’s seqno from the same context
const struct dma_fence_ops * ops
dma_fence_ops associated with the seqno
Description
Returns true if f1 is chronologically later than f2. Both fences must be from the same context, since a seqno is not common across contexts.
-
bool dma_fence_is_later(struct dma_fence *f1, struct dma_fence *f2)¶
return if f1 is chronologically later than f2
Parameters
struct dma_fence * f1
the first fence from the same context
struct dma_fence * f2
the second fence from the same context
Description
Returns true if f1 is chronologically later than f2. Both fences must be from the same context, since a seqno is not re-used across contexts.
-
struct dma_fence *dma_fence_later(struct dma_fence *f1, struct dma_fence *f2)¶
return the chronologically later fence
Parameters
struct dma_fence * f1
the first fence from the same context
struct dma_fence * f2
the second fence from the same context
Description
Returns NULL if both fences are signaled, otherwise the fence that would be signaled last. Both fences must be from the same context, since a seqno is not re-used across contexts.
-
int dma_fence_get_status_locked(struct dma_fence *fence)¶
returns the status upon completion
Parameters
struct dma_fence * fence
the dma_fence to query
Description
Drivers can supply an optional error status condition before they signal the fence (to indicate whether the fence was completed due to an error rather than success). The value of the status condition is only valid if the fence has been signaled; dma_fence_get_status_locked() first checks the signal state before reporting the error status.
Returns 0 if the fence has not yet been signaled, 1 if the fence has been signaled without an error condition, or a negative error code if the fence has been completed with an error.
-
void dma_fence_set_error(struct dma_fence *fence, int error)¶
flag an error condition on the fence
Parameters
struct dma_fence * fence
the dma_fence
int error
the error to store
Description
Drivers can supply an optional error status condition before they signal the fence, to indicate that the fence was completed due to an error rather than success. This must be set before signaling (so that the value is visible before any waiters on the signal callback are woken). This helper exists to help catch erroneous setting of dma_fence.error.
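A minimal, hedged sketch (the example_ name is hypothetical) of failing an operation through its fence:

#include <linux/dma-fence.h>

static void example_fail_fence(struct dma_fence *fence, int error)
{
        dma_fence_set_error(fence, error);      /* e.g. -EIO, before signaling */
        dma_fence_signal(fence);                /* waiters now see a negative status */
}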
-
signed long dma_fence_wait(struct dma_fence *fence, bool intr)¶
sleep until the fence gets signaled
Parameters
struct dma_fence * fence
the fence to wait on
bool intr
if true, do an interruptible wait
Description
This function will return -ERESTARTSYS if interrupted by a signal, or 0 if the fence was signaled. Other error values may be returned on custom implementations.
Performs a synchronous wait on this fence. It is assumed the caller directly or indirectly holds a reference to the fence, otherwise the fence might be freed before return, resulting in undefined behavior.
See also dma_fence_wait_timeout() and dma_fence_wait_any_timeout().
Seqno Hardware Fences¶
-
struct seqno_fence *to_seqno_fence(struct dma_fence *fence)¶
cast a fence to a seqno_fence
Parameters
struct dma_fence * fence
fence to cast to a seqno_fence
Description
Returns NULL if the fence is not a seqno_fence, or the seqno_fence otherwise.
-
void seqno_fence_init(struct seqno_fence *fence, spinlock_t *lock, struct dma_buf *sync_buf, uint32_t context, uint32_t seqno_ofs, uint32_t seqno, enum seqno_fence_condition cond, const struct dma_fence_ops *ops)¶
initialize a seqno fence
Parameters
struct seqno_fence * fence
seqno_fence to initialize
spinlock_t * lock
pointer to spinlock to use for fence
struct dma_buf * sync_buf
buffer containing the memory location to signal on
uint32_t context
the execution context this fence is a part of
uint32_t seqno_ofs
the offset within sync_buf
uint32_t seqno
the sequence # to signal on
enum seqno_fence_condition cond
fence wait condition
const struct dma_fence_ops * ops
the fence_ops for operations on this seqno fence
Description
This function initializes a struct seqno_fence with passed parameters, and takes a reference on sync_buf which is released on fence destruction.
A seqno_fence is a dma_fence which can complete in software when enable_signaling is called, but it also completes when (s32)((sync_buf)[seqno_ofs] - seqno) >= 0 is true
The seqno_fence will take a refcount on the sync_buf until it’s destroyed, but actual lifetime of sync_buf may be longer if one of the callers take a reference to it.
Certain hardware have instructions to insert this type of wait condition in the command stream, so no intervention from software would be needed. This type of fence can be destroyed before completed, however a reference on the sync_buf dma-buf can be taken. It is encouraged to re-use the same dma-buf for sync_buf, since mapping or unmapping the sync_buf to the device’s vm can be expensive.
It is recommended for creators of seqno_fence to call dma_fence_signal() before destruction. This will prevent possible issues from wraparound at time of issue vs time of check, since users can check dma_fence_is_signaled() before submitting instructions for the hardware to wait on the fence.
However, when ops.enable_signaling is not called, it doesn't have to be done as soon as possible, just before there's any real danger of seqno wraparound.
DMA Fence Array¶
-
struct dma_fence_array *dma_fence_array_create(int num_fences, struct dma_fence **fences, u64 context, unsigned seqno, bool signal_on_any)¶
Create a custom fence array
Parameters
int num_fences
[in] number of fences to add in the array
struct dma_fence ** fences
[in] array containing the fences
u64 context
[in] fence context to use
unsigned seqno
[in] sequence number to use
bool signal_on_any
[in] signal on any fence in the array
Description
Allocate a dma_fence_array object and initialize the base fence with dma_fence_init().
In case of error it returns NULL.
The caller should allocate the fences array with num_fences size and fill it with the fences it wants to add to the object. Ownership of this array is taken and dma_fence_put() is used on each fence on release.
If signal_on_any is true the fence array signals if any fence in the array signals, otherwise it signals when all fences in the array signal.
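A hedged sketch (the example_ name is hypothetical) that merges two fences into one that signals once both have signaled; note that the fences[] array, and the references it holds, are handed over to the array object on success:

#include <linux/dma-fence.h>
#include <linux/dma-fence-array.h>
#include <linux/slab.h>

static struct dma_fence *example_merge_two(struct dma_fence *a,
                                           struct dma_fence *b)
{
        struct dma_fence_array *array;
        struct dma_fence **fences;

        fences = kmalloc_array(2, sizeof(*fences), GFP_KERNEL);
        if (!fences)
                return NULL;

        fences[0] = dma_fence_get(a);
        fences[1] = dma_fence_get(b);

        /* ownership of fences[] and its references moves to the array */
        array = dma_fence_array_create(2, fences, dma_fence_context_alloc(1),
                                       1, false);
        if (!array) {
                dma_fence_put(a);
                dma_fence_put(b);
                kfree(fences);
                return NULL;
        }

        return &array->base;
}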
-
bool dma_fence_match_context(struct dma_fence *fence, u64 context)¶
Check if all fences are from the given context
Parameters
struct dma_fence * fence
[in] fence or fence array
u64 context
[in] fence context to check all fences against
Description
Checks the provided fence or, for a fence array, all fences in the array against the given context. Returns false if any fence is from a different context.
- struct dma_fence_array_cb
callback helper for fence array
Definition
struct dma_fence_array_cb {
struct dma_fence_cb cb;
struct dma_fence_array *array;
};
Members
cb
fence callback structure for signaling
array
reference to the parent fence array object
- struct dma_fence_array
fence to represent an array of fences
Definition
struct dma_fence_array {
struct dma_fence base;
spinlock_t lock;
unsigned num_fences;
atomic_t num_pending;
struct dma_fence **fences;
struct irq_work work;
};
Members
base
fence base class
lock
spinlock for fence handling
num_fences
number of fences in the array
num_pending
fences in the array still pending
fences
array of the fences
work
internal irq_work function
-
bool dma_fence_is_array(struct dma_fence *fence)¶
check if a fence is from the array subclass
Parameters
struct dma_fence * fence
fence to test
Description
Return true if it is a dma_fence_array and false otherwise.
-
struct dma_fence_array *to_dma_fence_array(struct dma_fence *fence)¶
cast a fence to a dma_fence_array
Parameters
struct dma_fence * fence
fence to cast to a dma_fence_array
Description
Returns NULL if the fence is not a dma_fence_array, or the dma_fence_array otherwise.
DMA Fence uABI/Sync File¶
-
struct sync_file *sync_file_create(struct dma_fence *fence)¶
creates a sync file
Parameters
struct dma_fence * fence
fence to add to the sync_file
Description
Creates a sync_file containing fence. This function acquires an additional reference to fence for the newly-created sync_file, if it succeeds. The sync_file can be released with fput(sync_file->file). Returns the sync_file or NULL in case of error.
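A hedged sketch of the usual export path in a driver ioctl (the example_ name is hypothetical): reserve an fd, create the sync_file, then install the file:

#include <linux/sync_file.h>
#include <linux/dma-fence.h>
#include <linux/file.h>
#include <linux/fcntl.h>

static int example_fence_to_fd(struct dma_fence *fence)
{
        struct sync_file *sync_file;
        int fd;

        fd = get_unused_fd_flags(O_CLOEXEC);
        if (fd < 0)
                return fd;

        sync_file = sync_file_create(fence);
        if (!sync_file) {
                put_unused_fd(fd);
                return -ENOMEM;
        }

        fd_install(fd, sync_file->file);
        return fd;      /* handed to userspace, e.g. through an ioctl argument */
}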
-
struct dma_fence *sync_file_get_fence(int fd)¶
get the fence related to the sync_file fd
Parameters
int fd
sync_file fd to get the fence from
Description
Ensures fd references a valid sync_file and returns a fence that represents all fences in the sync_file. On error NULL is returned.
- struct sync_file
sync file to export to the userspace
Definition
struct sync_file {
struct file *file;
char user_name[32];
#ifdef CONFIG_DEBUG_FS
struct list_head sync_file_list;
#endif
wait_queue_head_t wq;
unsigned long flags;
struct dma_fence *fence;
struct dma_fence_cb cb;
};
Members
file
file representing this fence
user_name
Name of the sync file provided by userspace, for merged fences. Otherwise generated through driver callbacks (in which case the entire array is 0).
sync_file_list
membership in global file list
wq
wait queue for fence signaling
flags
flags for the sync_file
fence
fence with the fences in the sync_file
cb
fence callback information
Description
flags: POLL_ENABLED: whether userspace is currently poll()’ing or not