Documentation
Overview ¶
Package mempool wraps the RTE mempool library.
Please refer to the DPDK Programmer's Guide for reference and caveats.
Index ¶
- Constants
- Variables
- func Walk(fn func(mp *Mempool))
- type Cache
- type Factory
- type FactoryFunc
- type Mempool
- func (mp *Mempool) AvailCount() int
- func (mp *Mempool) Free()
- func (mp *Mempool) GenericGet(objs []unsafe.Pointer, cache *Cache) error
- func (mp *Mempool) GenericPut(objs []unsafe.Pointer, cache *Cache)
- func (mp *Mempool) InUseCount() int
- func (mp *Mempool) IsEmpty() bool
- func (mp *Mempool) IsFull() bool
- func (mp *Mempool) ObjIter(fn func([]byte)) uint32
- func (mp *Mempool) PopulateAnon() (int, error)
- func (mp *Mempool) PopulateDefault() (int, error)
- func (mp *Mempool) PrivBytes() []byte
- func (mp *Mempool) SetOpsByName(name string, poolConfig unsafe.Pointer) error
- func (mp *Mempool) SetOpsRing() error
- type Option
Constants ¶
const (
	// By default, object addresses are spread between channels in
	// RAM: the pool allocator will add padding between objects
	// depending on the hardware configuration. See Memory alignment
	// constraints for details. If this flag is set, the allocator
	// will just align them to a cache line.
	NoSpread uint = C.MEMPOOL_F_NO_SPREAD

	// By default, the returned objects are cache-aligned. This flag
	// removes this constraint, and no padding will be present between
	// objects. This flag implies NoSpread.
	NoCacheAlign = C.MEMPOOL_F_NO_CACHE_ALIGN

	// If this flag is set, the default behavior when using
	// rte_mempool_put() or rte_mempool_put_bulk() is
	// "single-producer". Otherwise, it is "multi-producers".
	SPPut = C.MEMPOOL_F_SP_PUT

	// If this flag is set, the default behavior when using
	// rte_mempool_get() or rte_mempool_get_bulk() is
	// "single-consumer". Otherwise, it is "multi-consumers".
	SCGet = C.MEMPOOL_F_SC_GET
)
Various non-parameterized options for mempool creation.
Variables ¶
var (
	OptNoSpread     = OptFlag(NoSpread)
	OptNoCacheAlign = OptFlag(NoCacheAlign)
	OptSPPut        = OptFlag(SPPut)
	OptSCGet        = OptFlag(SCGet)
)
Option shortcuts.
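A hedged sketch of how these shortcuts could be passed at creation time. The CreateEmpty signature, the element count and size, and the import path are assumptions for illustration; only the option names come from this page.

```go
package main

import "github.com/yerden/go-dpdk/mempool" // import path assumed

func main() {
	// Hypothetical: a pool touched by exactly one producer thread and
	// one consumer thread, so the default multi-producer/multi-consumer
	// behavior can be relaxed with the SPPut/SCGet flags.
	mp, err := mempool.CreateEmpty("sp_sc_pool", 1024, 2048, // signature assumed
		mempool.OptSPPut, // "single-producer" puts
		mempool.OptSCGet, // "single-consumer" gets
	)
	if err != nil {
		panic(err)
	}
	defer mp.Free()
}
```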
Functions ¶
Types ¶
type Cache ¶
type Cache C.struct_rte_mempool_cache
Cache is a structure that stores a per-core object cache. This can be used by non-EAL threads to enable caching when they interact with a mempool.
func CreateCache ¶
CreateCache creates a user-owned mempool cache.
You may specify OptCacheSize and OptSocket options only.
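For example, a non-EAL worker might pair a user-owned cache with GenericGet/GenericPut. This is a sketch only: the exact CreateCache signature is an assumption, and mp is assumed to be an existing, populated *Mempool.

```go
// Assumed: CreateCache accepts Option values and returns a *Cache
// usable with this pool's GenericGet/GenericPut.
cache := mempool.CreateCache(mempool.OptCacheSize(32))

objs := make([]unsafe.Pointer, 8)
if err := mp.GenericGet(objs, cache); err != nil {
	// The pool (and cache) could not satisfy the request.
	log.Fatal(err)
}
// ... use the objects ...
mp.GenericPut(objs, cache) // return them through the same cache
```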
type FactoryFunc ¶
FactoryFunc implements Factory as a function.
func (FactoryFunc) NewMempool ¶
func (fn FactoryFunc) NewMempool(name string, opts []Option) (*Mempool, error)
NewMempool implements Factory interface.
type Mempool ¶
type Mempool C.struct_rte_mempool
Mempool represents RTE mempool.
func CreateEmpty ¶
CreateEmpty creates a new empty mempool. The mempool is allocated and initialized, but it is not populated: no memory is allocated for the mempool elements. The user has to call PopulateDefault() or another API to add memory chunks to the pool. Once populated, the user may also want to initialize each object with ObjIter/ObjIterC.
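The typical lifecycle described above could look like the following sketch (the CreateEmpty signature and element sizes are assumptions; the call order comes from this page):

```go
// Create the shell of the pool: no element memory yet.
mp, err := mempool.CreateEmpty("elt_pool", 1024, 2048) // signature assumed
if err != nil {
	log.Fatal(err)
}
defer mp.Free()

// Add memory chunks for the elements.
if _, err := mp.PopulateDefault(); err != nil {
	log.Fatal(err)
}

// Optionally initialize each object once populated.
mp.ObjIter(func(obj []byte) {
	for i := range obj {
		obj[i] = 0
	}
})
```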
func CreateMbufPool ¶
CreateMbufPool creates a mempool of mbufs. See CreateEmpty for the list of accepted options. Only the differences are described below.
dataRoomSize specifies the maximum size of data buffer in each mbuf, including RTE_PKTMBUF_HEADROOM.
OptPrivateDataSize has different semantics here. It specifies the size of private application data between the rte_mbuf structure and the data buffer. This value must be aligned to RTE_MBUF_PRIV_ALIGN.
The created mempool is already populated and its objects are initialized with rte_pktmbuf_init.
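A sketch of an mbuf-pool creation call; the CreateMbufPool signature, the 128-byte headroom value, and the numeric sizes are assumptions for illustration:

```go
// Assumed signature: name, number of mbufs, dataRoomSize, then options.
// dataRoomSize includes RTE_PKTMBUF_HEADROOM (128 bytes assumed here).
mp, err := mempool.CreateMbufPool("rx_pool", 8192, 2048+128,
	mempool.OptOpsName("ring_mp_mc"),  // the driver must be loaded at start
	mempool.OptPrivateDataSize(64),    // app data after rte_mbuf, RTE_MBUF_PRIV_ALIGN-aligned
)
if err != nil {
	log.Fatal(err)
}
defer mp.Free()
// Objects are already populated and initialized with rte_pktmbuf_init.
```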
func (*Mempool) AvailCount ¶
AvailCount returns the number of entries in the mempool.
When cache is enabled, this function has to browse the length of all lcores, so it should not be used in a data path, but only for debug purposes. User-owned mempool caches are not accounted for.
func (*Mempool) Free ¶
func (mp *Mempool) Free()
Free frees the mempool. It unlinks the mempool from the global list and frees the memory chunks and all memory referenced by the mempool. The objects must not be in use by other cores, as they will be freed.
func (*Mempool) GenericGet ¶
GenericGet gets objects from the mempool, with an optional cache.
func (*Mempool) GenericPut ¶
GenericPut puts objects back into the mempool, with an optional cache.
func (*Mempool) InUseCount ¶
InUseCount returns the number of elements which have been allocated from the mempool.
When cache is enabled, this function has to browse the length of all lcores, so it should not be used in a data path, but only for debug purposes.
func (*Mempool) IsEmpty ¶
IsEmpty tests if the mempool is empty.
When cache is enabled, this function has to browse the length of all lcores, so it should not be used in a data path, but only for debug purposes. User-owned mempool caches are not accounted for.
func (*Mempool) IsFull ¶
IsFull tests if the mempool is full.
When cache is enabled, this function has to browse the length of all lcores, so it should not be used in a data path, but only for debug purposes. User-owned mempool caches are not accounted for.
func (*Mempool) ObjIter ¶
ObjIter calls a function for each mempool element. It iterates across all objects attached to the rte_mempool and invokes the callback on each of them.
The callback function receives a slice whose underlying array points to the current object in the mempool.
It returns the number of objects iterated.
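For instance, to stamp each element and cross-check the returned count (a sketch; mp is assumed to be a populated pool):

```go
var stamped uint32
n := mp.ObjIter(func(obj []byte) {
	obj[0] = 0xAB // mark the first byte of every element in place
	stamped++
})
if n != stamped {
	log.Fatalf("iterated %d objects, callback ran %d times", n, stamped)
}
```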
func (*Mempool) PopulateAnon ¶
PopulateAnon adds memory from an anonymous mapping for objects in the pool at init. This function mmaps an anonymous memory zone, locked in memory, in which to store the objects of the mempool.
func (*Mempool) PopulateDefault ¶
PopulateDefault adds memory for objects in the pool at init. This is the default function used by rte_mempool_create() to populate the mempool. It adds memory allocated using rte_memzone_reserve().
func (*Mempool) PrivBytes ¶
PrivBytes returns the private data stored in a mempool structure as a slice of bytes. Feel free to edit the contents of the slice, but don't grow it with append or similar tools.
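For example, if the pool were created with 8 bytes of private data (OptPrivateDataSize(8) is an assumption here), an application-level value could live there:

```go
priv := mp.PrivBytes() // length equals the private data size
binary.LittleEndian.PutUint64(priv, 42) // editing in place is fine
// Don't do this: priv = append(priv, 0) — the slice must not be grown.
```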
func (*Mempool) SetOpsByName ¶
SetOpsByName sets the ops of a mempool. This can only be done on a mempool that is not populated, i.e. just after a call to CreateEmpty().
func (*Mempool) SetOpsRing ¶
SetOpsRing sets the ring-based operations for mempool. This can only be done on a mempool that is not populated, i.e. just after a call to CreateEmpty().
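The ordering constraint for both SetOps calls could be sketched like this (the CreateEmpty signature is assumed; the "stack" ops name and the required call order come from this page):

```go
mp, err := mempool.CreateEmpty("stack_pool", 512, 256) // signature assumed
if err != nil {
	log.Fatal(err)
}
// Ops must be chosen while the pool is still unpopulated...
if err := mp.SetOpsByName("stack", nil); err != nil {
	log.Fatal(err)
}
// ...and only then may memory be added to the pool.
if _, err := mp.PopulateDefault(); err != nil {
	log.Fatal(err)
}
```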
type Option ¶
type Option struct {
// contains filtered or unexported fields
}
Option is used to configure mempool at creation time.
func OptCacheSize ¶
OptCacheSize specifies the cache size. If non-zero, the rte_mempool library will try to limit accesses to the common lockless pool by maintaining a per-lcore object cache. This argument must be less than or equal to CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5, where n is the number of elements. It is advised to choose a cache size such that n modulo cache_size == 0: if this is not the case, some elements will always stay in the pool and will never be used. Access to the per-lcore table is of course faster than the multi-producer/consumer pool. The cache can be disabled by setting cache_size to 0; this can be useful to avoid losing objects in the cache.
func OptOpsName ¶
OptOpsName specifies the mempool operations implementation, i.e. it sets the ops of a mempool. Each implementation is provided as a mempool driver, so be sure it is loaded at application start. This option is used in CreateMbufPool only.
The implementations currently provided by DPDK are: 'ring_mp_mc', 'ring_sp_mc', 'ring_mp_sc', 'ring_sp_sc', 'stack', 'lf_stack'.
func OptPrivateDataSize ¶
OptPrivateDataSize specifies the size of the private data appended after the mempool structure. This is useful for storing some private data after the mempool structure, as is done for rte_mbuf_pool, for example.