API

Full API documentation, automatically generated from doxygen comments.

class Accumulator : public dv::AccumulatorBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/frame/accumulator.hpp>

Common accumulator class that accumulates events into a frame. The class is highly configurable to adapt to various use cases. This is the preferred functionality for projecting events onto a frame.

Accumulation of the events is performed on a floating point frame, with every event contributing a fixed amount to the potential. Timestamps of the last contributions are stored as well, to allow for a decay.

For performance reasons, event coordinates are not checked against the image plane bounds unless the library is compiled in DEBUG mode. Events outside the image plane bounds will result in undefined behaviour, or program termination in DEBUG mode.
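
A minimal usage sketch is shown below. It assumes an already populated dv::EventStore named events; the resolution and all parameter values are purely illustrative.

    #include <dv-processing/core/frame/accumulator.hpp>

    void accumulateExample(const dv::EventStore &events) {
        // Accumulator for a 640x480 sensor using the default exponential decay.
        dv::Accumulator accumulator(cv::Size(640, 480));

        // Feed a batch of events into the accumulator.
        accumulator.accept(events);

        // Generate the accumulated frame; the image data is of type CV_8U.
        const dv::Frame frame = accumulator.generateFrame();
    }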

Public Types

enum class Decay

Decay function to be used to decay the surface potential.

  • NONE: Do not decay at all. The potential can be reset manually by calling the clear function

  • LINEAR: Perform a linear decay with the given slope. The linear decay runs from the current potential until the potential reaches the neutral potential

  • EXPONENTIAL: Exponential decay with time factor tau. The potential eventually converges to zero.

  • STEP: Decay sharply to neutral potential after the given time. Constant potential before.

Values:

enumerator NONE
enumerator LINEAR
enumerator EXPONENTIAL
enumerator STEP
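
For illustration, the decay behaviour can also be changed after construction via the setters listed under Public Functions below; a sketch reusing the accumulator variable from the example above, with arbitrary parameter values:

    // Linear decay towards the neutral potential with a fixed (negative) slope.
    accumulator.setDecayFunction(dv::Accumulator::Decay::LINEAR);
    accumulator.setDecayParam(1e-7); // interpreted as the slope for LINEAR

    // Exponential decay towards zero with time constant tau.
    accumulator.setDecayFunction(dv::Accumulator::Decay::EXPONENTIAL);
    accumulator.setDecayParam(1.0e+6); // interpreted as tau for EXPONENTIAL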

Public Functions

inline Accumulator()

Silly default constructor. This generates an accumulator with zero size. An accumulator with zero size does not work. This constructor just exists to make it possible to default-initialize an Accumulator and redefine it later.

inline explicit Accumulator(const cv::Size &resolution, Accumulator::Decay decayFunction = Decay::EXPONENTIAL, double decayParam = 1.0e+6, bool synchronousDecay = false, float eventContribution = 0.15f, float maxPotential = 1.0f, float neutralPotential = 0.f, float minPotential = 0.f, bool ignorePolarity = false)

Accumulator constructor. Creates a new Accumulator with the given parameters. By choosing the parameters appropriately, the Accumulator can be used for a multitude of applications. The class also provides static factory functions that adjust the parameters for common use cases.

Parameters:
  • resolution – The size of the resulting frame. This must be at least the dimensions of the event stream that is going to be added to the accumulator, otherwise memory errors will occur.

  • decayFunction – The decay function to be used in this accumulator. The decay function is one of NONE, LINEAR, EXPONENTIAL, STEP. The functions behave like their mathematical definitions, with LINEAR and STEP going back to the neutralPotential over time and EXPONENTIAL going back to 0.

  • decayParam – The parameter to tune the decay function. Its meaning depends on the chosen decay function: for NONE it is ignored, for LINEAR it describes the (negative) slope of the linear function, and for EXPONENTIAL it describes tau, by which the time difference is divided.

  • synchronousDecay – if set to true, all pixel values get decayed to the same time as soon as the frame is generated. If set to false, pixel values remain at the state they had when the last contribution came in.

  • eventContribution – The contribution a single event has onto the potential surface. This value gets interpreted positively or negatively depending on the event polarity

  • maxPotential – The upper cut-off value at which the potential surface is clipped

  • neutralPotential – The potential the decay function converges to over time.

  • minPotential – The lower cut-off value at which the potential surface is clipped

  • ignorePolarity – Describes if the polarity of the events should be kept or ignored. If set to true, all events behave like positive events.

inline virtual void accumulate(const EventStore &packet) override

Accumulates all the events in the supplied packet and puts them onto the accumulation surface.

Parameters:

packet – The packet containing the events that should be accumulated.

inline virtual dv::Frame generateFrame() override

Generates the accumulation frame (potential surface) at the time of the last consumed event. The function returns the output image as a dv::Frame; the output frame will contain data of type CV_8U.

Returns:

accumulated frame

inline void clear()

Clears the potential surface by setting it to the neutral value. This function does not reset the time surface.

inline void setIgnorePolarity(const bool ignorePolarity)

If set to true, all events will incur a positive contribution.

Parameters:

ignorePolarity – The new value to set

inline void setEventContribution(float eventContribution)

Contribution to the potential surface a single event shall incur. The contribution is counted positively for positive events (or for all events when polarity is ignored) and negatively for negative events.

Parameters:

eventContribution – The contribution a single event shall incur

inline void setMaxPotential(float maxPotential)
Parameters:

maxPotential – the maximum potential at which the surface is capped

inline void setNeutralPotential(const float neutralPotential)

Set a new neutral potential value. This will also reset the cached potential surface to the given new value.

Parameters:

neutralPotential – The neutral potential towards which the decay function converges. Exponential decay always converges to 0, so the parameter is ignored there.

inline void setMinPotential(float minPotential)
Parameters:

minPotential – the minimum potential at which the surface is capped

inline void setDecayFunction(Decay decayFunction)
Parameters:

decayFunction – The decay function the module should use to perform the decay

inline void setDecayParam(double decayParam)

The decay parameter. This is the slope for linear decay and tau for exponential decay.

Parameters:

decayParam – The param to be used

inline void setSynchronousDecay(bool synchronousDecay)

If set to true, all values get decayed to the frame generation time when a frame is generated. If set to false, the values only get decayed on activity.

Parameters:

synchronousDecay – the new value for synchronous decay

inline bool isIgnorePolarity() const

Check whether polarity of events is ignored.

Returns:

True if polarity is ignored, false otherwise.

inline float getEventContribution() const
inline float getMaxPotential() const
inline float getNeutralPotential() const
inline float getMinPotential() const
inline Decay getDecayFunction() const
inline double getDecayParam() const
inline Accumulator &operator<<(const EventStore &store)

Accumulates the event store into the accumulator.

Parameters:

store – The event store to be accumulated.

Returns:

A reference to this Accumulator.

inline cv::Mat getPotentialSurface() const

Retrieve a copy of the currently accumulated potential surface. Potential surface contains raw floating point values aggregated by the accumulator, the values are within the configured range of [minPotential; maxPotential]. This returns a deep copy of the potential surface.

Returns:

Potential surface image containing CV_32FC1 data.
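
A small sketch of inspecting the raw surface alongside the rendered frame (reusing the accumulator from the earlier example):

    // Deep copy of the internal floating point surface (CV_32FC1);
    // values lie within [minPotential; maxPotential].
    const cv::Mat potential = accumulator.getPotentialSurface();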

Private Functions

inline void decay(int16_t x, int16_t y, int64_t time)

INTERNAL_USE_ONLY Decays the potential at coordinates x, y to the given time, respecting the decay function. Updates the time surface to the last decay.

Parameters:
  • x – The x coordinate of the value to be decayed

  • y – The y coordinate of the value to be decayed

  • time – The time to which the value should be decayed to.

inline void contribute(int16_t x, int16_t y, bool polarity)

INTERNAL_USE_ONLY Contributes the effect of a single event onto the potential surface.

Parameters:
  • x – The x coordinate of where to contribute to

  • y – The y coordinate of where to contribute to

  • polarity – The polarity of the contribution

Private Members

bool rectifyPolarity_ = false
float eventContribution_ = .0
float maxPotential_ = .0
float neutralPotential_ = .0
float minPotential_ = .0
Decay decayFunction_ = Decay::NONE
double decayParam_ = .0
bool synchronousDecay_ = false
TimeSurface decayTimeSurface_
cv::Mat potentialSurface_
int64_t highestTime_ = 0
int64_t lowestTime_ = -1
bool resetTimestamp = true

Friends

inline friend std::ostream &operator<<(std::ostream &os, const Decay &var)
class AccumulatorBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/frame/accumulator_base.hpp>

An accumulator base that can be used to implement different types of accumulators. Two implementations are provided: dv::Accumulator, which is highly configurable and provides numerous ways of generating a frame from events, and dv::EdgeMapAccumulator, which accumulates events in a histogram representation with configurable contribution and is more efficient than the generic accumulator since it uses 8-bit unsigned integers as its internal memory type.

Subclassed by dv::Accumulator, dv::EdgeMapAccumulator
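
Since both implementations share this interface, frame generation can be written against the base class; a sketch (the helper function name is made up for illustration):

    #include <dv-processing/core/frame/accumulator.hpp>

    // Works with any accumulator implementation (dv::Accumulator, dv::EdgeMapAccumulator, ...).
    dv::Frame toFrame(dv::AccumulatorBase &accumulator, const dv::EventStore &events) {
        accumulator.accept(events);
        dv::Frame frame;
        accumulator >> frame; // stream-style frame generation, see operator>> below
        return frame;
    }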

Public Types

typedef std::shared_ptr<AccumulatorBase> SharedPtr
typedef std::unique_ptr<AccumulatorBase> UniquePtr

Public Functions

inline explicit AccumulatorBase(const cv::Size &resolution)

Accumulator constructor from known event camera sensor dimensions.

Parameters:

resolution – Sensor dimensions

virtual void accumulate(const EventStore &packet) = 0

Accumulate given event store packet into a frame.

Parameters:

packet – Event packet to be accumulated.

inline cv::Size getResolution() const

Get the image dimensions expected by the accumulator.

Returns:

Image dimensions

virtual dv::Frame generateFrame() = 0

Generates the accumulation frame (potential surface) at the time of the last consumed event. The function returns an OpenCV frame to work with.

Returns:

An OpenCV frame containing the accumulated potential surface.

inline dv::Frame &operator>>(dv::Frame &mat)

Output stream operator support for frame generation.

Parameters:

mat – Output image

Returns:

Output image

inline void accept(const EventStore &packet)

Accumulate the given packet.

Parameters:

packet – Input event packet.

virtual ~AccumulatorBase() = default

Protected Attributes

cv::Size mResolution
template<concepts::AddressableEvent EventType, class EventPacketType>
class AddressableEventStorage
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/core.hpp>

EventStore class. An EventStore is a collection of consecutive events, all monotonically increasing in time. EventStore is the basic data structure for handling event data. Event packets hold their data in shards of fixed size. Copying an EventStore results in a shallow copy with shared ownership of the shards that are common to both EventStores. EventStores can be sliced by number of events or by time. Slicing creates a shallow copy of the EventStore.
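
A short sketch of building and inspecting a store (dv::EventStore is the ready-made instantiation of this template for dv::Event; timestamps are in microseconds and must be added in monotonically increasing order):

    #include <dv-processing/core/core.hpp>

    void eventStoreExample() {
        dv::EventStore store;

        // Append events: (timestamp, x, y, polarity), timestamps non-decreasing.
        store.emplace_back(1000, 10, 20, true);
        store.emplace_back(2000, 11, 20, false);

        const size_t count    = store.size();           // 2
        const int64_t lowest  = store.getLowestTime();  // 1000
        const int64_t highest = store.getHighestTime(); // 2000
    }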

Public Types

using value_type = EventType
using const_value_type = const EventType
using pointer = EventType*
using const_pointer = const EventType*
using reference = EventType&
using const_reference = const EventType&
using size_type = size_t
using difference_type = ptrdiff_t
using packet_type = EventPacketType
using const_packet_type = const EventPacketType
using iterator = AddressableEventStorageIterator<EventType, EventPacketType>
using const_iterator = iterator

Public Functions

AddressableEventStorage() = default

Default constructor. Creates an empty EventStore. This does not allocate any memory as long as there is no data.

inline void add(const AddressableEventStorage &store)

Merges the contents of the supplied Event Store into the current event store. This operation can cause event data copies if that results in more optimal memory layout, otherwise the operation only performs shallow copies of the data by sharing the ownership with previous event storage. The two event stores have to be in ascending order.

Parameters:

store – the store to be added to this store

inline Eigen::Matrix<int64_t, Eigen::Dynamic, 1> timestamps() const

Retrieve timestamps of events into a one-dimensional eigen matrix. This performs a copy of the values. The values are guaranteed to be monotonically increasing.

Returns:

A one-dimensional eigen matrix containing timestamps of events.

inline Eigen::Matrix<int16_t, Eigen::Dynamic, 2> coordinates() const

Retrieve coordinates of events in an Nx2 eigen matrix. The method performs a copy of the values. Coordinates maintain the same order as within the event store. The first column is the x coordinate, the second column is the y coordinate.

Returns:

A two-dimensional eigen matrix containing x and y coordinates of events.

inline Eigen::Matrix<uint8_t, Eigen::Dynamic, 1> polarities() const

Retrieve polarities of events in a one-dimensional eigen matrix. Method performs a copy of the values. Polarities maintain the same order as within the event store. Polarities are converted into unsigned 8-bit integer values, where 0 stands for negative polarity event and 1 stands for positive polarity event.

Returns:

A one-dimensional eigen matrix containing polarities of events.

inline EigenEvents eigen() const

Convert the event store into eigen matrices. This function performs a deep copy of the memory.

Returns:

Events represented in eigen matrices.
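
A sketch of pulling the event data out as Eigen matrices (deep copies; store is assumed to be an existing dv::EventStore):

    // N x 1 timestamps, monotonically increasing.
    const Eigen::Matrix<int64_t, Eigen::Dynamic, 1> times = store.timestamps();

    // N x 2 coordinates: first column x, second column y.
    const Eigen::Matrix<int16_t, Eigen::Dynamic, 2> coords = store.coordinates();

    // N x 1 polarities: 0 = negative event, 1 = positive event.
    const Eigen::Matrix<uint8_t, Eigen::Dynamic, 1> pols = store.polarities();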

inline explicit AddressableEventStorage(std::shared_ptr<const EventPacketType> packet)

Creates a new EventStore with the data from an EventPacket. This is a shallow operation. No data is copied. The EventStore gains shared ownership of the supplied data. This constructor also allows the implicit conversion from dv::InputVectorDataWrapper<dv::EventPacket, dv::Event> to dv::AddressableEventStorage<dv::Event, dv::EventPacket>. Implicit conversion is intended.

Parameters:

packet – the packet to construct the EventStore from

inline AddressableEventStorage &operator=(std::shared_ptr<const EventPacketType> packet)

Assignment operator for packet const-pointer type. Will construct a new EventStore within the variable.

Parameters:

packet – A pointer to the event data packet.

Returns:

A reference to this EventStore.

inline void push_back(const EventType &event)

Adds a single Event to the EventStore. This will potentially allocate more memory when the currently available shards are exhausted. Any new memory receives exclusive ownership by this packet.

Parameters:

event – A reference to the event to be added.

inline void push_back(EventType &&event)

Moves a single Event into the EventStore. This will potentially allocate more memory when the currently available shards are exhausted. Any new memory receives exclusive ownership by this packet.

Parameters:

event – A movable reference to the event to be added.

template<class ...Args>
inline EventType &emplace_back(Args&&... args)

Construct an event at the end of the storage.

Template Parameters:

Args – Argument types

Parameters:

args – Argument values

Returns:

Reference to the last newly created element

inline AddressableEventStorage operator+(const AddressableEventStorage &other) const

Returns a new EventStore that is the sum of this event store as well as the supplied event store. This is a const operation that does not modify this event store. The returned event store holds all the data of this store and the other. This is a shallow operation, no event data has to be copied for this.

Parameters:

other – The other store to be added

Returns:

A new EventStore, containing the events from this and the other store

inline void operator+=(const AddressableEventStorage &other)

Adds all the events of the other event store to this event store.

Parameters:

other – The event store to be added

inline AddressableEventStorage &operator<<(const EventType &event)

Adds the given event to the end of this EventStore.

Parameters:

event – The event to be added

Returns:

A reference to this EventStore.
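
A sketch of the different ways of combining stores (storeA, storeB and event are assumed to exist; all of these are shallow operations, and the combined data must remain in ascending time order):

    dv::EventStore combined = storeA + storeB; // new store holding the events of both
    storeA += storeB;                          // append storeB to storeA
    storeA << event;                           // append a single dv::Event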

inline size_t size() const noexcept

Returns the total size of the EventStore.

Returns:

The total size (in events) of the packet.

inline AddressableEventStorage slice(const size_t start, const size_t length) const

Returns a new EventStore which is a shallow representation of a slice of this EventStore. The slice starts at start (in number of events, minimum 0, maximum getLength()) and has a length of length.

As a slice is a shallow representation, no EventData gets copied by this operation. The resulting EventStore receives shared ownership over the relevant parts of the data. Should the original EventStore get out of scope, memory that is not relevant to the sliced EventStore will get freed.

Parameters:
  • start – The start index of the slice (in number of events)

  • length – The desired length of the slice (in number of events)

Returns:

A new EventStore object which references the sliced, shared data. No event data is copied.

inline AddressableEventStorage<EventType, EventPacketType> slice(const size_t start) const

Returns a new EventStore which is a shallow representation of a slice of this EventStore. The slice starts at start (in number of events, minimum 0, maximum getLength()) and goes to the end of the EventStore. This method slices off the front of an EventStore.

As a slice is a shallow representation, no EventData gets copied by this operation. The resulting EventStore receives shared ownership over the relevant parts of the data. Should the original EventStore get out of scope, memory that is not relevant to the sliced EventStore will get freed.

Parameters:

start – The start index of the slice (in number of events). The slice will be from this index to the end of the packet.

Returns:

A new EventStore object which references the sliced, shared data. No event data is copied.

inline AddressableEventStorage sliceTime(const int64_t startTime, const int64_t endTime, size_t &retStart, size_t &retEnd) const

Returns a new EventStore which is a shallow representation of a slice of this EventStore. The slice is from a specific startTime (in event timestamps, microseconds) to a specific endTime (event timestamps, microseconds). The actual size (in events) of the resulting packet depends on the event rate in the requested time interval. The resulting packet may be empty, if there is no event that happened in the requested interval.

As a slice is a shallow representation, no EventData gets copied by this operation. The resulting EventStore receives shared ownership over the relevant parts of the data. Should the original EventStore get out of scope, memory that is not relevant to the sliced EventStore will get freed.

The sliced output will be in the time range [startTime, endTime), endTime is exclusive.

Parameters:
  • startTime – The start time of the required slice (inclusive)

  • endTime – The end time of the required time (exclusive)

  • retStart – Parameter that will get set to the actual index (in number of events) at which the start of the slice occurred.

  • retEnd – Parameter that will get set to the actual index (in number of events) at which the end of the slice occurred.

Returns:

A new EventStore object that is a shallow representation of the sliced, shared data. No data is copied over.

inline AddressableEventStorage sliceTime(const int64_t startTime, const int64_t endTime) const

Returns a new EventStore which is a shallow representation of a slice of this EventStore. The slice is from a specific startTime (in event timestamps, microseconds) to a specific endTime (event timestamps, microseconds). The actual size (in events) of the resulting packet depends on the event rate in the requested time interval. The resulting packet may be empty, if there is no event that happened in the requested interval.

As a slice is a shallow representation, no EventData gets copied by this operation. The resulting EventStore receives shared ownership over the relevant parts of the data. Should the original EventStore get out of scope, memory that is not relevant to the sliced EventStore will get freed.

The sliced output will be in the time range [startTime, endTime), endTime is exclusive.

Parameters:
  • startTime – The start time of the required slice (inclusive)

  • endTime – The end time of the required time (exclusive)

Returns:

A new EventStore object that is a shallow representation of the sliced, shared data. No data is copied over.
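
A sketch of slicing by index and by time (store is assumed to hold at least 100 events; all slices are shallow views sharing ownership of the underlying data):

    // First 100 events (by index).
    const dv::EventStore first = store.slice(0, 100);

    // Everything from event index 100 to the end.
    const dv::EventStore rest = store.slice(100);

    // Events within a 10 ms window [t, t + 10000), timestamps in microseconds.
    const int64_t t = store.getLowestTime();
    const dv::EventStore window = store.sliceTime(t, t + 10'000);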

inline AddressableEventStorage sliceBack(const size_t length) const

Returns a new EventStore which is a shallow representation of a slice of this EventStore. Returns a slice which contains events from the back of the storage; it will contain no more events than the given length.

As a slice is a shallow representation, no EventData gets copied by this operation. The resulting EventStore receives shared ownership over the relevant parts of the data. Should the original EventStore get out of scope, memory that is not relevant to the sliced EventStore will get freed.

Parameters:

length – Maximum number of events contained in the resulting slice.

Returns:

A new EventStore object that is a shallow representation of the sliced, shared data. No data is copied over.

inline AddressableEventStorage sliceTime(const int64_t startTime) const

Returns a new EventStore which is a shallow representation of a slice of this EventStore. The slice is from a specific startTime (in event timestamps, microseconds) to the end of the packet. The actual size (in events) of the resulting packet depends on the event rate in the requested time interval. The resulting packet may be empty, if there is no event that happened in the requested interval.

As a slice is a shallow representation, no EventData gets copied by this operation. The resulting EventStore receives shared ownership over the relevant parts of the data. Should the original EventStore get out of scope, memory that is not relevant to the sliced EventStore will get freed.

Parameters:

startTime – The start time of the required slice, if positive. If negative, the number of microseconds from the end of the store

Returns:

A new EventStore object that is a shallow representation of the sliced, shared data. No data is copied over.

inline AddressableEventStorage sliceRate(const double targetRate) const

Slices events from the back of the EventStore so that the EventStore only contains a number of events corresponding to the given event rate. Useful for performance limited applications when it is required to limit the rate of events to maintain stable execution time.

Parameters:

targetRate – Target event rate in events per second.

Returns:

A new event store which contains the number of events within the target event rate.

inline const_iterator begin() const noexcept

Returns an iterator to the beginning of the EventStore

Returns:

an iterator to the beginning of the EventStore

inline const_iterator end() const noexcept

Returns an iterator to the end of the EventStore

Returns:

an iterator to the end of the EventStore
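
The iterators make EventStore usable in a range-based for loop; a sketch (assuming the usual dv::Event accessors timestamp(), x(), y() and polarity(), which are not documented in this section):

    size_t positiveCount = 0;
    for (const auto &event : store) {
        // Events are visited in timestamp order.
        if (event.polarity()) {
            positiveCount++;
        }
    }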

inline const_reference front() const

Returns a reference to the first element of the packet

Returns:

a reference to the first element of the packet

inline const_reference back() const

Returns a reference to the last element of the packet

Returns:

a reference to the last element of the packet

inline int64_t getLowestTime() const

Returns the timestamp of the first event in the packet. This is also the lowest timestamp in the packet, as the events are required to be monotonic.

Returns:

The lowest timestamp present in the packet. 0 if the packet is empty.

inline int64_t getHighestTime() const

Returns the timestamp of the last event in the packet. This is also the highest timestamp in the packet, as the events are required to be monotonic.

Returns:

The highest timestamp present in the packet. 0 if the packet is empty

inline bool isEmpty() const

Returns true if the packet is empty (does not contain any events).

Returns:

Returns true if the packet is empty (does not contain any events).

inline void erase(const size_t start, const size_t length)

Erase a given range of events from the event store. This does not necessarily delete the underlying data, since the event store maps the data using smart pointers; the data is only freed once no store maps it anymore. This erase function does not affect data shared with other event stores.

Parameters:
  • start – Start index of events to erase

  • length – Number of events to erase

inline size_t eraseTime(const int64_t startTime, const int64_t endTime)

Erase events in the range between the given timestamps. This does not necessarily delete the underlying data, since the event store maps the data using smart pointers; the data is only freed once no store maps it anymore. This erase function does not affect data shared with other event stores.

Parameters:
  • startTime – Start timestamp for events to be erased, including this exact timestamp

  • endTime – End timestamp for events to be erased, up to this time; events with this exact timestamp are not erased.

Returns:

Number of events deleted

inline const EventType &operator[](const size_t index) const

Return an event at given index.

Parameters:

index – Index of the event

Returns:

Reference to the event at the index.

inline const EventType &at(const size_t index) const

Return an event at given index.

Parameters:

index – Index of the event

Returns:

Reference to the event at the index.

inline void retainDuration(const dv::Duration duration)

Retain a certain duration of event data in the event store. This will retain the latest events and delete the oldest data. The duration is a hint of the minimum amount of time to keep; the exact retained duration will always be slightly greater (depending on event rate and memory allocation).

Parameters:

duration – Minimum amount of time to keep in the event store. Events are erased in batches, so this only guarantees that the batches of events within this duration are kept.

inline dv::Duration duration() const

Get the duration of events contained.

Returns:

Duration of stored events in microseconds.
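
Together, retainDuration and duration make it easy to maintain a sliding window of recent events; a sketch:

    // Keep at least the most recent 100 ms of events, dropping older shards.
    store.retainDuration(dv::Duration(100'000)); // dv::Duration is expressed in microseconds

    // Time span actually covered by the remaining events.
    const dv::Duration covered = store.duration();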

inline bool isWithinStoreTimeRange(const int64_t timestamp) const

Checks whether given timestamp is within the time range of the event store.

Parameters:

timestamp – Microsecond Unix timestamp to check.

Returns:

True if the timestamp is within the time of event store, false otherwise.

inline size_t getShardCapacity() const

Get currently used default shard (data partial) capacity value.

Returns:

Default capacity for new shards.

inline void setShardCapacity(const size_t shardCapacity)

Set a new capacity for shards (data partials). Setting this value does not affect already allocated shards and will be used only when a new shard needs to be allocated. If the passed-in capacity is 0, the setter will use a capacity of 1, which is the lowest allowed value.

Parameters:

shardCapacity – Capacity of events for newly allocated shards.

inline size_t getShardCount() const

Get the amount of shards that are currently referenced by the event store.

Returns:

Number of referenced shards (data partials).

inline double rate() const

Get the event rate (events per second) for the events stored in this storage.

Returns:

Events per second within this storage.

inline EventPacketType toPacket() const

Convert event store into a continuous memory packet. This performs a deep copy of underlying data.

Returns:

Event packet with a copy of all stored events in this event store.

Protected Types

using PartialEventDataType = PartialEventData<EventType, EventPacketType>

Protected Functions

inline explicit AddressableEventStorage(const std::vector<PartialEventDataType> &dataPartials)

INTERNAL USE ONLY Creates a new EventStore based on the supplied PartialEventData objects. Offsets and meta information are recomputed from the supplied list. The packet gets shared ownership of all underlying data of the PartialEventData slices in dataPartials.

Parameters:

dataPartials – vector of PartialEventData to construct this package from.

inline PartialEventDataType &_getLastNonFullPartial()

Retrieve the last partial that can store events. If the last available partial is full, or no partials are available at all, this function will instantiate a new partial, add it to the store, and return a reference to it.

Returns:

Last data partial that can store an additional event.

Protected Attributes

std::vector<PartialEventDataType> dataPartials_

internal list of the shards.

std::vector<size_t> partialOffsets_

The exact number-of-events global offsets of the shards

size_t totalLength_ = {0}

The total length of the event package

size_t shardCapacity_ = {10000}

Default capacity for the data partials

Friends

friend class dv::io::MonoCameraWriter
friend class dv::io::NetworkWriter
inline friend std::ostream &operator<<(std::ostream &os, const AddressableEventStorage &storage)
template<concepts::AddressableEvent EventType, class EventPacketType>
class AddressableEventStorageIterator
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/core.hpp>

Iterator for the EventStore class.

Public Types

using iterator_category = std::bidirectional_iterator_tag
using value_type = const EventType
using pointer = const EventType*
using reference = const EventType&
using size_type = size_t
using difference_type = ptrdiff_t

Public Functions

inline AddressableEventStorageIterator()

Default constructor. Creates a new iterator at the beginning of the packet

inline explicit AddressableEventStorageIterator(const std::vector<PartialEventData<EventType, EventPacketType>> *dataPartialsPtr, const bool front)

Creates a new Iterator either at the beginning or at the end of the packet

Parameters:
  • dataPartialsPtr – Pointer to the partials (shards) of the packet

  • front – iterator will be at the beginning (true) of the packet, or at the end (false) of the packet.

inline AddressableEventStorageIterator(const std::vector<PartialEventData<EventType, EventPacketType>> *dataPartialsPtr, const size_t partialIndex, const size_t offset)

INTERNAL USE ONLY Creates a new iterator at the specific internal position supplied

Parameters:
  • dataPartialsPtr – Pointer to the partials (shards) of the packet

  • partialIndex – Index pointing to the active shard

  • offset – Offset in the active shard

inline reference operator*() const noexcept
Returns:

A reference to the Event at the current iterator position

inline pointer operator->() const noexcept
Returns:

A pointer to the Event at current iterator position

inline AddressableEventStorageIterator &operator++() noexcept

Increments the iterator by one

Returns:

A reference to the same iterator, incremented by one

inline const AddressableEventStorageIterator operator++(int) noexcept

Post-increments the iterator by one

Returns:

A new iterator at the current position. Increments original iterator by one.

inline AddressableEventStorageIterator &operator+=(const size_type add) noexcept

Increments iterator by a fixed number and returns reference to itself

Parameters:

add – The amount by which to increment the iterator

Returns:

A reference to itself, incremented by add

inline AddressableEventStorageIterator &operator--() noexcept

Decrements the iterator by one

Returns:

A reference to the same iterator, decremented by one

inline const AddressableEventStorageIterator operator--(int) noexcept

Post-decrement the iterator by one

Returns:

A new iterator at the current position. Decrements original iterator by one.

inline AddressableEventStorageIterator &operator-=(const size_type sub) noexcept

Decrements iterator by a fixed number and returns reference to itself

Parameters:

sub – The amount by which to decrement the iterator

Returns:

A reference to itself, decremented by sub

inline bool operator==(const AddressableEventStorageIterator &rhs) const noexcept
Parameters:

rhs – iterator to compare to

Returns:

true if both iterators point to the same element

inline bool operator!=(const AddressableEventStorageIterator &rhs) const noexcept
Parameters:

rhs – iterator to compare to

Returns:

true if both iterators point to different elements

Private Functions

inline void increment()

Increments the iterator to the next event. If the iterator would go beyond the available data, it remains at the end position.

inline void decrement()

Decrements the iterator to the previous event. If the iterator goes below zero, it remains at zero.

Private Members

const std::vector<PartialEventData<EventType, EventPacketType>> *dataPartialsPtr_
size_t partialIndex_

The current partial (shard) we point to

size_t offset_

The current offset inside the shard we point to

template<class EventStoreType>
class AddressableStereoEventStreamSlicer

Public Functions

inline void accept(const std::optional<EventStoreType> &left, const std::optional<EventStoreType> &right)

Adds EventStores from the left and right camera. Performs job evaluation immediately.

Parameters:
  • left – the EventStore from the left camera.

  • right – the EventStore from the right camera.

inline int doEveryNumberOfEvents(const size_t n, std::function<void(const EventStoreType&, const EventStoreType&)> callback)

Perform an action on the stereo stream data every given amount of events. The event count is evaluated on the left camera stream and the corresponding time interval of data is sliced from the right camera event stream. Sliced data is passed into the callback function as soon as it arrives; the first argument is the left camera events and the second is the right camera events. Since the right camera events are sliced by the time interval of the left camera, the number of events from the right camera can differ.

See also

AddressableEventStreamSlicer::doEveryNumberOfEvents

Parameters:
  • n – the interval (in number of events) in which the callback should be called.

  • callback – the callback function that gets called on the data every interval.

Returns:

Job identifier

inline int doEveryTimeInterval(const dv::Duration interval, std::function<void(const EventStoreType&, const EventStoreType&)> callback)

Perform an action on the stereo stream data every given time interval. The interval is evaluated on the left camera stream and the corresponding time interval of data is sliced from the right camera event stream. Sliced data is passed into the callback function as soon as it arrives; the first argument is the left camera events and the second is the right camera events.

See also

AddressableEventStreamSlicer::doEveryTimeInterval

Parameters:
  • interval – Time interval to call the callback function. The callback is called based on timestamps of left camera.

  • callback – Function to be executed

Returns:

Job identifier.
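
A sketch of wiring up a time-interval job on a stereo stream. The dv:: qualification of the class, the default construction, the std::chrono literal and the leftEvents/rightEvents inputs are assumptions for illustration:

    using namespace std::chrono_literals;

    dv::AddressableStereoEventStreamSlicer<dv::EventStore> slicer;

    // Called every 33 ms of left-camera time with the matching right-camera slice.
    slicer.doEveryTimeInterval(33ms, [](const dv::EventStore &left, const dv::EventStore &right) {
        // process the synchronized left/right event batches here
    });

    // Feed incoming data; std::nullopt can be passed for a side that has no new events.
    slicer.accept(leftEvents, rightEvents);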

inline bool hasJob(const int job)

Returns true if the slicer contains the slice job with the provided id

Parameters:

job – the id of the slice job in question

Returns:

true, if the slicer contains the given slice job

inline void removeJob(const int job)

Removes the given job from the list of current jobs.

Parameters:

job – The job id to be removed

Protected Functions

inline void clearRightEventsBuffer(const int64_t timestampFrom)

Perform book-keeping of the right camera buffer by retaining data from a given timestamp. Events are “forgotten” only if the minimum event count and time duration values are maintained according to the slicing configuration.

Parameters:

timestampFrom – Perform book-keeping by retaining data from this timestamp onward.

Protected Attributes

std::optional<size_t> minimumEvents = std::nullopt
std::optional<dv::Duration> minimumTime = std::nullopt
StreamSlicer<EventStoreType> slicer
EventStoreType leftEvents
EventStoreType rightEvents
int64_t rightEventSeek = -1
struct AedatFileError

Public Types

using Info = std::filesystem::path
struct AedatFileParseError

Public Types

using Info = std::filesystem::path

Public Static Functions

static inline std::string format(const Info &info)
struct AedatVersionError

Public Types

using Info = int32_t

Public Static Functions

static inline std::string format(const Info &info)
template<dv::concepts::TimeSurface<dv::EventStore> TimeSurface = dv::TimeSurface, size_t radius1 = 5, size_t radius2 = 6>
class ArcCornerDetector

Public Types

using UniquePtr = std::unique_ptr<ArcCornerDetector>
using SharedPtr = std::shared_ptr<ArcCornerDetector>

Public Functions

ArcCornerDetector() = delete
template<typename ...TIME_SURFACE_ADDITIONAL_ARGS>
inline ArcCornerDetector(const cv::Size resolution, const typename TimeSurface::Scalar range, const bool resetTsAtEachIteration, TIME_SURFACE_ADDITIONAL_ARGS&&... timeSurfaceAdditionalArgs)

Constructor

Template Parameters:

TIME_SURFACE_ADDITIONAL_ARGS – Types of the additional arguments passed to the time surface constructor

Parameters:
  • resolution – camera dimensions

  • range – the range within which the timestamps of a corner should be for it to be detected as a corner

  • resetTsAtEachIteration – set to true if the time surface should be reset at each iteration

  • timeSurfaceAdditionalArgs – arguments passed to the time surface constructor in addition to the resolution

inline std::vector<dv::TimedKeyPoint> detect(const dv::EventStore &events, const cv::Rect &roi, const cv::Mat &mask)

Runs the detection algorithm.

A corner is defined by two arcs of different radii containing timestamps which satisfy the following conditions:

  • All timestamps that are on the corner are within a range of mCornerRange.

  • No timestamp that is outside of this corner is greater than or equal to the minimum timestamp within the corner

  • Length of the arc is within the ranges [ArcLimits::MIN_ARC_SIZE_FACTOR * circumference, ArcLimits::MAX_ARC_SIZE_FACTOR * circumference].

    See also

    ArcLimits.

Parameters:
  • events – events

  • roi – region of interest

  • mask – mask containing zeros for all pixels which should be ignored and nonzero for all others

Returns:

a vector containing the detected keypoints. The response is defined as the difference between the minimum timestamp within the arc and the maximum timestamp outside of the arc.

inline auto getTimeSurface(const bool polarity) const

Returns the TimeSurface for a given polarity

Parameters:

polarity – the polarity

Returns:

the requested time surface

Private Functions

inline auto insideCorner(const int64_t ts1, const int64_t ts2)
template<typename ITERATOR>
inline auto expandArc(const ITERATOR &maxTimestampLoc, const int64_t maxTimestampValue, const dv::Event &event, const CircularTimeSurfaceView &circle)
template<typename ITERATOR>
inline auto checkSurroundingTimestamps(const ITERATOR &arcBegin, const ITERATOR arcEnd, const int64_t minTimestampInArc, const dv::Event &event, const CircularTimeSurfaceView &circle)

Private Members

std::array<TimeSurface, 2> mTimeSurfaces
int64_t mCornerRange
bool mResetTsAfterDetection
std::array<CircularTimeSurfaceView, 2> mCircles
std::array<ArcLimits, 2> mArcLimits
class ArcLimits

Public Functions

inline explicit ArcLimits(const size_t circumference)
inline auto satisfied(const size_t arcSize) const

Private Members

const size_t mCircumference
const size_t mMinSize
const size_t mMaxSize

Private Static Attributes

static constexpr float MIN_ARC_SIZE_FACTOR = 0.125f
static constexpr float MAX_ARC_SIZE_FACTOR = 0.4f
template<dv::concepts::EventStorage EventStoreClass = dv::EventStore>
class BackgroundActivityNoiseFilter : public dv::EventFilterBase<dv::EventStore>

Public Functions

inline explicit BackgroundActivityNoiseFilter(const cv::Size &resolution, const dv::Duration timeDelta = dv::Duration(2'000))

Initiate a background activity noise filter, which tests the neighbourhoods of incoming events for other supporting events that happened within the background activity period.

Parameters:
  • resolution – Sensor resolution.

  • timeDelta – Background activity duration.
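
A usage sketch (the dv::noise namespace, the header path and the accept()/generateEvents() calls inherited from EventFilterBase are assumptions here, since they are not listed in this section):

    #include <dv-processing/noise/background_activity_noise_filter.hpp> // assumed header path

    void filterExample(const dv::EventStore &events) {
        dv::noise::BackgroundActivityNoiseFilter<> filter(cv::Size(640, 480));

        // Push events through the filter and collect the retained ones.
        filter.accept(events);
        const dv::EventStore filtered = filter.generateEvents();
    }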

inline virtual bool retain(const typename EventStoreClass::value_type &evt) noexcept override

Test for background activity: if the event neighbourhood has at least one event that was triggered within the background activity duration, the event is not considered noise and should be retained; otherwise it should be discarded.

Parameters:

evt – Event to be checked.

Returns:

True to retain event, false to discard.

inline BackgroundActivityNoiseFilter &operator<<(const EventStoreClass &events)

Accept events using the input stream operator.

Parameters:

events – Input events.

Returns:

A reference to this filter for chaining.

inline dv::Duration getBackgroundActivityDuration() const

Get currently configured background activity duration value.

Returns:

Background activity duration value.

inline void setBackgroundActivityDuration(const dv::Duration timeDelta)

Set new background activity duration value.

Parameters:

timeDelta – Background activity duration value.

Protected Functions

inline bool doBackgroundActivityLookup_unsafe(const int16_t x, int16_t y, const int64_t timestamp)
inline bool doBackgroundActivityLookup(const int16_t x, int16_t y, const int64_t timestamp)

Protected Attributes

cv::Size mResolutionLimits
dv::TimeSurface mTimeSurface
int64_t mBackgroundActivityDuration = 2000
struct BadAlloc : public dv::exceptions::info::EmptyException
template<dv::concepts::EventStorage EventStoreClass = dv::EventStore>
class BandCutFilter : public dv::noise::BaseFrequencyFilter<dv::EventStore>

Public Functions

inline explicit BandCutFilter(const cv::Size &resolution, const float lowerCutOffFrequency, const float upperCutOffFrequency)

A band-cut event frequency filter. Discards events at a pixel location with a frequency inside a given frequency band defined by a lower cutoff frequency and an upper cutoff frequency.

Parameters:
  • resolution – Sensor resolution.

  • lowerCutOffFrequency – Lower filter cutoff frequency. Together with upperCutOffFrequency, defines the frequency band for the band-cut filter. All events with a frequency inside this band are discarded.

  • upperCutOffFrequency – Upper filter cutoff frequency. Together with lowerCutOffFrequency, defines the frequency band for the band-cut filter. All events with a frequency inside this band are discarded.

inline BandCutFilter &operator<<(const EventStoreClass &events)

Accept events using the input stream operator.

Parameters:

events – Input events.

Returns:

A reference to this filter for chaining.

inline float getLowerCutOffFrequency() const

Get the lower cutoff frequency of the frequency band.

Returns:

Currently configured low cutoff frequency of the frequency band.

inline float getUpperCutOffFrequency() const

Get the upper cutoff frequency of the frequency band.

Returns:

Currently configured upper cutoff frequency of the frequency band.

inline void setLowerCutOffFrequency(const float frequency)

Set a new value for the lower cutoff frequency of the frequency band.

Parameters:

frequency – New lower cutoff frequency value.

inline void setUpperCutOffFrequency(const float frequency)

Set a new value for the upper cutoff frequency of the frequency band.

Parameters:

frequency – New upper cutoff frequency value.

template<dv::concepts::EventStorage EventStoreClass = dv::EventStore>
class BandPassFilter : public dv::noise::BaseFrequencyFilter<dv::EventStore>
#include </builds/inivation/dv/dv-processing/include/dv-processing/noise/frequency_filters.hpp>

A band-pass event frequency filter. Discards events at a pixel location with a frequency outside a given frequency band defined by a lower cutoff frequency and an upper cutoff frequency.

Template Parameters:

EventStoreClass – Type of event store.

Public Functions

inline explicit BandPassFilter(const cv::Size &resolution, const float lowerCutOffFrequency, const float upperCutOffFrequency)

A band-pass event frequency filter. Discards events at a pixel location with a frequency outside a given frequency band defined by a lower cutoff frequency and an upper cutoff frequency.

Parameters:
  • resolution – Sensor resolution.

  • lowerCutOffFrequency – Lower filter cutoff frequency. Together with upperCutOffFrequency, defines the frequency band for the band-pass filter. All events with a frequency outside of this band are discarded.

  • upperCutOffFrequency – Upper filter cutoff frequency. Together with lowerCutOffFrequency, defines the frequency band for the band-pass filter. All events with a frequency outside of this band are discarded.
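
A usage sketch keeping only events whose per-pixel frequency falls roughly between 10 Hz and 100 Hz (accept() and generateEvents() are inherited from EventFilterBase and not listed here; the cutoff values are illustrative):

    #include <dv-processing/noise/frequency_filters.hpp>

    void bandPassExample(const dv::EventStore &events) {
        dv::noise::BandPassFilter<> filter(cv::Size(640, 480), 10.f, 100.f);

        filter.accept(events);
        const dv::EventStore passed = filter.generateEvents();
    }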

inline BandPassFilter &operator<<(const EventStoreClass &events)

Accept events using the input stream operator.

Parameters:

events – Input events.

Returns:

A reference to this filter for chaining.

inline float getLowerCutOffFrequency() const

Get the lower cutoff frequency of the frequency band.

Returns:

Currently configured low cutoff frequency of the frequency band.

inline float getUpperCutOffFrequency() const

Get the upper cutoff frequency of the frequency band.

Returns:

Currently configured upper cutoff frequency of the frequency band.

inline void setLowerCutOffFrequency(const float frequency)

Set a new value for the lower cutoff frequency of the frequency band.

Parameters:

frequency – New lower cutoff frequency value.

inline void setUpperCutOffFrequency(const float frequency)

Set a new value for the upper cutoff frequency of the frequency band.

Parameters:

frequency – New upper cutoff frequency value.

template<dv::concepts::EventStorage EventStoreClass = dv::EventStore>
class BaseFrequencyFilter : public dv::EventFilterBase<dv::EventStore>
#include </builds/inivation/dv/dv-processing/include/dv-processing/noise/frequency_filters.hpp>

A base class for basic frequency filters. Different frequency filters (low-pass, high-pass, band-pass, etc.) are derived as special cases of this base filter. Handles data input and output, as well as checking if an event should be retained. Derived classes only have to define the required behavior by passing correct arguments to the constructor.

Template Parameters:

EventStoreClass – Type of event store.

Subclassed by dv::noise::BandCutFilter< EventStoreClass >, dv::noise::BandPassFilter< EventStoreClass >, dv::noise::HighPassFilter< EventStoreClass >, dv::noise::LowPassFilter< EventStoreClass >

Public Functions

inline explicit BaseFrequencyFilter(const cv::Size &resolution, const std::optional<float> lowerCutOffFrequency, const std::optional<float> upperCutOffFrequency, const FrequencyFilterType filterType)

Construct a base event frequency filter for filtering events based on defined thresholds for the upper/lower cutoff frequencies. Different filter behaviors can be implemented by passing different arguments to the constructor (low-pass, high-pass, band-pass, etc.)

Parameters:
  • resolution – Sensor resolution.

  • lowerCutOffFrequency – Lower cutoff frequency for filter. Ignored if std::nullopt

  • upperCutOffFrequency – Upper cutoff frequency for filter. Ignored if std::nullopt

  • filterType – Filter type, whether the filter behaves as a cut or a pass for the given cutoff frequencies

inline virtual bool retain(const typename EventStoreClass::value_type &event) noexcept override

Test whether an event should be retained according to the configured cutoff frequencies and filter type.

Parameters:

event – Event to be tested.

Returns:

True if the event frequency at that pixel location satisfies the configured filter criteria and the event should be retained, false otherwise.

Protected Attributes

dv::TimeSurface mTimeSurface
std::optional<int64_t> mLowCutOffPeriod
std::optional<int64_t> mHighCutOffPeriod
FrequencyFilterType mFilterType
struct BoundingBox : public flatbuffers::NativeTable

Public Types

typedef BoundingBoxFlatbuffer TableType

Public Functions

inline BoundingBox()
inline BoundingBox(int64_t _timestamp, float _topLeftX, float _topLeftY, float _bottomRightX, float _bottomRightY, float _confidence, const std::string &_label)

Public Members

int64_t timestamp
float topLeftX
float topLeftY
float bottomRightX
float bottomRightY
float confidence
std::string label

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()
struct BoundingBoxBuilder

Public Functions

inline void add_timestamp(int64_t timestamp)
inline void add_topLeftX(float topLeftX)
inline void add_topLeftY(float topLeftY)
inline void add_bottomRightX(float bottomRightX)
inline void add_bottomRightY(float bottomRightY)
inline void add_confidence(float confidence)
inline void add_label(flatbuffers::Offset<flatbuffers::String> label)
inline explicit BoundingBoxBuilder(flatbuffers::FlatBufferBuilder &_fbb)
BoundingBoxBuilder &operator=(const BoundingBoxBuilder&)
inline flatbuffers::Offset<BoundingBoxFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct BoundingBoxFlatbuffer : private flatbuffers::Table

Public Types

typedef BoundingBox NativeTableType

Public Functions

inline int64_t timestamp() const

Timestamp (µs).

inline float topLeftX() const

top left corner of bounding box x-coordinate.

inline float topLeftY() const

top left corner of bounding box y-coordinate.

inline float bottomRightX() const

bottom right corner of bounding box x-coordinate.

inline float bottomRightY() const

bottom right corner of bounding box y-coordinate.

inline float confidence() const

confidence of the given bounding box.

inline const flatbuffers::String *label() const

Label for the given bounding box.

inline bool Verify(flatbuffers::Verifier &verifier) const
inline BoundingBox *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(BoundingBox *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(BoundingBox *_o, const BoundingBoxFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<BoundingBoxFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const BoundingBox *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
struct BoundingBoxPacket : public flatbuffers::NativeTable

Public Types

typedef BoundingBoxPacketFlatbuffer TableType

Public Functions

inline BoundingBoxPacket()
inline BoundingBoxPacket(const std::vector<BoundingBox> &_elements)

Public Members

std::vector<BoundingBox> elements

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()

Friends

inline friend std::ostream &operator<<(std::ostream &os, const BoundingBoxPacket &packet)
struct BoundingBoxPacketBuilder

Public Functions

inline void add_elements(flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<BoundingBoxFlatbuffer>>> elements)
inline explicit BoundingBoxPacketBuilder(flatbuffers::FlatBufferBuilder &_fbb)
BoundingBoxPacketBuilder &operator=(const BoundingBoxPacketBuilder&)
inline flatbuffers::Offset<BoundingBoxPacketFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct BoundingBoxPacketFlatbuffer : private flatbuffers::Table

Public Types

typedef BoundingBoxPacket NativeTableType

Public Functions

inline const flatbuffers::Vector<flatbuffers::Offset<BoundingBoxFlatbuffer>> *elements() const
inline bool Verify(flatbuffers::Verifier &verifier) const
inline BoundingBoxPacket *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(BoundingBoxPacket *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(BoundingBoxPacket *_o, const BoundingBoxPacketFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<BoundingBoxPacketFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const BoundingBoxPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)

Public Static Attributes

static constexpr const char *identifier = "BBOX"
class CalibrationSet
#include </builds/inivation/dv/dv-processing/include/dv-processing/camera/calibration_set.hpp>

The CalibrationSet class is used to store, serialize and deserialize various camera-related calibrations - intrinsic, extrinsic and IMU calibrations. It supports multi-camera and multi-sensor setups.

Each calibration for each sensor receives a designation string which consists of a letter determining the type of sensor and a numeric index automatically generated for each sensor. Designation strings look like this: “C0” - camera with index 0, “S0” - IMU sensor with index 0, “C0C1” - stereo calibration where C0 is the left camera and C1 is the right camera in the camera rig setup.

Designation indexes are automatically incremented by the order they are added to the calibration set.
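
A sketch of querying a calibration set. The dv::camera namespace and the LoadFromFile helper are assumptions for illustration; only the query methods listed below are documented in this section:

    #include <dv-processing/camera/calibration_set.hpp>

    void calibrationExample() {
        // Obtain a calibration set, e.g. by deserializing a calibration file (assumed helper).
        const auto calibration = dv::camera::CalibrationSet::LoadFromFile("calibration.json");

        // Enumerate cameras by designation ("C0", "C1", ...) and fetch their intrinsics.
        for (const std::string &designation : calibration.getCameraList()) {
            if (const auto intrinsics = calibration.getCameraCalibration(designation)) {
                // use intrinsics->... here
            }
        }
    }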

Public Types

using CameraCalibrationMap = std::map<std::string, calibrations::CameraCalibration, std::less<>>
using IMUCalibrationMap = std::map<std::string, calibrations::IMUCalibration, std::less<>>
using StereoCalibrationMap = std::map<std::string, calibrations::StereoCalibration, std::less<>>

Public Functions

CalibrationSet() = default
inline boost::property_tree::ptree toPropertyTree() const

Serialize calibration data into a property tree that can be saved into a file using boost::property_tree::write_json or other property_tree serialization method.

Returns:

Property tree containing calibration data.

inline std::vector<std::string> getCameraList() const

Get a list of cameras available by their designation.

Returns:

Vector of available camera designations.

inline std::vector<std::string> getImuList() const

Get a list of camera designations which have IMU calibrations available in this calibration set.

Returns:

Vector of available imu designations.

inline std::vector<std::string> getStereoList() const

Get a list of designations of the stereo calibrations available in this calibration set.

Returns:

Vector of available stereo calibrations designations.

inline std::optional<calibrations::CameraCalibration> getCameraCalibration(const std::string_view designation) const

Retrieve a camera calibration by designation (e.g. “C0”).

Designation string consists of a letter determining the type of sensor and a numeric index automatically generated for each sensor. Designation strings look like this: “C0” - camera with index 0, “S0” - IMU sensor with index 0, “C0C1” - stereo calibration where C0 is the left camera and C1 is the right camera in the camera rig setup.

Parameters:

designation – Camera designation string.

Returns:

Camera intrinsic calibration, std::nullopt if the given designation is not found.

inline std::optional<calibrations::IMUCalibration> getImuCalibration(const std::string_view designation) const

Get IMU calibration by IMU sensor designation (e.g. “S0”).

Designation string consists of a letter determining the type of sensor and a numeric index automatically generated for each sensor. Designation strings look like this: “C0” - camera with index 0, “S0” - IMU sensor with index 0, “C0C1” - stereo calibration where C0 is the left camera and C1 is the right camera in the camera rig setup.

Parameters:

designation – IMU designation string.

Returns:

IMU extrinsic calibration, std::nullopt if given designation is not found.

inline std::optional<calibrations::StereoCalibration> getStereoCalibration(const std::string_view designation) const

Get stereo calibration by stereo rig designation (e.g. “C0C1”).

Designation string consists of a letter determining the type of sensor and a numeric index automatically generated for each sensor. Designation strings look like this: “C0” - camera with index 0, “S0” - IMU sensor with index 0, “C0C1” - stereo calibration where C0 is the left camera and C1 is the right camera in the camera rig setup.

Parameters:

designation – Stereo rig designation string.

Returns:

Stereo extrinsic calibration, std::nullopt if given designation is not found.

inline std::optional<calibrations::CameraCalibration> getCameraCalibrationByName(const std::string_view camera) const

Retrieve a camera calibration by camera name, which consists of the camera model and serial number concatenated with an underscore separator (e.g. “DVXplorer_DXA00000”).

Camera name is usually available in recording files and when connected directly to a camera.

Parameters:

camera – Name of the camera.

Returns:

Camera intrinsic calibration, std::nullopt if given camera name is not found.

inline std::optional<calibrations::IMUCalibration> getImuCalibrationByName(const std::string_view camera) const

Retrieve an IMU calibration by camera name, which consists of the camera model and serial number concatenated with an underscore separator (e.g. “DVXplorer_DXA00000”).

Camera name is usually available in recording files and when connected directly to a camera.

Parameters:

camera – Name of the camera.

Returns:

IMU extrinsics calibration, std::nullopt if given camera name is not found.

inline std::optional<calibrations::StereoCalibration> getStereoCalibrationByLeftCameraName(const std::string_view camera) const

Retrieve a stereo calibration by matching the camera name to the left camera name in the stereo calibrations. A camera name consists of the camera model and serial number concatenated with an underscore separator (e.g. “DVXplorer_DXA00000”).

Camera name is usually available in recording files and when connected directly to a camera.

Parameters:

camera – Name of the camera.

Returns:

Stereo extrinsic calibration, std::nullopt if given camera name is not found.

inline std::optional<calibrations::StereoCalibration> getStereoCalibrationByRightCameraName(const std::string_view camera) const

Retrieve a stereo calibration by matching the camera name to the right camera name in the stereo calibrations. A camera name consists of the camera model and serial number concatenated with an underscore separator (e.g. “DVXplorer_DXA00000”).

Camera name is usually available in recording files and when connected directly to a camera.

Parameters:

camera – Name of the camera.

Returns:

Stereo extrinsic calibration, std::nullopt if given camera name is not found.

inline void updateCameraCalibration(const calibrations::CameraCalibration &calibration)

Update Camera calibration for the given camera name.

Parameters:

calibration – Camera calibration instance.

inline void updateImuCalibration(const calibrations::IMUCalibration &calibration)

Update IMU calibration for the given camera name.

Parameters:

calibration – IMU calibration instance.

inline void updateStereoCameraCalibration(const calibrations::StereoCalibration &calibration)

Update Stereo Camera calibration for the given camera name.

Parameters:

calibration – Stereo calibration instance.

inline void addCameraCalibration(const calibrations::CameraCalibration &calibration)

Add an intrinsic calibration to the camera calibration set. Camera designation is going to be generated automatically.

Parameters:

calibration – Camera intrinsics calibration.

inline void addImuCalibration(const calibrations::IMUCalibration &calibration)

Add an IMU extrinsics calibration to the calibration set.

Parameters:

calibration – IMU extrinsic calibration.

inline void addStereoCalibration(const calibrations::StereoCalibration &calibration)

Add a stereo calibration to the calibration set. Intrinsic calibrations of the sensors should already be added using addCameraCalibration prior to adding the stereo extrinsic calibration.

Parameters:

calibration – Stereo calibration.

Throws:

Throws – an invalid argument exception if the intrinsic calibrations of the given camera sensors are not available in the set, or if a stereo calibration for the given cameras already exists.

inline const CameraCalibrationMap &getCameraCalibrations() const

Retrieve the full list of camera intrinsic calibrations.

Returns:

std::map containing camera calibrations where keys are camera designation strings.

inline const IMUCalibrationMap &getImuCalibrations() const

Retrieve the full list of IMU extrinsic calibrations.

Returns:

std::map containing IMU calibrations where keys are IMU sensor designation strings.

inline const StereoCalibrationMap &getStereoCalibrations() const

Retrieve the full list of stereo extrinsic calibrations.

Returns:

std::map containing stereo calibrations where keys are stereo rig camera designation strings.

inline void writeToFile(const std::filesystem::path &outputFile) const

Write the contents of this calibration set into a file at given path.

This function requires that supplied path contains “.json” extension.

Parameters:

outputFile – Output file path with “.json” extension to write the contents of the calibration set.

Public Static Functions

static inline CalibrationSet LoadFromFile(const std::filesystem::path &path)

Create a calibration file representation from a persistent file. Supports legacy “.xml” calibration files produced by DV as well as JSON files containing calibration of a new format.

The file format is distinguished using the file path extension.

Parameters:

path – Path to calibration file.

Returns:

CalibrationSet instance containing the parsed calibration values.
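
A round-trip sketch combining LoadFromFile with the update and write functions above; the file names and the camera name are placeholders, and the class is assumed to live in the dv::camera namespace:

// Load a legacy .xml or new-format .json calibration file.
auto calibration = dv::camera::CalibrationSet::LoadFromFile("calibration.json");

// Adjust the entry of one camera and store the modified set back to disk.
if (auto cam = calibration.getCameraCalibrationByName("DVXplorer_DXA00000"); cam.has_value()) {
    cam->position = "left";
    calibration.updateCameraCalibration(*cam);
}

calibration.writeToFile("calibration_updated.json");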

Private Functions

inline explicit CalibrationSet(const boost::property_tree::ptree &tree)

Private Members

size_t cameraIndex = 0
size_t imuIndex = 0
CameraCalibrationMap cameras
IMUCalibrationMap imus
StereoCalibrationMap stereo

Private Static Functions

static inline CalibrationSet cameraRigCalibrationFromJsonFile(const std::filesystem::path &path)
static inline calibrations::CameraCalibration oneCameraCalibrationFromXML(const cv::FileNode &node, const std::string_view cameraName, const bool cameraIsMaster)
static inline CalibrationSet cameraRigCalibrationFromXmlFile(const std::filesystem::path &path)
struct CameraCalibration

Public Functions

CameraCalibration() = default
inline explicit CameraCalibration(const std::string_view name_, const std::string_view position_, const bool master_, const cv::Size &resolution_, const cv::Point2f &principalPoint_, const cv::Point2f &focalLength_, const std::span<const float> distortion_, const DistortionModel distortionModel_, const dv::kinematics::Transformationf &transformationToC0_, const std::optional<Metadata> &metadata_)

Construct the camera calibration

Parameters:
  • name_ – Camera name (e.g. “DVXplorer_DXA02137”)

  • position_ – Description of the location of the camera in the camera rig (e.g. “left”)

  • master_ – Whether camera was a master camera during calibration

  • resolution_ – Camera resolution

  • principalPoint_ – Principal point

  • focalLength_ – Focal length

  • distortion_ – Distortion coefficients

  • distortionModel_ – Distortion model used (can be empty string or “radialTangential”)

  • transformationToC0_ – Transformation from this camera to camera zero

  • metadata_ – Metadata

inline explicit CameraCalibration(const boost::property_tree::ptree &tree)

Parse a property tree and initialize camera calibration out of it.

Parameters:

tree – Serialized property tree containing camera intrinsics calibration.

inline boost::property_tree::ptree toPropertyTree() const

Serialize the CameraCalibration structure into a property tree.

Returns:

Serialized property tree.

inline bool operator==(const CameraCalibration &rhs) const

Equality operator for the class, compares each member of the class.

Parameters:

rhs – Other instance of this class

Returns:

True if all members of both instances are equal, false otherwise.

inline cv::Matx33f getCameraMatrix() const

Get the camera matrix, for direct OpenCV compatibility, in the format:

| mFx   0   mCx |
|  0   mFy  mCy |
|  0    0    1  |

Returns:

3x3 Camera matrix with pixel length values
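
The matrix, together with the distortion member, can be fed straight into OpenCV routines. A minimal sketch, assuming the calibration structures live in the dv::camera::calibrations namespace (note that cv::undistortPoints returns normalized coordinates when no new projection matrix is supplied):

#include <opencv2/calib3d.hpp>
#include <vector>

std::vector<cv::Point2f> undistortWithOpenCV(const dv::camera::calibrations::CameraCalibration &intrinsics,
    const std::vector<cv::Point2f> &distorted) {
    std::vector<cv::Point2f> undistorted;
    cv::undistortPoints(distorted, undistorted, intrinsics.getCameraMatrix(), intrinsics.distortion);
    return undistorted;
}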

inline dv::camera::CameraGeometry getCameraGeometry() const

Retrieve a camera geometry instance from this calibration instance. The distortion model is ignored if the CameraGeometry class does not support it.

CameraGeometry class only supports “radialTangential” distortion model.

Returns:

Camera geometry class that implements geometrical transformations of pixel coordinates.

inline std::string getDistortionModelString() const

Get distortion model name as a string.

Returns:

Distortion model name.

Public Members

std::string name

Camera name (e.g. “DVXplorer_DXA02137”)

std::string position

Description of the location of the camera in the camera rig (e.g. “left”)

bool master = false

Indicate whether it is the master camera in a multi-camera rig.

cv::Size resolution

Camera resolution.

cv::Point2f principalPoint

Intersection of optical axis and image plane.

cv::Point2f focalLength

Focal length.

std::vector<float> distortion

Distortion coefficients.

DistortionModel distortionModel = DistortionModel::RADIAL_TANGENTIAL

Distortion model used.

dv::kinematics::Transformationf transformationToC0

Transformation from this camera to camera zero (C0).

std::optional<Metadata> metadata

Metadata.

Protected Static Functions

template<typename T>
static inline void pushVectorToTree(const std::string &key, const std::vector<T> &vals, boost::property_tree::ptree &tree)

Push a vector of the given type to the property tree at the given key.

Template Parameters:

T – Datatype of vector

Parameters:
  • key – Key in property tree where data will be added

  • vals – Vector of values to add to the property tree

  • tree – Property tree to add data to

static inline void pushTransformToTree(const std::string &key, const dv::kinematics::Transformationf &transform, boost::property_tree::ptree &tree)

Push kinematics transformation to the property tree at the given key.

Parameters:
  • key – Key in property tree where data will be added

  • transform – Transform to add to property tree

  • tree – Property tree to add data to

template<typename T>
static inline std::vector<T> getVectorFromTree(const std::string &key, const boost::property_tree::ptree &tree)

Retrieve a vector of the given type from the property tree from the given key.

Template Parameters:

T – Datatype of vector

Parameters:
  • key – Key in property tree where to get data from

  • tree – Property tree to get data from

Returns:

A sequence value in a std::vector container.

static inline dv::kinematics::Transformationf getTransformFromTree(const std::string &key, const boost::property_tree::ptree &tree)

Retrieve a kinematics transformation from the property tree from the given key.

Parameters:
  • key – Key where kinematics transform is stored in property tree

  • tree – Property tree to get data from

Returns:

A kinematics transform from the property tree.

template<class Container, typename Scalar>
static inline Container parsePair(const boost::property_tree::ptree &child, const std::string &name, std::optional<Scalar> defaults = std::nullopt)
template<class Container, typename Scalar>
static inline Container parseTripple(const boost::property_tree::ptree &child, const std::string &name, std::optional<Scalar> defaults = std::nullopt)
template<class MetadataClass>
static inline std::optional<MetadataClass> getOptionalMetadata(const boost::property_tree::ptree &tree, const std::string &path)

Friends

friend struct IMUCalibration
friend struct StereoCalibration
inline friend std::ostream &operator<<(std::ostream &os, const CameraCalibration &calibration)

Serialize the object into a stream.

Parameters:
  • os – Output stream

  • calibration – Calibration instance to be serialized

Returns:

The output stream.

class CameraGeometry

Public Types

enum class FunctionImplementation

Values:

enumerator LUT
enumerator SUB_PIXEL
using SharedPtr = std::shared_ptr<CameraGeometry>
using UniquePtr = std::unique_ptr<CameraGeometry>

Public Functions

inline CameraGeometry(const std::span<const float> distortion, const float fx, const float fy, const float cx, const float cy, const cv::Size &resolution, const DistortionModel distortionModel)

Create a camera geometry model with distortion model. Currently only radial tangential model is supported.

Parameters:
  • distortion – Distortion coefficient (4 or 5 coefficient radtan model).

  • fx – Focal length X measured in pixels.

  • fy – Focal length Y measured in pixels.

  • cx – Central point coordinate X in pixels.

  • cy – Central point coordinate Y in pixels.

  • resolution – Sensor resolution.

  • distortionModel – Distortion model to be used with the provided distortion coefficients.

inline CameraGeometry(const float fx, const float fy, const float cx, const float cy, const cv::Size &resolution)

Create a camera geometry model without a distortion model.

Any calls to functions that depend on distortion will cause exceptions or segfaults.

Parameters:
  • fx – Focal length X measured in pixels.

  • fy – Focal length Y measured in pixels.

  • cx – Central point coordinate X in pixels.

  • cy – Central point coordinate Y in pixels.

  • resolution – Sensor resolution.

template<concepts::Coordinate2DConstructible Output, concepts::Coordinate2D Input>
inline Output undistort(const Input &point) const

Returns pixel coordinates of given point with applied back projection, undistortion, and projection. This function uses look-up table and is designed for minimal execution speed.

WARNING: will cause a segfault if coordinates are out-of-bounds or if distortion model is not available.

Parameters:

point – Pixel coordinate

Returns:

Undistorted pixel coordinate

inline dv::EventStore undistortEvents(const dv::EventStore &events) const

Undistort event coordinates, discards events which fall beyond camera resolution.

Parameters:

events – Input events

Returns:

A new event store containing the same events with undistorted coordinates
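
For example, the geometry derived from a camera calibration can undistort a whole event store. A sketch, assuming the calibration uses the supported radialTangential model and the calibration structures live in dv::camera::calibrations:

dv::EventStore undistortStore(const dv::camera::calibrations::CameraCalibration &calibration,
    const dv::EventStore &events) {
    // Build the geometry helper from the calibration values.
    const dv::camera::CameraGeometry geometry = calibration.getCameraGeometry();
    // Events whose undistorted coordinates fall outside the sensor resolution are discarded.
    return geometry.undistortEvents(events);
}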

template<concepts::Coordinate2DMutableIterable Output, concepts::Coordinate2DIterable Input>
inline Output undistortSequence(const Input &coordinates) const

Undistort point coordinates.

Parameters:

coordinates – Input point coordinates

Returns:

A new vector containing the points with undistorted coordinates

template<concepts::Coordinate3DConstructible Output, concepts::Coordinate3D Input>
inline Output distort(const Input &undistortedPoint) const

Apply distortion to a 3D point.

Parameters:

undistortedPoint – Point in 3D space to be distorted

Returns:

Distorted point

template<concepts::Coordinate3DMutableIterable Output, concepts::Coordinate3DIterable Input>
inline Output distortSequence(const Input &points) const

Apply direct distortion on the 3D points.

Parameters:

points – Input points

Returns:

Distorted points

template<concepts::Coordinate3DConstructible Output, concepts::Coordinate2D Input, FunctionImplementation implementation = FunctionImplementation::LUT>
inline Output backProject(const Input &pixel) const

Back-project pixel coordinates into a unit ray vector of depth = 1.0 meters.

Parameters:

pixel – Pixel to be projected

Template Parameters:

implementation – Specify the internal implementation to perform the computations, SubPixel performs all computations without any optimization, LUT option avoids computation by performing a look-up table operation instead, but rounds input coordinate values.

Returns:

Back projected unit ray
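
A usage sketch, assuming cv::Point types satisfy the coordinate concepts used by this class and that 'geometry' is an existing CameraGeometry instance:

// The output type is selected through the first template argument; the LUT
// implementation is used by default and rounds the input coordinates.
const cv::Point3f ray = geometry.backProject<cv::Point3f>(cv::Point2i(120, 80));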

template<concepts::Coordinate3DMutableIterable Output, concepts::Coordinate2DIterable Input, FunctionImplementation implementation = FunctionImplementation::LUT>
inline Output backProjectSequence(const Input &points) const

Back-project a sequence of 2D points into 3D unit ray vectors.

Parameters:

points – Input points.

Template Parameters:

implementation – Specify the internal implementation to perform the computations, SubPixel performs all computations without any optimization, the LUT option avoids computation by performing a look-up table operation instead, but rounds input coordinate values.

Returns:

A sequence of back-projected unit ray vectors.

template<concepts::Coordinate3DConstructible Output, concepts::Coordinate2D Input>
inline Output backProjectUndistort(const Input &pixel) const

Returns a unit ray of given coordinates with applied back projection and undistortion. This function uses look-up table and is designed for minimal execution speed.

WARNING: will cause a segfault if coordinates are out-of-bounds or if distortion model is not available.

Parameters:

pixel – Pixel coordinate

Returns:

Back projected and undistorted unit ray

template<concepts::Coordinate3DMutableIterable Output, concepts::Coordinate2DIterable Input>
inline Output backProjectUndistortSequence(const Input &points) const

Undistort and back project a batch of points. Output is normalized point coordinates as unit rays.

Parameters:

points – Input points.

Returns:

Undistorted and back projected points.

template<concepts::Coordinate2DConstructible Output, concepts::Coordinate3D Input>
inline Output project(const Input &points) const

Project a 3D point into pixel plane.

WARNING: Does not perform range checking!

Parameters:

points – 3D point to be projected

Returns:

Projected pixel coordinates

template<concepts::Coordinate2DMutableIterable Output, concepts::Coordinate3DIterable Input>
inline Output projectSequence(const Input &points, const bool dimensionCheck = true) const

Project a batch of 3D points into pixel plane.

Parameters:
  • points – Points to be projected.

  • dimensionCheck – Whether to perform resolution check, if true, output points outside of valid frame resolution will be omitted. If disabled, output point count and order will be the same as input points.

Returns:

Projected points in pixel plane.
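
A usage sketch, assuming std::vector of cv::Point types satisfies the coordinate concepts and that 'geometry' is an existing CameraGeometry instance:

// With dimensionCheck enabled (the default), points that project outside the
// valid resolution are dropped from the output.
const std::vector<cv::Point3f> points = {{0.1f, 0.2f, 1.5f}, {0.0f, 0.0f, 2.0f}};
const auto pixels = geometry.projectSequence<std::vector<cv::Point2f>>(points);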

template<concepts::Coordinate2D Input>
inline bool isWithinDimensions(const Input &point) const

Check whether given coordinates are within valid range.

Parameters:

point – Pixel coordinates

Returns:

True if the coordinate values are within camera resolution, false otherwise.

inline bool isUndistortionAvailable() const

Checks whether this camera geometry calibration contains coefficients for an undistortion model.

Returns:

True if undistortion is available, false otherwise

inline cv::Matx33f getCameraMatrix() const

Get the camera matrix in the format:

| mFx   0   mCx |
|  0   mFy  mCy |
|  0    0    1  |

Returns:

3x3 Camera matrix with pixel length values

template<concepts::Coordinate2DConstructible Output = cv::Point2f>
inline Output getFocalLength() const

Focal length

Returns:

Focal length in pixels

template<concepts::Coordinate2DConstructible Output = cv::Point2f>
inline Output getCentralPoint() const

Central point coordinates

Returns:

Central point coordinates in pixels

inline std::vector<float> getDistortion() const

Get distortion coefficients

Returns:

Vector containing distortion coefficients

inline DistortionModel getDistortionModel() const

Get distortion model

Returns:

DistortionModel type

inline cv::Size getResolution() const

Get the camera resolution.

Returns:

Camera sensor resolution

Private Functions

inline void generateLUTs()

Generates internal distortion look-up table to speed up undistortion.

template<concepts::Coordinate3DConstructible Output, concepts::Coordinate3D Input>
inline Output distortRadialTangential(const Input &point) const

Distort the Input point according to the Radial Tangential distortion model.

Template Parameters:
  • Output

  • Input

Parameters:

point

Returns:

the distorted point in the 3D space

template<concepts::Coordinate3DConstructible Output, concepts::Coordinate3D Input>
inline Output distortEquidistant(const Input &point) const

Distort the Input point according to the Equidistant distortion model.

Template Parameters:
  • Output

  • Input

Parameters:

point

Returns:

the distorted point in the 3D space

Private Members

std::vector<cv::Point3f> mDistortionLUT

Row-based distortion look-up table. Access index by: index = (y * width) + x

std::vector<cv::Point3f> mBackProjectLUT

Row-based back-projection look-up table. Access index by: index = (y * width) + x

std::vector<cv::Point2f> mDistortionPixelLUT

Row-based undistorted coordinate look-up table, containing undistorted points in pixel space. Access index by: index = (y * width) + x

std::vector<float> mDistortion

Distortion coefficients

float mFx

Focal length on x axis in pixels

float mFy

Focal length on y axis in pixels

float mCx

Central point coordinates on x axis

float mCy

Central point coordinate on y axis

cv::Size mResolution

Sensor resolution

float mMaxX

Max floating point coordinate x address value

float mMaxY

Max floating point coordinate y address value

DistortionModel mDistortionModel

Distortion model used

Friends

inline friend std::ostream &operator<<(std::ostream &os, const FunctionImplementation &var)
class CameraInputBase : public dv::io::InputBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/camera/camera_input_base.hpp>

Camera input base class to abstract live camera and recorded files with a common interface.

Subclassed by dv::io::camera::DVXplorerM, dv::io::camera::SyncCameraInputBase

Public Types

enum class Flatten

Event flattening modes, used to change polarity of events returned by the camera.

Values:

enumerator NONE

No change.

enumerator FLATTEN_ON

All events become ON events.

enumerator ON_ONLY
enumerator OFF_ONLY

Public Functions

virtual bool getFlipHorizontal() const = 0

Status of horizontal events flip.

Returns:

status of horizontal events flip.

virtual void setFlipHorizontal(bool flipHorizontalEvents) = 0

Flip events horizontally.

Parameters:

flipHorizontalEvents – flip events horizontally.

virtual bool getFlipVertical() const = 0

Status of vertical events flip.

Returns:

status of vertical events flip.

virtual void setFlipVertical(bool flipVerticalEvents) = 0

Flip events vertically.

Parameters:

flipVerticalEvents – flip events vertically.

virtual Flatten getFlatten() const = 0

Status of event polarity flattening.

Returns:

status of event polarity flattening.

virtual void setFlatten(Flatten flattenEvents) = 0

Flatten events polarity.

Parameters:

flattenEvents – flattening mode.

virtual cv::Rect getCropArea() const = 0

Get events Region of Interest (ROI).

Returns:

get events Region of Interest (ROI).

virtual void setCropArea(cv::Rect cropAreaEvents) = 0

Set events Region of Interest (ROI). Usually hardware accelerated.

Parameters:

cropAreaEvents – region of interest (ROI) position and size.

virtual imu::ImuModel getImuModel() const = 0

Return IMU model used on device.

Returns:

IMU model in use.

virtual float getPixelPitch() const = 0

Return pixel pitch distance for the connected camera model. The value is returned in meters.

Returns:

Pixel pitch distance in meters according to the connected device.

virtual std::chrono::microseconds getTimeInterval() const = 0

Get the time interval for data commit.

Returns:

Time interval in microseconds.

virtual void setTimeInterval(std::chrono::microseconds timeInterval) = 0

Set a new time interval value for data commit. Data is put in the queues for getNextEventBatch(), readNext(), … at this interval’s rate.

Parameters:

timeInterval – New time interval value in microseconds.

virtual std::chrono::microseconds getTimestampOffset() const = 0

Get the timestamp offset.

Returns:

Absolute timestamp offset value in microseconds.

inline std::optional<dv::EventPacket> getNextEventPacket()

Parse and retrieve next event packet (internal format).

Returns:

Event packet or std::nullopt if no events were received since last read.

inline virtual std::optional<dv::EventStore> getNextEventBatch() override

Parse and retrieve next event batch.

Returns:

Event batch or std::nullopt if no events were received since last read.

inline virtual std::optional<dv::Frame> getNextFrame() override

Parse and retrieve next frame.

Returns:

Frame or std::nullopt if no frames were received since last read.

inline virtual std::optional<std::vector<dv::IMU>> getNextImuBatch() override

Parse and retrieve next IMU data batch.

Returns:

IMU data batch or std::nullopt if no IMU data was received since last read.

inline virtual std::optional<std::vector<dv::Trigger>> getNextTriggerBatch() override

Parse and retrieve next trigger data batch.

Returns:

Trigger data batch or std::nullopt if no triggers were received since last read.

inline virtual bool isStreamAvailable(const std::string_view streamName) const override

Check whether a stream with given name is available.

Returns:

True if data stream is available, false otherwise.

inline int64_t getEventSeekTime() const

Get latest timestamp of event data stream that has been read from the capture class.

Returns:

Latest processed event timestamp; returns -1 if no data was processed or stream is unavailable.

inline int64_t getFrameSeekTime() const

Get latest timestamp of frames stream that has been read from the capture class.

Returns:

Latest processed frame timestamp; returns -1 if no data was processed or stream is unavailable.

inline int64_t getImuSeekTime() const

Get latest timestamp of imu data that has been read from the capture class.

Returns:

Latest processed imu data timestamp; returns -1 if no data was processed or stream is unavailable.

inline int64_t getTriggerSeekTime() const

Get latest timestamp of trigger data stream that has been read from the capture class.

Returns:

Latest processed trigger timestamp; returns -1 if no data was processed or stream is unavailable.

inline DataReadVariant readNext()

Read a packet from the camera and return a variant of any packet. You can use std::visit with dv::io::DataReadHandler to handle each type of packet using callback methods. This method might not maintain timestamp monotonicity between different stream types.

Returns:

A variant containing data packet from the camera.

inline bool handleNext(DataReadHandler &handler)

Read next packet from the camera and use a handler object to handle all types of packets. The function returns true if end-of-file was not reached, so this function call can be used in a while loop like so:

while (camera.handleNext(handler)) {
        // While-loop executes after each packet
}

Parameters:

handler – Handler instance that contains callback functions to handle different packets.

Returns:

False to indicate end of data stream, true to continue.

Protected Attributes

SortedPacketBuffers mBuffers

Friends

inline friend std::ostream &operator<<(std::ostream &os, const Flatten &var)
template<size_t radius>
struct CircleCoordinates
template<>
struct CircleCoordinates<3>

Public Static Attributes

static std::vector<Eigen::Vector2i, Eigen::aligned_allocator<Eigen::Vector2i>> coords{{Eigen::Vector2i{0, 3}, Eigen::Vector2i{1, 3}, Eigen::Vector2i{2, 2}, Eigen::Vector2i{3, 1}, Eigen::Vector2i{3, 0}, Eigen::Vector2i{3, -1}, Eigen::Vector2i{2, -2}, Eigen::Vector2i{1, -3}, Eigen::Vector2i{0, -3}, Eigen::Vector2i{-1, -3}, Eigen::Vector2i{-2, -2}, Eigen::Vector2i{-3, -1}, Eigen::Vector2i{-3, 0}, Eigen::Vector2i{-3, 1}, Eigen::Vector2i{-2, 2}, Eigen::Vector2i{-1, 3}}}
template<>
struct CircleCoordinates<4>

Public Static Attributes

static std::vector<Eigen::Vector2i, Eigen::aligned_allocator<Eigen::Vector2i>> coords{{Eigen::Vector2i{0, 4}, Eigen::Vector2i{1, 4}, Eigen::Vector2i{2, 3}, Eigen::Vector2i{3, 2}, Eigen::Vector2i{4, 1}, Eigen::Vector2i{4, 0}, Eigen::Vector2i{4, -1}, Eigen::Vector2i{3, -2}, Eigen::Vector2i{2, -3}, Eigen::Vector2i{1, -4}, Eigen::Vector2i{0, -4}, Eigen::Vector2i{-1, -4}, Eigen::Vector2i{-2, -3}, Eigen::Vector2i{-3, -2}, Eigen::Vector2i{-4, -1}, Eigen::Vector2i{-4, 0}, Eigen::Vector2i{-4, 1}, Eigen::Vector2i{-3, 2}, Eigen::Vector2i{-2, 3}, Eigen::Vector2i{-1, 4}}}
template<>
struct CircleCoordinates<5>

Public Static Attributes

static std::vector<Eigen::Vector2i, Eigen::aligned_allocator<Eigen::Vector2i>> coords{{Eigen::Vector2i{0, 5}, Eigen::Vector2i{1, 5}, Eigen::Vector2i{2, 5}, Eigen::Vector2i{3, 4}, Eigen::Vector2i{4, 3}, Eigen::Vector2i{5, 2}, Eigen::Vector2i{5, 1}, Eigen::Vector2i{5, 0}, Eigen::Vector2i{5, -1}, Eigen::Vector2i{5, -2}, Eigen::Vector2i{4, -3}, Eigen::Vector2i{3, -4}, Eigen::Vector2i{2, -5}, Eigen::Vector2i{1, -5}, Eigen::Vector2i{0, -5}, Eigen::Vector2i{-1, -5}, Eigen::Vector2i{-2, -5}, Eigen::Vector2i{-3, -4}, Eigen::Vector2i{-4, -3}, Eigen::Vector2i{-5, -2}, Eigen::Vector2i{-5, -1}, Eigen::Vector2i{-5, 0}, Eigen::Vector2i{-5, 1}, Eigen::Vector2i{-5, 2}, Eigen::Vector2i{-4, 3}, Eigen::Vector2i{-3, 4}, Eigen::Vector2i{-2, 5}, Eigen::Vector2i{-1, 5}}}
template<>
struct CircleCoordinates<6>

Public Static Attributes

static std::vector<Eigen::Vector2i, Eigen::aligned_allocator<Eigen::Vector2i>> coords{{Eigen::Vector2i{0, 6}, Eigen::Vector2i{1, 6}, Eigen::Vector2i{2, 6}, Eigen::Vector2i{3, 5}, Eigen::Vector2i{4, 4}, Eigen::Vector2i{5, 3}, Eigen::Vector2i{6, 2}, Eigen::Vector2i{6, 1}, Eigen::Vector2i{6, 0}, Eigen::Vector2i{6, -1}, Eigen::Vector2i{6, -2}, Eigen::Vector2i{5, -3}, Eigen::Vector2i{4, -4}, Eigen::Vector2i{3, -5}, Eigen::Vector2i{2, -6}, Eigen::Vector2i{1, -6}, Eigen::Vector2i{0, -6}, Eigen::Vector2i{-1, -6}, Eigen::Vector2i{-2, -6}, Eigen::Vector2i{-3, -5}, Eigen::Vector2i{-4, -4}, Eigen::Vector2i{-5, -3}, Eigen::Vector2i{-6, -2}, Eigen::Vector2i{-6, -1}, Eigen::Vector2i{-6, 0}, Eigen::Vector2i{-6, 1}, Eigen::Vector2i{-6, 2}, Eigen::Vector2i{-5, 3}, Eigen::Vector2i{-4, 4}, Eigen::Vector2i{-3, 5}, Eigen::Vector2i{-2, 6}, Eigen::Vector2i{-1, 6}}}
template<>
struct CircleCoordinates<7>

Public Static Attributes

static std::vector<Eigen::Vector2i, Eigen::aligned_allocator<Eigen::Vector2i>> coords{{Eigen::Vector2i{0, 7}, Eigen::Vector2i{1, 7}, Eigen::Vector2i{2, 7}, Eigen::Vector2i{3, 7}, Eigen::Vector2i{4, 6}, Eigen::Vector2i{5, 5}, Eigen::Vector2i{6, 4}, Eigen::Vector2i{7, 3}, Eigen::Vector2i{7, 2}, Eigen::Vector2i{7, 1}, Eigen::Vector2i{7, 0}, Eigen::Vector2i{7, -1}, Eigen::Vector2i{7, -2}, Eigen::Vector2i{7, -3}, Eigen::Vector2i{6, -4}, Eigen::Vector2i{5, -5}, Eigen::Vector2i{4, -6}, Eigen::Vector2i{3, -7}, Eigen::Vector2i{2, -7}, Eigen::Vector2i{1, -7}, Eigen::Vector2i{0, -7}, Eigen::Vector2i{-1, -7}, Eigen::Vector2i{-2, -7}, Eigen::Vector2i{-3, -7}, Eigen::Vector2i{-4, -6}, Eigen::Vector2i{-5, -5}, Eigen::Vector2i{-6, -4}, Eigen::Vector2i{-7, -3}, Eigen::Vector2i{-7, -2}, Eigen::Vector2i{-7, -1}, Eigen::Vector2i{-7, 0}, Eigen::Vector2i{-7, 1}, Eigen::Vector2i{-7, 2}, Eigen::Vector2i{-7, 3}, Eigen::Vector2i{-6, 4}, Eigen::Vector2i{-5, 5}, Eigen::Vector2i{-4, 6}, Eigen::Vector2i{-3, 7}, Eigen::Vector2i{-2, 7}, Eigen::Vector2i{-1, 7}, Eigen::Vector2i{0, 7}}}
class CircularTimeSurfaceView

Public Types

using CoordVector = std::vector<Eigen::Vector2i, Eigen::aligned_allocator<Eigen::Vector2i>>

Public Functions

inline explicit CircularTimeSurfaceView(CoordVector &coords)
inline explicit CircularTimeSurfaceView(CoordVector &&coords)
inline auto getTimestamp(const dv::Event &e, const Eigen::Vector2i &circleCoords, const TimeSurface &ts) const
template<typename ITERATOR>
inline auto circularIncrement(const ITERATOR it) const
template<typename ITERATOR>
inline auto circularDecrement(const ITERATOR it) const

Public Members

CoordVector mCoords
struct CoarseFineBias

On-chip coarse-fine bias current configuration. See ‘https://docs.inivation.com/hardware/hardware-advanced-usage/biasing.html’ for more details.

Public Functions

constexpr CoarseFineBias() = default
inline constexpr CoarseFineBias(const CoarseFineBiasSex sex)
inline constexpr CoarseFineBias(const uint8_t coarse, const uint8_t fine, const CoarseFineBiasSex sex, const CoarseFineBiasType type = CoarseFineBiasType::NORMAL, const bool enable = true)

Public Members

uint8_t coarseValue = {0}

Coarse current, from 0 to 7, creates big variations in output current.

uint8_t fineValue = {0}

Fine current, from 0 to 255, creates small variations in output current.

bool enabled = {false}

Whether this bias is enabled or not.

CoarseFineBiasSex sex = {CoarseFineBiasSex::N_TYPE}

Bias sex: true for ‘N’ type, false for ‘P’ type.

CoarseFineBiasType type = {CoarseFineBiasType::NORMAL}

Bias type: true for ‘Normal’, false for ‘Cascode’.

CoarseFineBiasCurrentLevel currentLevel = {CoarseFineBiasCurrentLevel::NORMAL}

Bias current level: true for ‘Normal’, false for ‘Low’.

class CompressionSupport

Subclassed by dv::io::compression::Lz4CompressionSupport, dv::io::compression::NoneCompressionSupport, dv::io::compression::ZstdCompressionSupport

Public Functions

inline explicit CompressionSupport(const CompressionType type)
virtual ~CompressionSupport() = default
virtual void compress(dv::io::support::IODataBuffer &packet) = 0
inline CompressionType getCompressionType() const

Private Members

CompressionType mType
class Config
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/mono_camera_writer.hpp>

A configuration structure for the MonoCameraWriter.

Public Functions

inline void addStreamMetadata(const std::string &name, const std::pair<std::string, dv::io::support::VariantValueOwning> &metadataEntry)

Add a metadata entry for a data type stream.

Parameters:
  • name – Name of the stream.

  • metadataEntry – Metadata entry consisting of a pair, where first element is the key name of the stream and second element is the value.

inline void addEventStream(const cv::Size &resolution, const std::string &name = "events", const std::optional<std::string> &source = std::nullopt)

Add an event stream with a given resolution.

Parameters:
  • resolution – Resolution of the event sensor.

  • name – Name of the stream

  • source – Name of the source camera.

inline void addFrameStream(const cv::Size &resolution, const std::string &name = "frames", const std::optional<std::string> &source = std::nullopt)

Add a frame stream with a given resolution.

Parameters:
  • resolution – Resolution of the frame sensor.

  • name – Name of the stream

  • source – Name of the source camera.

inline void addImuStream(const std::string &name = "imu", const std::optional<std::string> &source = std::nullopt)

Add an imu data stream.

Parameters:

  • name – Stream name, with a default value of “imu”.

  • source – Name of the source camera.

inline void addTriggerStream(const std::string &name = "triggers", const std::optional<std::string> &source = std::nullopt)

Add a trigger stream.

Parameters:

  • name – Stream name, with a default value of “triggers”.

  • source – Name of the source camera.

template<class PacketType>
inline void addStream(const std::string &name, const std::optional<std::string> &source = std::nullopt)

Add a stream of given data type.

Template Parameters:

PacketType – Stream data packet type.

Parameters:
  • name – Name for the stream.

  • source – Camera name for the source of the data, usually a concatenation of “MODEL_SERIAL”, e.g. “DVXplorer_DXA000000”

inline std::optional<cv::Size> findStreamResolution(const std::string &name) const

Parse the resolution of a stream from its metadata. The resolution should be set as two metadata parameters: “sizeX” and “sizeY”.

Parameters:

name – Stream name.

Returns:

Configured resolution. std::nullopt if unavailable or incorrectly configured.

inline explicit Config(const std::string &cameraName, CompressionType compression = CompressionType::LZ4)

Create a config instance

Parameters:
  • cameraName – Camera name that produces the data, usually containing the production serial number.

  • compression – Compression type for the output file (LZ4 by default).
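
A minimal configuration sketch; the camera name, resolution, and stream choices below are placeholders, and Config is assumed to be nested in dv::io::MonoCameraWriter:

dv::io::MonoCameraWriter::Config makeConfig() {
    dv::io::MonoCameraWriter::Config config("DVXplorer_DXA000000", dv::CompressionType::LZ4);

    // One event stream at VGA resolution plus IMU and trigger streams with default names.
    config.addEventStream(cv::Size(640, 480));
    config.addImuStream();
    config.addTriggerStream();

    // The config is then handed to a dv::io::MonoCameraWriter together with an output
    // path; the writer constructor itself is not documented in this section.
    return config;
}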

Public Members

dv::CompressionType compression

Compression type for this file.

std::string cameraName

Camera name that produces the data, usually contains production serial number.

Private Members

std::map<std::string, std::string> customDataStreams
std::map<std::string, std::map<std::string, dv::io::support::VariantValueOwning>> customDataStreamsMetadata

Friends

friend class dv::io::MonoCameraWriter
friend class dv::io::StereoCameraWriter
class Connection : public std::enable_shared_from_this<Connection>

Connection helper class that maintains shared pointer to itself when called on the public API methods.

This class should be wrapped in a shared pointer and the start method should be called. This will intrinsically increment the reference count to maintain the pointer to itself even if the wrapper shared_ptr goes out of scope, until the instance gets API calls to write data into the buffer. During destruction, the instance will remove its own pointer from a connection list in the top-level class.

(Personal comment by Rokas): this seems over-engineered and unnecessary, but it is the way ASIO works; although there are other ways to implement it, other approaches lead to undefined behavior.

Public Functions

inline Connection(WriteOrderedSocket &&socket, NetworkWriter *const server)
inline ~Connection()
inline void start()
inline void close()
inline void writePacket(const std::shared_ptr<const dv::io::support::IODataBuffer> &packet)
inline bool isOpen() const

Private Functions

inline void writeIOHeader(const std::shared_ptr<const dv::io::support::IODataBuffer> &ioHeader)
inline void keepAliveByReading()
inline void handleError(const boost::system::error_code &error, const std::string_view message)

Private Members

NetworkWriter *mParent
WriteOrderedSocket mSocket
uint8_t mKeepAliveReadSpace = {0}
template<class Functor>
class ContrastMaximizationWrapper
#include </builds/inivation/dv/dv-processing/include/dv-processing/optimization/contrast_maximization_wrapper.hpp>

Wrapper for all contrast maximization algorithms. For more information about contrast maximization please check “contrast_maximization_rotation.hpp” or “contrast_maximization_translation_and_depth.hpp”. This wrapper is mainly meant to set the non-linear differentiation parameters (see the constructor for more information). In addition, the class exposes to the user only the “optimize” function, which returns a struct containing the result of the non-linear optimization (successful or not), the number of iterations of the optimization, and the optimized parameters.

Template Parameters:

Functor – Functor that handles optimization. Cost is computed by overriding operator() method. For an example of a functor please check “contrast_maximization_rotation.hpp” or “contrast_maximization_translation_and_depth.hpp”.

Public Functions

inline ContrastMaximizationWrapper(std::unique_ptr<Functor> functor_, float learningRate, float epsfcn = 0, float ftol = 0.000345267, float gtol = 0, float xtol = 0.000345267, int maxfev = 400)
Parameters:
  • functor_ – Functor handling the contrast maximization optimization. The functor should inherit “OptimizationFunctor” and overload the “int operator()” method to compute the cost for contrast maximization and optimize the pre-defined parameters.

  • learningRate – Constant multiplying the input value to find the new value at which the function will be evaluated. E.g. assuming the function is evaluated at x -> f(x), the next input sample x’ is computed as x’ = abs(x) * learningRate.

  • epsfcn – error precision

  • ftol – tolerance for the norm of the vector function

  • gtol – tolerance for the norm of the gradient of the error vector

  • xtol – tolerance for the norm of the solution vector

  • maxfev – Maximum number of function evaluations. Note that the default parameters are taken from the defaults of the LevenbergMarquardt optimizer.

inline optimizationOutput optimize(const Eigen::VectorXf &initialValues)

Function optimizing cost defined in mFunctor (inside operator() method).

Parameters:

initialValues – Initial values of variables to be optimized.

Returns:

Optimized variables that minimize the cost.

Private Members

std::unique_ptr<Functor> mFunctor = nullptr
optimizationParameters mParams
struct controlInBuffer

Public Members

uint8_t setup[LIBUSB_CONTROL_SETUP_SIZE] = {}
uint8_t buffer[MAX_CONTROL_TRANSFER_SIZE] = {}
controlInCallbackType callback = {}
struct controlOutBuffer

Public Members

uint8_t setup[LIBUSB_CONTROL_SETUP_SIZE] = {}
uint8_t buffer[MAX_CONTROL_TRANSFER_SIZE] = {}
controlOutCallbackType callback = {}
struct DataReadHandler
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/data_read_handler.hpp>

Read handler that can handle all supported types in MonoCameraRecording.

Public Types

enum class OutputFlag

Values:

enumerator END_OF_FILE
enumerator CONTINUE

Public Functions

inline void operator()(const dv::EventStore &events)

Internal call to handle input data

Parameters:

events

inline void operator()(const dv::Frame &frame)

Internal call to handle input data

Parameters:

frame

inline void operator()(const std::vector<dv::Trigger> &triggers)

Internal call to handle input data

Parameters:

triggers

inline void operator()(const std::vector<dv::IMU> &imu)

Internal call to handle input data

Parameters:

imu

inline void operator()(const OutputFlag flag)

Internal call to handle input data

Parameters:

flag

Public Members

std::optional<std::function<void(const dv::EventStore&)>> mEventHandler = std::nullopt

Event handler that is going to be called on each arriving event batch.

std::optional<std::function<void(const dv::Frame&)>> mFrameHandler = std::nullopt

Frame handler that is called on each arriving frame.

std::optional<std::function<void(const std::vector<dv::IMU>&)>> mImuHandler = std::nullopt

IMU data handler that is going to be called on each arriving imu data batch.

std::optional<std::function<void(const std::vector<dv::Trigger>&)>> mTriggersHandler = std::nullopt

Trigger data handler that is going to be called on each arriving trigger data batch.

std::optional<std::function<void(const OutputFlag)>> mOutputFlagHandler = std::nullopt

A handler for output flags that can indicate some file behaviour, e.g. end-of-file.

bool eof = false

Is end of file reached.

int64_t seek = -1

Timestamp holding latest seek position of the recording
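
A usage sketch combining the optional handlers with a camera input that supports handleNext (see CameraInputBase above); the callback bodies are placeholders:

#include <iostream>

void consumeAll(dv::io::camera::CameraInputBase &input) {
    dv::io::DataReadHandler handler;

    handler.mEventHandler = [](const dv::EventStore &events) {
        std::cout << "Received " << events.size() << " events" << std::endl;
    };

    handler.mOutputFlagHandler = [](const dv::io::DataReadHandler::OutputFlag flag) {
        if (flag == dv::io::DataReadHandler::OutputFlag::END_OF_FILE) {
            std::cout << "End of stream" << std::endl;
        }
    };

    // handleNext() returns false once the input reports the end of its data.
    while (input.handleNext(handler)) {
    }
}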

class DAVIS : public dv::io::camera::USBDevice, public dv::io::camera::SyncCameraInputBase

Public Types

enum class Davis240BiasCF

Values:

enumerator Diff
enumerator On
enumerator Off
enumerator ApsCascode
enumerator DiffCascode
enumerator ApsReadoutSourceFollower
enumerator LocalBuffer
enumerator PixelInverter
enumerator Photoreceptor
enumerator PhotoreceptorSourceFollower
enumerator Refractory
enumerator AERPullDown
enumerator LCOLTimeout
enumerator AERPullUpX
enumerator AERPullUpY
enumerator PadFollower
enumerator ApsOverflowLevel
enumerator BiasBuffer
enum class Davis346BiasVDAC

Values:

enumerator ApsOverflowLevel
enumerator ApsCascode
enumerator ADCReferenceHigh
enumerator ADCReferenceLow
enumerator ADCTestVoltage
enum class Davis346BiasCF

Values:

enumerator LocalBuffer
enumerator PadFollower
enumerator Diff
enumerator On
enumerator Off
enumerator PixelInverter
enumerator Photoreceptor
enumerator PhotoreceptorSourceFollower
enumerator Refractory
enumerator ReadoutBuffer
enumerator ApsReadoutSourceFollower
enumerator ADCComparator
enumerator COLSelectLow
enumerator DACBuffer
enumerator LCOLTimeout
enumerator AERPullDown
enumerator AERPullUpX
enumerator AERPullUpY
enumerator BiasBuffer
enum class CDavisBiasVDAC

Values:

enumerator ApsCascode
enumerator OVG1Low
enumerator OVG2Low
enumerator TX2OVG2High
enumerator Gnd07
enumerator ADCTestVoltage
enumerator ADCReferenceHigh
enumerator ADCReferenceLow
enum class CDavisBiasCF

Values:

enumerator LocalBuffer
enumerator PadFollower
enumerator PixelInverter
enumerator Diff
enumerator On
enumerator Off
enumerator Photoreceptor
enumerator PhotoreceptorSourceFollower
enumerator Refractory
enumerator ArrayBiasBuffer
enumerator ArrayLogicBuffer
enumerator FallTime
enumerator RiseTime
enumerator ReadoutBuffer
enumerator ApsReadoutSourceFollower
enumerator ADCComparator
enumerator DACBuffer
enumerator LCOLTimeout
enumerator AERPullDown
enumerator AERPullUpX
enumerator AERPullUpY
enumerator BiasBuffer
using AutoExposureCallback = std::function<dv::Duration(const dv::Frame &frame)>

Public Functions

inline explicit DAVIS(const LogLevel deviceLogLevel = LogLevel::LVL_WARNING, loggerCallbackType deviceLogger = {})

Open the first DAVIS camera that can be found. Throws if device cannot be opened.

Parameters:
  • deviceLogLevel – initial log-level

  • deviceLogger – per-device logging callback, must be thread-safe

inline explicit DAVIS(const std::string_view filterBySerialNumber, const LogLevel deviceLogLevel = LogLevel::LVL_WARNING, loggerCallbackType deviceLogger = {})

Open the DAVIS camera with the specified serial number. Throws if device cannot be opened.

Parameters:
  • filterBySerialNumber – serial number to search for

  • deviceLogLevel – initial log-level

  • deviceLogger – per-device logging callback, must be thread-safe

inline explicit DAVIS(const DeviceDescriptor &deviceToOpen, const LogLevel deviceLogLevel = LogLevel::LVL_WARNING, loggerCallbackType deviceLogger = {})

Open the DAVIS camera corresponding to the specified descriptor. Throws if device cannot be opened.

Parameters:
  • deviceToOpen – device descriptor structure

  • deviceLogLevel – initial log-level

  • deviceLogger – per-device logging callback, must be thread-safe

inline ~DAVIS() override
inline virtual std::string getCameraName() const override

Get camera name, which is a combination of the camera model and the serial number.

Returns:

String containing the camera model and serial number separated by an underscore character.

inline uint32_t getLogicVersion() const

Get camera FPGA logic version.

Returns:

camera FPGA logic version

inline uint32_t getLogicPatchLevel() const

Get camera FPGA logic patch level.

Returns:

camera FPGA logic patch level

inline parser::DAVIS::SensorModel getSensorModel() const

Get the exact model of the DAVIS sensor chip mounted in this camera.

Returns:

camera DAVIS sensor chip model

inline virtual std::optional<cv::Size> getEventResolution() const override

Get event stream resolution.

Returns:

Event stream resolution, std::nullopt if event stream is unavailable.

inline virtual std::optional<cv::Size> getFrameResolution() const override

Retrieve frame stream resolution.

Returns:

Frame stream resolution or std::nullopt if the frame stream is not available.

inline virtual imu::ImuModel getImuModel() const override

Return IMU model used on device.

Returns:

IMU model in use.

inline virtual float getPixelPitch() const override

Return pixel pitch distance for the connected camera model. The value is returned in meters.

Returns:

Pixel pitch distance in meters according to the connected device.

inline virtual bool isMaster() const override

Report if this camera is a clock synchronization master.

Returns:

true if clock master, false otherwise.

inline virtual bool isEventStreamAvailable() const override

Check whether event stream is available.

Returns:

True if event stream is available, false otherwise.

inline virtual bool isFrameStreamAvailable() const override

Check whether frame stream is available.

Returns:

True if frame stream is available, false otherwise.

inline virtual bool isImuStreamAvailable() const override

Check whether IMU data is available.

Returns:

True if IMU data stream is available, false otherwise.

inline virtual bool isTriggerStreamAvailable() const override

Check whether trigger data is available.

Returns:

True if trigger data stream is available, false otherwise.

inline virtual bool isRunning() const override

Check whether any input data streams have terminated. For a live camera this should check if the device is still connected and functioning, while for a recording file this should check if any of the data streams have reached end-of-file (EOF). For a network input, this indicates the network stream is still connected.

Returns:

True if data read on all streams is still possible, false otherwise.

inline virtual bool isRunning(const std::string_view streamName) const override

Check whether the input data stream with the specified name is still active.

Returns:

True if data read on this stream is possible, false otherwise.

inline virtual bool isRunningAny() const override

Check whether any input data streams are still available. For a live camera this should check if the device is still connected and functioning and at least one data stream is active (different than isRunning()), while for a recording file this should check if any of the data streams have not yet reached end-of-file (EOF) and are still readable. For a network input, this indicates the network stream is still connected.

Returns:

True if data read on at least one stream is still possible, false otherwise.
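
A minimal acquisition sketch; the DAVIS class is assumed to live in the dv::io::camera namespace and the dv-processing headers are omitted:

#include <chrono>
#include <thread>

int main() {
    // Opens the first available DAVIS camera; throws if no device can be opened.
    dv::io::camera::DAVIS camera;

    while (camera.isRunning()) {
        if (const auto events = camera.getNextEventBatch(); events.has_value()) {
            // Process the event batch here.
        }
        else {
            // Nothing was committed within the current time interval; avoid a busy loop.
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    }
    return 0;
}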

inline virtual std::chrono::microseconds getTimeInterval() const override

Get the time interval for data commit.

Returns:

Time interval in microseconds.

inline virtual void setTimeInterval(const std::chrono::microseconds timeInterval) override

Set a new time interval value for data commit. Data is put in the queues for getNextEventBatch(), readNext(), … at this interval’s rate.

Parameters:

timeInterval – New time interval value in microseconds.

inline virtual std::chrono::microseconds getTimestampOffset() const override

Get the timestamp offset.

Returns:

Absolute timestamp offset value in microseconds.

inline dv::PixelArrangement getPixelArrangementEvents() const

Get pixel color filter arrangement for event readout on color DAVIS. Takes flips into account.

Returns:

current color filter

inline dv::PixelArrangement getPixelArrangementFrames() const

Get pixel color filter arrangement for frame readout on color DAVIS. Takes flips into account.

Returns:

current color filter

inline dv::EventColor getEventColor(const dv::Event &event) const

Determine the color of the Bayer color filter for a specific event, based on its pixel address. WHITE means White/No Filter. Please take into account that there are usually twice as many green pixels as there are red or blue ones.

Parameters:

event – event pixel to get color for.

Returns:

event color filter value.

inline bool isEventsRunning() const

Report if the event output is running.

Returns:

true if events active, false otherwise.

inline void setEventsRunning(const bool run)

Enable or disable the event output.

Parameters:

run – whether to enable the event output or not.

inline virtual bool getFlipHorizontal() const override

Status of horizontal events flip.

Returns:

status of horizontal events flip.

inline virtual void setFlipHorizontal(const bool flipHorizontalEvents) override

Flip events horizontally.

Parameters:

flipHorizontalEvents – flip events horizontally.

inline virtual bool getFlipVertical() const override

Status of vertical events flip.

Returns:

status of vertical events flip.

inline virtual void setFlipVertical(const bool flipVerticalEvents) override

Flip events vertically.

Parameters:

flipVerticalEvents – flip events vertically.

inline bool getFlipHorizontalEvents() const

Status of horizontal events flip.

Returns:

status of horizontal events flip.

inline void setFlipHorizontalEvents(const bool flipHorizontalEvents)

Flip events horizontally.

Parameters:

flipHorizontalEvents – flip events horizontally.

inline bool getFlipVerticalEvents() const

Status of vertical events flip.

Returns:

status of vertical events flip.

inline void setFlipVerticalEvents(const bool flipVerticalEvents)

Flip events vertically.

Parameters:

flipVerticalEvents – flip events vertically.

inline bool getExternalAERControl() const

Get external AER control status. If true, the AER handshake with the sensor chip is not done by our camera’s FPGA, but by some external system provided by the customer.

Returns:

external AER control status

inline void setExternalAERControl(const bool externalAERControl)

Enable or disable external AER control feature. If true, the AER handshake with the sensor chip is not done by our camera’s FPGA, but by some external system provided by the customer. This also disables the normal event output (isEventsRunning() returns false). Disabling this will not re-enable the normal event output automatically!

Parameters:

externalAERControl – true to enable external AER control, false to disable.

inline uint16_t getBackgroundActivityFilter() const

Get background activity noise filter time delta value. If 0, filtering is disabled.

Returns:

time delta value in 250µs multiples

inline void setBackgroundActivityFilter(const uint16_t timeIn250us)

Set background activity noise filter time delta value. If 0, filtering is disabled.

Parameters:

timeIn250us – time delta value in 250µs multiples

inline uint16_t getRefractoryPeriodFilter() const

Get refractory period filter time delta value. If 0, filtering is disabled.

Returns:

time delta value in 250µs multiples

inline void setRefractoryPeriodFilter(const uint16_t timeIn250us)

Set refractory period filter time delta value. If 0, filtering is disabled.

Parameters:

timeIn250us – time delta value in 250µs multiples

inline uint8_t getSkipFilter() const

Get event skip filter skip value. If 0, filtering is disabled.

Returns:

drop every Nth event

inline void setSkipFilter(const uint8_t skipEvery)

Set event skip filter skip value. Drops every Nth event. If 0, filtering is disabled.

Parameters:

skipEvery – drop every Nth event
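
A configuration sketch for the hardware event filters described above, assuming the DAVIS class namespace as in the earlier sketch; the concrete values are placeholders:

void configureNoiseFilters(dv::io::camera::DAVIS &camera) {
    // Background activity filter: 8 * 250 us = 2 ms time delta; 0 would disable it.
    camera.setBackgroundActivityFilter(8);

    // Refractory period filter: 2 * 250 us = 500 us; 0 would disable it.
    camera.setRefractoryPeriodFilter(2);

    // Keep every event; a non-zero value N would drop every Nth event.
    camera.setSkipFilter(0);
}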

inline virtual Flatten getFlatten() const override

Status of event polarity flattening.

Returns:

status of event polarity flattening.

inline virtual void setFlatten(const Flatten flatten) override

Flatten events polarity.

Parameters:

flatten – flattening mode.

inline virtual cv::Rect getCropArea() const override

Get events Region of Interest (ROI).

Returns:

get events Region of Interest (ROI).

inline virtual void setCropArea(const cv::Rect cropAreaEvents) override

Set events Region of Interest (ROI). Usually hardware accelerated.

Parameters:

cropAreaEvents – region of interest (ROI) position and size.

inline cv::Rect getCropAreaEvents() const

Get events Region of Interest (ROI).

Returns:

get events Region of Interest (ROI).

inline void setCropAreaEvents(cv::Rect cropAreaEvents)

Set events Region of Interest (ROI). Usually hardware accelerated.

Parameters:

cropAreaEvents – region of interest (ROI) position and size.

inline bool isFramesRunning() const

Report if the frame output is running.

Returns:

true if frames active, false otherwise.

inline void setFramesRunning(const bool run)

Enable or disable the frame output.

Parameters:

run – whether to enable the frame output or not.

inline void snapshot()

Takes a snapshot (one frame), like a photo-camera. A more efficient implementation than just toggling setFramesRunning() on and off. The frame output should not be running prior to calling this, as it only makes sense if frames are not being generated at the time. Also, setFrameInterval() should be set to zero if only doing snapshots, to ensure quicker readiness for the next one, since the frame interval delay is always observed after taking a frame.

inline dv::io::camera::parser::DAVIS::ColorMode getColorMode() const

Get frames color mode.

Returns:

frame color mode

inline void setColorMode(const dv::io::camera::parser::DAVIS::ColorMode colorMode)

Set frames color mode. For monochrome sensors, all modes are equivalent. For color sensors, DEFAULT will output an RGB frame after debayering, GRAYSCALE will debayer to one channel, and ORIGINAL will return the unchanged readout from the pixels, in case you want to do debayering yourself.

Parameters:

colorMode – frame color mode

inline bool getAutoExposure() const

Report if auto-exposure feature is enabled.

Returns:

true if auto-exposure enabled, false otherwise.

inline void setAutoExposure(const bool enabled)

Enable auto-exposure feature. Automatic exposure control, tries to set the exposure value automatically to an appropriate value to maximize information in the scene and minimize under- and over-exposure.

Parameters:

enabled – true to enable auto-exposure, false to disable.

inline void setAutoExposureCallback(AutoExposureCallback callback)

Set custom callback function to compute new exposure value based on current frame output.

Parameters:

callback
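
A sketch of a custom exposure policy; the threshold and durations are placeholders, dv::Duration is assumed to be expressed in microseconds, and the DAVIS namespace is assumed as above:

#include <opencv2/core.hpp>

void enableCustomAutoExposure(dv::io::camera::DAVIS &camera) {
    camera.setAutoExposureCallback([](const dv::Frame &frame) {
        // Mean brightness of the latest frame; frame.image is a cv::Mat.
        const double meanIntensity = cv::mean(frame.image)[0];
        // Shorter exposure for bright scenes, longer for dark ones.
        return meanIntensity > 128.0 ? dv::Duration(5'000) : dv::Duration(15'000);
    });

    // Enable auto-exposure so the callback takes effect (assumed interaction).
    camera.setAutoExposure(true);
}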

inline std::chrono::microseconds getExposureDuration() const

Get current frame exposure duration. Also works with auto-exposure, reporting last automatically set value.

Returns:

current exposure duration in microseconds.

inline void setExposureDuration(const std::chrono::microseconds exposureDurationUs)

Set frame exposure duration. Auto-exposure must be disabled.

Parameters:

exposureDurationUs – exposure duration in microseconds.

inline std::chrono::microseconds getFrameInterval() const

Get frame interval, the time between two consecutive frames.

Returns:

frame interval in microseconds.

inline void setFrameInterval(const std::chrono::microseconds frameIntervalUs)

Set frame interval, the time between two consecutive frames.

Parameters:

frameIntervalUs – frame interval in microseconds.

inline bool getFlipHorizontalFrames() const

Status of horizontal frames flip.

Returns:

status of horizontal frames flip.

inline void setFlipHorizontalFrames(const bool flipHorizontalFrames)

Flip frames horizontally.

Parameters:

flipHorizontalFrames – flip frames horizontally.

inline bool getFlipVerticalFrames() const

Status of vertical frames flip.

Returns:

status of vertical frames flip.

inline void setFlipVerticalFrames(const bool flipVerticalFrames)

Flip frames vertically.

Parameters:

flipVerticalFrames – flip frames vertically.

inline cv::Rect getCropAreaFrames() const

Get frames Region of Interest (ROI).

Returns:

get frames Region of Interest (ROI).

inline void setCropAreaFrames(cv::Rect cropAreaFrames)

Set frames Region of Interest (ROI). Usually hardware accelerated.

Parameters:

cropAreaFrames – region of interest (ROI) position and size.

inline bool isImuRunningAccelerometer() const

Status of IMU accelerometer.

Returns:

true if enabled, false otherwise.

inline void setImuRunningAccelerometer(const bool run)

Enable or disable IMU accelerometer.

Parameters:

run – true to enable, false to disable.

inline bool isImuRunningGyroscope() const

Status of IMU gyroscope.

Returns:

true if enabled, false otherwise.

inline void setImuRunningGyroscope(const bool run)

Enable or disable IMU gyroscope.

Parameters:

run – true to enable, false to disable.

inline bool isImuRunningTemperature() const

Status of IMU temperature measurement.

Returns:

true if enabled, false otherwise.

inline void setImuRunningTemperature(const bool run)

Enable or disable IMU temperature measurement.

Parameters:

run – true to enable, false to disable.
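
A minimal sketch enabling the IMU sensors, assuming camera is an already-opened instance of this class:

camera.setImuRunningAccelerometer(true);
camera.setImuRunningGyroscope(true);
camera.setImuRunningTemperature(true); // optional: also report the IMU temperature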

inline bool getIMUFlipX() const

Status of IMU X axis flipping.

Returns:

true if enabled, false otherwise.

inline void setIMUFlipX(const bool flipX)

Enable or disable IMU X axis flipping. Will negate (flip) all returned X axis values.

Parameters:

flipX – true to enable, false to disable.

inline bool getIMUFlipY() const

Status of IMU Y axis flipping.

Returns:

true if enabled, false otherwise.

inline void setIMUFlipY(const bool flipY)

Enable or disable IMU Y axis flipping. Will negate (flip) all returned Y axis values.

Parameters:

flipY – true to enable, false to disable.

inline bool getIMUFlipZ() const

Status of IMU Z axis flipping.

Returns:

true if enabled, false otherwise.

inline void setIMUFlipZ(const bool flipZ)

Enable or disable IMU Z axis flipping. Will negate (flip) all returned Z axis values.

Parameters:

flipZ – true to enable, false to disable.

inline imu::InvensenseAccelRange getImuAccelRange() const

Get current IMU accelerometer range.

Returns:

accelerometer range.

inline void setImuAccelRange(const imu::InvensenseAccelRange range)

Set IMU accelerometer range.

Parameters:

range – accelerometer range.

inline imu::InvensenseGyroRange getImuGyroRange() const

Get current IMU gyroscope range.

Returns:

gyroscope range.

inline void setImuGyroRange(const imu::InvensenseGyroRange range)

Set IMU gyroscope range.

Parameters:

range – gyroscope range.

inline bool isDetectorRunning() const

Report status of external signal detector.

Returns:

true if running, false otherwise.

inline void setDetectorRunning(const bool run)

Enable or disable external signal detector.

Parameters:

run – true to enable, false to disable.

inline bool getDetectorRisingEdges() const

Report status of rising edge detection on the SIGNAL_IN line.

Returns:

true if enabled, false otherwise.

inline void setDetectorRisingEdges(const bool detectRising)

Detect rising edges (low to high transitions) on the SIGNAL_IN line.

Parameters:

detectRising – true to enable, false to disable.

inline bool getDetectorFallingEdges() const

Report status of falling edge detection on the SIGNAL_IN line.

Returns:

true if enabled, false otherwise.

inline void setDetectorFallingEdges(const bool detectFalling)

Detect falling edges (high to low transitions) on the SIGNAL_IN line.

Parameters:

detectFalling – true to enable, false to disable.
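
A minimal sketch that reacts only to rising edges on the SIGNAL_IN line, assuming camera is an already-opened instance of this class:

camera.setDetectorRisingEdges(true);   // emit a trigger event on low-to-high transitions
camera.setDetectorFallingEdges(false); // ignore high-to-low transitions
camera.setDetectorRunning(true);       // start the external signal detector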

inline bool isGeneratorRunning() const

Report status of external signal generator.

Returns:

true if running, false otherwise.

inline void setGeneratorRunning(const bool run)

Enable or disable external signal generator (PWM-like output).

Parameters:

run – true to enable, false to disable.

inline std::chrono::microseconds getGeneratorLowTime() const

Get current PWM low time.

Returns:

low time in microseconds.

inline void setGeneratorLowTime(const std::chrono::microseconds lowTimeUs)

Set PWM low time for external signal generator.

Parameters:

lowTimeUs – low time in microseconds.

inline std::chrono::microseconds getGeneratorHighTime() const

Get current PWM high time.

Returns:

high time in microseconds.

inline void setGeneratorHighTime(const std::chrono::microseconds highTimeUs)

Set PWM high time for external signal generator.

Parameters:

highTimeUs – high time in microseconds.

inline bool getGeneratorInjectTriggerOnRisingEdge() const

Report status of trigger event injection feature for external signal generator rising edges.

Returns:

true if enabled, false otherwise.

inline void setGeneratorInjectTriggerOnRisingEdge(const bool injectRising)

Inject a trigger event of type EXTERNAL_GENERATOR_RISING_EDGE into the event stream from the device, every time a rising edge is generated by the PWM-like output of the external signal generator.

Parameters:

injectRising – true to inject trigger event, false to disable.

inline bool getGeneratorInjectTriggerOnFallingEdge() const

Report status of trigger event injection feature for external signal generator falling edges.

Returns:

true if enabled, false otherwise.

inline void setGeneratorInjectTriggerOnFallingEdge(const bool injectFalling)

Inject a trigger event of type EXTERNAL_GENERATOR_FALLING_EDGE into the event stream from the device, every time a falling edge is generated by the PWM-like output of the external signal generator.

Parameters:

injectFalling – true to inject trigger event, false to disable.
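
A minimal sketch that outputs a 1 kHz signal with 50% duty cycle (500 µs high, 500 µs low) and marks each rising edge in the event stream, assuming camera is an already-opened instance of this class:

camera.setGeneratorHighTime(std::chrono::microseconds{500});
camera.setGeneratorLowTime(std::chrono::microseconds{500});
camera.setGeneratorInjectTriggerOnRisingEdge(true);
camera.setGeneratorRunning(true); // start the PWM-like output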

inline std::pair<uint8_t, uint8_t> getDavis240BiasCoarseFine(const Davis240BiasCF bias) const

Get low-level coarse/fine values for a specific bias.

Parameters:

bias – bias to query.

Returns:

coarse/fine values.

inline void setDavis240BiasCoarseFine(const Davis240BiasCF bias, const uint8_t coarse, const uint8_t fine)

Set specific bias to new low-level coarse/fine values.

Parameters:
  • bias – bias to configure.

  • coarse – coarse bias value.

  • fine – fine bias value.

inline uint8_t getDavis346BiasVoltage(const Davis346BiasVDAC bias) const

Get voltage value in 52.4 milliVolt increments for a specific voltage bias.

Parameters:

bias – voltage bias to query.

Returns:

voltage value in 52.4 milliVolt increments, from 0 to 63.

inline void setDavis346BiasVoltage(const Davis346BiasVDAC bias, const uint8_t voltageValue)

Set specific voltage bias to a new value in 52.4 milliVolt increments.

Parameters:
  • bias – voltage bias to configure.

  • voltageValue – voltage value in 52.4 milliVolt increments, from 0 to 63.
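
For example, to target roughly 1.57 V on a given voltage bias, a value of 30 would be used (30 × 52.4 mV ≈ 1.572 V); the maximum value of 63 corresponds to about 3.3 V.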

inline std::pair<uint8_t, uint8_t> getDavis346BiasCoarseFine(const Davis346BiasCF bias) const

Get low-level coarse/fine values for a specific bias.

Parameters:

bias – bias to query.

Returns:

coarse/fine values.

inline void setDavis346BiasCoarseFine(const Davis346BiasCF bias, const uint8_t coarse, const uint8_t fine)

Set specific bias to new low-level coarse/fine values.

Parameters:
  • bias – bias to configure.

  • coarse – coarse bias value.

  • fine – fine bias value.

inline uint8_t getCDavisBiasVoltage(const CDavisBiasVDAC bias) const

Get voltage value in 52.4 milliVolt increments for a specific voltage bias.

Parameters:

bias – voltage bias to query.

Returns:

voltage value in 52.4 milliVolt increments, from 0 to 63.

inline void setCDavisBiasVoltage(const CDavisBiasVDAC bias, const uint8_t voltageValue)

Set specific voltage bias to a new value in 52.4 milliVolt increments.

Parameters:
  • bias – voltage bias to configure.

  • voltageValue – voltage value in 52.4 milliVolt increments, from 0 to 63.

inline std::pair<uint8_t, uint8_t> getCDavisBiasCoarseFine(const CDavisBiasCF bias) const

Get low-level coarse/fine values for a specific bias.

Parameters:

bias – bias to query.

Returns:

coarse/fine values.

inline void setCDavisBiasCoarseFine(const CDavisBiasCF bias, const uint8_t coarse, const uint8_t fine)

Set specific bias to new low-level coarse/fine values.

Parameters:
  • bias – bias to configure.

  • coarse – coarse bias value.

  • fine – fine bias value.

inline bool getCDavisAdjustOVG1Low() const
inline void setCDavisAdjustOVG1Low(const bool enable)
inline bool getCDavisAdjustOVG2Low() const
inline void setCDavisAdjustOVG2Low(const bool enable)
inline bool getCDavisAdjustTX2OVGHigh() const
inline void setCDavisAdjustTX2OVGHigh(const bool enable)
inline std::chrono::microseconds getUSBEarlyPacketDelay() const

Get value of USB early packet timeout.

Returns:

timeout in microseconds.

inline void setUSBEarlyPacketDelay(const std::chrono::microseconds earlyPacketDelayUs)

Send data over USB early if this timeout is reached, instead of waiting for buffers to fill up. The timeout on the device is expressed in 125µs time-slices.

Parameters:

earlyPacketDelayUs – timeout in microseconds.
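
A short sketch, assuming camera is an already-opened instance of this class; the device rounds the value to 125µs time-slices, so 1000 µs becomes 8 slices:

camera.setUSBEarlyPacketDelay(std::chrono::microseconds{1'000}); // commit USB packets after at most 1 ms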

Public Members

bool muxHasStatistics

Feature test: Multiplexer statistics support (event drops).

bool dvsHasPixelFilter

Feature test: DVS pixel-level filtering.

bool dvsHasBackgroundActivityFilter

Feature test: DVS Background Activity filter (and Refractory Period filter).

bool dvsHasROIFilter

Feature test: DVS ROI filter.

bool dvsHasSkipFilter

Feature test: DVS event skip filter.

bool dvsHasPolarityFilter

Feature test: DVS polarity suppression filter.

bool dvsHasStatistics

Feature test: DVS statistics support.

bool apsHasGlobalShutter

Feature test: APS supports Global Shutter.

bool extInputHasGenerator

Feature test: External Input module supports Signal-Generation.

cv::Mat pixelHistogram
cv::Mat msvHistogram

Public Static Functions

static inline auto findDevices(const std::string_view filterBySerialNumber = {})

Find connected DAVIS cameras.

Parameters:

filterBySerialNumber – only search for devices with this serial number

Returns:

a descriptor structure describing a compatible device
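
A small discovery sketch, assuming this class is dv::io::camera::DAVIS as the member types above suggest; the serial number shown is a made-up placeholder:

// Search for any connected DAVIS camera.
const auto anyDavis = dv::io::camera::DAVIS::findDevices();
// Or restrict the search to one specific device.
const auto myDavis = dv::io::camera::DAVIS::findDevices("00000123");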

Protected Functions

inline virtual void sendTimestampReset() override

Send a timestamp reset command to the device.

inline virtual void setTimestampOffset(const std::chrono::microseconds timestampOffset) override

Set a new timestamp offset value for the camera.

Parameters:

timestampOffset – New timestamp offset value in microseconds.

inline std::chrono::microseconds getGeneratorHighTimeInternal() const
inline std::chrono::microseconds getGeneratorLowTimeInternal() const
inline void setCropAreaEventsInternal(cv::Rect cropAreaEvents)
inline void setCropAreaFramesInternal(cv::Rect cropAreaFrames)

Private Types

enum class CoarseFineBiasSex

Coarse-fine bias sex: true for ‘N’ type, false for ‘P’ type.

Values:

enumerator N_TYPE
enumerator P_TYPE
enum class CoarseFineBiasType

Coarse-fine bias type: true for ‘Normal’, false for ‘Cascode’.

Values:

enumerator NORMAL
enumerator CASCODE
enum class CoarseFineBiasCurrentLevel

Coarse-fine bias current level: true for ‘Normal’, false for ‘Low’.

Values:

enumerator NORMAL
enumerator LOW
enum class ShiftedSourceBiasOperatingMode

Shifted-source bias operating mode.

Values:

enumerator SHIFTED_SOURCE

Standard mode.

enumerator HI_Z

High impedance (driven from outside).

enumerator TIED_TO_RAIL

Tied to ground (SSN) or VDD (SSP).

enum class ShiftedSourceBiasVoltageLevel

Shifted-source bias voltage level.

Values:

enumerator SPLIT_GATE

Standard mode (200-400mV).

enumerator SINGLE_DIODE

Higher shifted-source voltage (one cascode).

enumerator DOUBLE_DIODE

Even higher shifted-source voltage (two cascodes).

Private Functions

inline void shutdownCallback()
inline void usbDataCallback(const std::span<const uint8_t> data)
inline void dataParserCallback(parser::ParsedData data)
inline void timeInitCallback()
inline void sendVDACBias(const int address)
inline void sendCoarseFineBias(const int address)
inline void sendShiftedSourceBias(const int address)
inline dv::Duration computeAutomaticExposure(const dv::Frame &frame)

Private Members

uint32_t mLogicVersion
uint32_t mLogicPatch
cv::Size mEventResolution
cv::Size mFrameResolution
parser::DAVIS::SensorModel mSensorModel
imu::ImuModel mImuModel
float mLogicClockActual
float mUSBClockActual
float mADCClockActual
std::unique_ptr<parser::DAVIS::Parser> mParser
std::atomic<dv::PixelArrangement> mDvsColorFilter
mutable std::mutex mConfigLock
cv::Rect mCropAreaEvents
cv::Rect mCropAreaFrames
std::array<VDACBias, 8> mVDACBiases = {}
std::array<CoarseFineBias, 35> mCoarseFineBiases = {}
ShiftedSourceBias mShiftedSourceNBias = {}
ShiftedSourceBias mShiftedSourcePBias = {}
bool mAutoExposureEnabled = {false}
AutoExposureCallback mAutoExposureCallback = {}
std::chrono::microseconds mAutoExposureLast = {1}
mutable std::mutex mCallbackConfigLock
std::atomic<bool> mIsRunning = {true}
std::atomic<bool> mTimestampMaster = {true}
struct dv::io::camera::DAVIS mInfo
struct dv::io::camera::DAVIS mAutoExposure

Private Static Functions

static inline constexpr uint16_t caerBiasVDACGenerate(const VDACBias vdacBias)

Transform VDAC bias structure into internal integer representation, suited for sending directly to the device via caerDeviceConfigSet().

Parameters:

vdacBias – VDAC bias structure.

Returns:

internal integer representation for device configuration.

static inline constexpr VDACBias caerBiasVDACParse(const uint16_t vdacBias)

Transform internal integer representation, as received by calls to caerDeviceConfigGet(), into a VDAC bias structure, for easier handling and understanding of the various parameters.

Parameters:

vdacBias – internal integer representation from device.

Returns:

VDAC bias structure.

static inline constexpr uint16_t caerBiasCoarseFineGenerate(const CoarseFineBias coarseFineBias)

Transform coarse-fine bias structure into internal integer representation, suited for sending directly to the device via caerDeviceConfigSet().

Parameters:

coarseFineBias – coarse-fine bias structure.

Returns:

internal integer representation for device configuration.

static inline constexpr CoarseFineBias caerBiasCoarseFineParse(const uint16_t coarseFineBias)

Transform internal integer representation, as received by calls to caerDeviceConfigGet(), into a coarse-fine bias structure, for easier handling and understanding of the various parameters.

Parameters:

coarseFineBias – internal integer representation from device.

Returns:

coarse-fine bias structure.

static inline constexpr uint16_t caerBiasShiftedSourceGenerate(const ShiftedSourceBias shiftedSourceBias)

Transform shifted-source bias structure into internal integer representation, suited for sending directly to the device via caerDeviceConfigSet().

Parameters:

shiftedSourceBias – shifted-source bias structure.

Returns:

internal integer representation for device configuration.

static inline constexpr ShiftedSourceBias caerBiasShiftedSourceParse(const uint16_t shiftedSourceBias)

Transform internal integer representation, as received by calls to caerDeviceConfigGet(), into a shifted-source bias structure, for easier handling and understanding of the various parameters.

Parameters:

shiftedSourceBias – internal integer representation from device.

Returns:

shifted-source bias structure.

Private Static Attributes

static constexpr uint16_t MODULE_MULTIPLEXER = {0}

Module address: device-side Multiplexer configuration. The Multiplexer is responsible for mixing, timestamping and outputting (via USB) the various event types generated by the device. It is also responsible for timestamp generation and synchronization.

static constexpr uint16_t MODULE_DVS = {1}

Module address: device-side DVS configuration. The DVS state machine handshakes with the chip’s AER bus and gets the polarity events from it. It supports various configurable delays, as well as advanced filtering capabilities on the polarity events.

static constexpr uint16_t MODULE_APS = {2}

Module address: device-side APS (Frame) configuration. The APS (Active-Pixel-Sensor) is responsible for getting the normal, synchronous frame from the camera chip. It supports various options for very precise timing control, as well as Region of Interest imaging.

static constexpr uint16_t MODULE_IMU = {3}

Module address: device-side IMU (Inertial Measurement Unit) configuration. The IMU module connects to the external IMU chip and sends data on the device’s movement in space. It can configure various options on the external chip, such as accelerometer range or gyroscope refresh rate.

static constexpr uint16_t MODULE_EXTERNAL_INPUT = {4}

Module address: device-side External Input (signal detector/generator) configuration. The External Input module is used to detect external signals on the external input jack and inject an event into the event stream when this happens. It can detect pulses of a specific length or rising and falling edges. On some systems, a signal generator module is also present, which can generate PWM-like pulsed signals with configurable timing.

static constexpr uint16_t MODULE_BIAS = {5}

Module address: device-side chip bias configuration. Shared with DAVIS_CONFIG_CHIP. This state machine is responsible for configuring the chip’s bias generator.

static constexpr uint16_t MODULE_CHIP = {5}

Module address: device-side chip control configuration. Shared with DAVIS_CONFIG_BIAS. This state machine is responsible for configuring the chip’s internal control shift registers, to set special options.

static constexpr uint16_t MODULE_SYSINFO = {6}

Module address: device-side system information. The system information module provides various details on the device, such as currently installed logic revision or clock speeds. All its parameters are read-only. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation for more details on what information is available.

static constexpr uint16_t MODULE_USB = {9}

Module address: device-side USB output configuration. The USB output module forwards the data from the device and the FPGA/CPLD to the USB chip, usually a Cypress FX2 or FX3.

static constexpr uint16_t MUX_RUN = {0}

Parameter address for module DAVIS_CONFIG_MUX: run the Multiplexer state machine, which is responsible for mixing the various event types at the device level, timestamping them and outputting them via USB or other connectors.

static constexpr uint16_t MUX_TIMESTAMP_RUN = {1}

Parameter address for module DAVIS_CONFIG_MUX: run the Timestamp Generator inside the Multiplexer state machine, which will provide microsecond accurate timestamps to the events passing through.

static constexpr uint16_t MUX_TIMESTAMP_RESET = {2}

Parameter address for module DAVIS_CONFIG_MUX: reset the Timestamp Generator to zero. This also sends a reset pulse to all connected slave devices, resetting their timestamp too.

static constexpr uint16_t MUX_RUN_CHIP = {3}

Parameter address for module DAVIS_CONFIG_MUX: power up the chip’s bias generator, enabling the chip to work.

static constexpr uint16_t MUX_DROP_EXTINPUT_ON_TRANSFER_STALL = {4}

Parameter address for module DAVIS_CONFIG_MUX: drop External Input events if the USB output FIFO is full, instead of having them pile up at the input FIFOs.

static constexpr uint16_t MUX_DROP_DVS_ON_TRANSFER_STALL = {5}

Parameter address for module DAVIS_CONFIG_MUX: drop DVS events if the USB output FIFO is full, instead of having them pile up at the input FIFOs.

static constexpr uint16_t MUX_HAS_STATISTICS = {80}

Parameter address for module DAVIS_CONFIG_MUX: read-only parameter, information about the presence of the statistics feature. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t MUX_STATISTICS_EXTINPUT_DROPPED = {81}

Parameter address for module DAVIS_CONFIG_MUX: read-only parameter, representing the number of dropped External Input events on the device due to full USB buffers. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t MUX_STATISTICS_DVS_DROPPED = {83}

Parameter address for module DAVIS_CONFIG_MUX: read-only parameter, representing the number of dropped DVS events on the device due to full USB buffers. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t DVS_SIZE_COLUMNS = {0}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, contains the X axis resolution of the DVS events returned by the camera. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get proper size information that already considers the rotation and orientation settings.

static constexpr uint16_t DVS_SIZE_ROWS = {1}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, contains the Y axis resolution of the DVS events returned by the camera. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get proper size information that already considers the rotation and orientation settings.

static constexpr uint16_t DVS_ORIENTATION_INFO = {2}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, contains information on the orientation of the X/Y axes, whether they should be inverted or not on the host when parsing incoming events. Bit 2: dvsInvertXY, Bit 1: reserved, Bit 0: reserved. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get proper size information that already considers the rotation and orientation settings.

static constexpr uint16_t DVS_RUN = {3}

Parameter address for module DAVIS_CONFIG_DVS: run the DVS state machine and get polarity events from the chip by handshaking with its AER bus.

static constexpr uint16_t DVS_WAIT_ON_TRANSFER_STALL = {4}

Parameter address for module DAVIS_CONFIG_DVS: if the output FIFO for this module is full, stall the AER handshake with the chip and wait until it’s free again, instead of just continuing the handshake and dropping the resulting events.

static constexpr uint16_t DVS_EXTERNAL_AER_CONTROL = {5}

Parameter address for module DAVIS_CONFIG_DVS: enable external AER control. This ensures the chip and the DVS pixel array are running, but doesn’t do the handshake and leaves the ACK pin in high-impedance, to allow for an external system to take over the AER communication with the chip. DAVIS_CONFIG_DVS_RUN has to be turned off for this to work.

static constexpr uint16_t DVS_HAS_PIXEL_FILTER = {10}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, information about the presence of the pixel filter feature. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t DVS_FILTER_PIXEL_0_ROW = {11}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 0, Y axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_0_COLUMN = {12}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 0, X axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_1_ROW = {13}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 1, Y axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_1_COLUMN = {14}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 1, X axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_2_ROW = {15}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 2, Y axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_2_COLUMN = {16}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 2, X axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_3_ROW = {17}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 3, Y axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_3_COLUMN = {18}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 3, X axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_4_ROW = {19}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 4, Y axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_4_COLUMN = {20}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 4, X axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_5_ROW = {21}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 5, Y axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_5_COLUMN = {22}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 5, X axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_6_ROW = {23}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 6, Y axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_6_COLUMN = {24}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 6, X axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_7_ROW = {25}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 7, Y axis setting.

static constexpr uint16_t DVS_FILTER_PIXEL_7_COLUMN = {26}

Parameter address for module DAVIS_CONFIG_DVS: the pixel filter completely suppresses up to eight pixels in the DVS array, filtering out all events produced by them. This is the pixel 7, X axis setting.

static constexpr uint16_t DVS_HAS_BACKGROUND_ACTIVITY_FILTER = {30}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, information about the presence of the background-activity filter feature. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t DVS_FILTER_BACKGROUND_ACTIVITY = {31}

Parameter address for module DAVIS_CONFIG_DVS: enable the background-activity filter, which tries to remove events caused by transistor leakage, by rejecting uncorrelated events.

static constexpr uint16_t DVS_FILTER_BACKGROUND_ACTIVITY_TIME = {32}

Parameter address for module DAVIS_CONFIG_DVS: specify the time difference constant for the background-activity filter. Range: 0 - 4095, in 250µs units. Events that are correlated within this time-frame are let through, while others are filtered out.

static constexpr uint16_t DVS_FILTER_REFRACTORY_PERIOD = {33}

Parameter address for module DAVIS_CONFIG_DVS: enable the refractory period filter, which limits the firing rate of pixels. This is supported together with the background-activity filter.

static constexpr uint16_t DVS_FILTER_REFRACTORY_PERIOD_TIME = {34}

Parameter address for module DAVIS_CONFIG_DVS: specify the time constant for the refractory period filter. Range: 0 - 4095, in 250µs units. Pixels will be inhibited from generating new events during this time after the last event has fired.

static constexpr uint16_t DVS_HAS_ROI_FILTER = {40}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, information about the presence of the ROI filter feature. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t DVS_FILTER_ROI_START_COLUMN = {41}

Parameter address for module DAVIS_CONFIG_DVS: start position on the X axis for Region of Interest. Must be between 0 and DVS_SIZE_X-1, and be smaller or equal to DAVIS_CONFIG_DVS_FILTER_ROI_END_COLUMN.

static constexpr uint16_t DVS_FILTER_ROI_START_ROW = {42}

Parameter address for module DAVIS_CONFIG_DVS: start position on the Y axis for Region of Interest. Must be between 0 and DVS_SIZE_Y-1, and be smaller or equal to DAVIS_CONFIG_DVS_FILTER_ROI_END_ROW.

static constexpr uint16_t DVS_FILTER_ROI_END_COLUMN = {43}

Parameter address for module DAVIS_CONFIG_DVS: end position on the X axis for Region of Interest. Must be between 0 and DVS_SIZE_X-1, and be greater or equal to DAVIS_CONFIG_DVS_FILTER_ROI_START_COLUMN.

static constexpr uint16_t DVS_FILTER_ROI_END_ROW = {44}

Parameter address for module DAVIS_CONFIG_DVS: end position on the Y axis for Region of Interest. Must be between 0 and DVS_SIZE_Y-1, and be greater or equal to DAVIS_CONFIG_DVS_FILTER_ROI_START_ROW.

static constexpr uint16_t DVS_HAS_SKIP_FILTER = {50}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, information about the presence of the event skip filter feature. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t DVS_FILTER_SKIP_EVENTS = {51}

Parameter address for module DAVIS_CONFIG_DVS: enable the event skip filter, which simply throws away one event every N events (decimation filter).

static constexpr uint16_t DVS_FILTER_SKIP_EVENTS_EVERY = {52}

Parameter address for module DAVIS_CONFIG_DVS: number of events to let through before skipping one. Range: 0 - 255 events.

static constexpr uint16_t DVS_HAS_POLARITY_FILTER = {60}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, information about the presence of the polarity suppression filter feature. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t DVS_FILTER_POLARITY_FLATTEN = {61}

Parameter address for module DAVIS_CONFIG_DVS: flatten all polarities to OFF (0).

static constexpr uint16_t DVS_FILTER_POLARITY_SUPPRESS = {62}

Parameter address for module DAVIS_CONFIG_DVS: suppress one of the two ON/OFF polarities completely. Use DAVIS_CONFIG_DVS_FILTER_POLARITY_SUPPRESS_TYPE to select which.

static constexpr uint16_t DVS_FILTER_POLARITY_SUPPRESS_TYPE = {63}

Parameter address for module DAVIS_CONFIG_DVS: polarity to suppress (0=OFF, 1=ON). Use DAVIS_CONFIG_DVS_FILTER_POLARITY_SUPPRESS to enable.

static constexpr uint16_t DVS_HAS_STATISTICS = {80}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, information about the presence of the statistics feature. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t DVS_STATISTICS_EVENTS_ROW = {81}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, representing the number of row event transactions completed on the device. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t DVS_STATISTICS_EVENTS_COLUMN = {83}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, representing the number of column event transactions completed on the device. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t DVS_STATISTICS_EVENTS_DROPPED = {85}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, representing the number of dropped transaction sequences on the device due to full buffers. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t DVS_STATISTICS_FILTERED_PIXELS = {87}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, representing the number of dropped events due to the pixel filter. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t DVS_STATISTICS_FILTERED_BACKGROUND_ACTIVITY = {89}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, representing the number of dropped events due to the background-activity filter. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t DVS_STATISTICS_FILTERED_REFRACTORY_PERIOD = {91}

Parameter address for module DAVIS_CONFIG_DVS: read-only parameter, representing the number of dropped events due to the refractory period filter. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t APS_SIZE_COLUMNS = {0}

Parameter address for module DAVIS_CONFIG_APS: read-only parameter, contains the X axis resolution of the APS frames returned by the camera. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get proper size information that already considers the rotation and orientation settings.

static constexpr uint16_t APS_SIZE_ROWS = {1}

Parameter address for module DAVIS_CONFIG_APS: read-only parameter, contains the Y axis resolution of the APS frames returned by the camera. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get proper size information that already considers the rotation and orientation settings.

static constexpr uint16_t APS_ORIENTATION_INFO = {2}

Parameter address for module DAVIS_CONFIG_APS: read-only parameter, contains information on the orientation of the X/Y axes, whether they should be inverted or not on the host when parsing incoming pixels, as well as if the X or Y axes need to be flipped when reading the pixels. Bit 2: apsInvertXY, Bit 1: apsFlipX, Bit 0: apsFlipY. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get proper size information that already considers the rotation and orientation settings.

static constexpr uint16_t APS_COLOR_FILTER = {3}

Parameter address for module DAVIS_CONFIG_APS: read-only parameter, contains information on the type of color filter present on the device. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get proper color filter information.

static constexpr uint16_t APS_RUN = {4}

Parameter address for module DAVIS_CONFIG_APS: enable the APS module and take intensity images of the scene. While this parameter is enabled, frames will be taken continuously. To slow down the frame-rate, see DAVIS_CONFIG_APS_FRAME_INTERVAL. To only take snapshots, see DAVIS_CONFIG_APS_SNAPSHOT.

static constexpr uint16_t APS_WAIT_ON_TRANSFER_STALL = {5}

Parameter address for module DAVIS_CONFIG_APS: if the output FIFO for this module is full, stall the APS state machine and wait until it’s free again, instead of just dropping the pixels as they are being read out. This guarantees a complete frame readout, at the possible cost of slight timing differences between pixels. If disabled, incomplete frames may be transmitted and will then be dropped on the host, resulting in lower frame-rates, especially during high DVS traffic.

static constexpr uint16_t APS_HAS_GLOBAL_SHUTTER = {6}

Parameter address for module DAVIS_CONFIG_APS: read-only parameter, information about the presence of the global shutter feature. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t APS_GLOBAL_SHUTTER = {7}

Parameter address for module DAVIS_CONFIG_APS: enable Global Shutter mode instead of Rolling Shutter. The Global Shutter eliminates motion artifacts, but is noisier than the Rolling Shutter (worse quality).

static constexpr uint16_t APS_START_COLUMN_0 = {8}

Parameter address for module DAVIS_CONFIG_APS: start position on the X axis for Region of Interest 0. Must be between 0 and APS_SIZE_X-1, and be smaller or equal to DAVIS_CONFIG_APS_END_COLUMN_0.

static constexpr uint16_t APS_START_ROW_0 = {9}

Parameter address for module DAVIS_CONFIG_APS: start position on the Y axis for Region of Interest 0. Must be between 0 and APS_SIZE_Y-1, and be smaller or equal to DAVIS_CONFIG_APS_END_ROW_0.

static constexpr uint16_t APS_END_COLUMN_0 = {10}

Parameter address for module DAVIS_CONFIG_APS: end position on the X axis for Region of Interest 0. Must be between 0 and APS_SIZE_X-1, and be greater or equal to DAVIS_CONFIG_APS_START_COLUMN_0.

static constexpr uint16_t APS_END_ROW_0 = {11}

Parameter address for module DAVIS_CONFIG_APS: end position on the Y axis for Region of Interest 0. Must be between 0 and APS_SIZE_Y-1, and be greater or equal to DAVIS_CONFIG_APS_START_ROW_0.

static constexpr uint16_t APS_EXPOSURE = {12}

Parameter address for module DAVIS_CONFIG_APS: frame exposure time. Range: 0-4194303, in microseconds (maximum ~4s). Very precise for Global Shutter, slightly less exact for Rolling Shutter due to column-based timing constraints.

static constexpr uint16_t APS_FRAME_INTERVAL = {13}

Parameter address for module DAVIS_CONFIG_APS: time between consecutive frames. Range: 0-8388607, in microseconds (maximum ~8s). This can be used to set a frame-rate. Please note the frame-rate is best-effort, and may not be met if readout and exposure times exceed this value.

static constexpr uint16_t APS_TRANSFER = {14}

Parameter address for module DAVIS_CONFIG_APS (only for CDAVIS chip): charge transfer time in ADCClock cycles.

static constexpr uint16_t APS_RSFDSETTLE = {15}

Parameter address for module DAVIS_CONFIG_APS (only for CDAVIS chip): Rolling Shutter FD settle time in ADCClock cycles.

static constexpr uint16_t APS_GSPDRESET = {16}

Parameter address for module DAVIS_CONFIG_APS (only for CDAVIS chip): Global Shutter PD reset time in ADCClock cycles.

static constexpr uint16_t APS_GSRESETFALL = {17}

Parameter address for module DAVIS_CONFIG_APS (only for CDAVIS chip): Global Shutter Reset Fall time in ADCClock cycles.

static constexpr uint16_t APS_GSTXFALL = {18}

Parameter address for module DAVIS_CONFIG_APS (only for CDAVIS chip): Global Shutter Transfer Fall time in ADCClock cycles.

static constexpr uint16_t APS_GSFDRESET = {19}

Parameter address for module DAVIS_CONFIG_APS (only for CDAVIS chip): Global Shutter FD reset time in ADCClock cycles.

static constexpr uint16_t IMU_TYPE = {0}

Parameter address for module DAVIS_CONFIG_IMU: read-only parameter, contains information on the type of IMU chip being used in this device:

0 - no IMU present
1 - InvenSense MPU 6050/6150
2 - InvenSense MPU 9250

This is reserved for internal use and should not be used by anything other than libcaer.

static constexpr uint16_t IMU_ORIENTATION_INFO = {1}

Parameter address for module DAVIS_CONFIG_IMU: read-only parameter, contains information on the orientation of the X/Y/Z axes, whether they should be flipped or not on the host when parsing incoming IMU data samples. Bit 2: imuFlipX, Bit 1: imuFlipY, Bit 0: imuFlipZ. This is reserved for internal use and should not be used by anything other than libcaer. Generated IMU events are already properly flipped when returned to the user.

static constexpr uint16_t IMU_RUN_ACCELEROMETER = {2}

Parameter address for module DAVIS_CONFIG_IMU: enable the IMU’s accelerometer. This takes the IMU chip out of sleep.

static constexpr uint16_t IMU_RUN_GYROSCOPE = {3}

Parameter address for module DAVIS_CONFIG_IMU: enable the IMU’s gyroscope. This takes the IMU chip out of sleep.

static constexpr uint16_t IMU_RUN_TEMPERATURE = {4}

Parameter address for module DAVIS_CONFIG_IMU: enable the IMU’s temperature sensor. This takes the IMU chip out of sleep.

static constexpr uint16_t IMU_SAMPLE_RATE_DIVIDER = {5}

Parameter address for module DAVIS_CONFIG_IMU: this specifies the divider from the Gyroscope Output Rate used to generate the Sample Rate for the IMU. Valid values are from 0 to 255. The Sample Rate is generated like this: Sample Rate = Gyroscope Output Rate / (1 + DAVIS_CONFIG_IMU_SAMPLE_RATE_DIVIDER) where Gyroscope Output Rate = 8 kHz when DAVIS_CONFIG_IMU_DIGITAL_LOW_PASS_FILTER is disabled (set to 0 or 7), and 1 kHz when enabled. Note: the accelerometer output rate is 1 kHz. This means that for a Sample Rate greater than 1 kHz, the same accelerometer sample may be output multiple times.
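
For example, with the digital low-pass filter enabled (Gyroscope Output Rate = 1 kHz), a divider value of 9 yields a Sample Rate of 1000 / (1 + 9) = 100 Hz.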

static constexpr uint16_t IMU_ACCEL_DLPF = {6}

Parameter address for module DAVIS_CONFIG_IMU: this configures the digital low-pass filter for both the accelerometer and the gyroscope on InvenSense MPU 6050/6150 IMU devices, or for the accelerometer only on InvenSense MPU 9250. Valid values are from 0 to 7 and have the following meaning:

On InvenSense MPU 6050/6150:

0 - Accel: BW=260Hz, Delay=0ms, FS=1kHz - Gyro: BW=256Hz, Delay=0.98ms, FS=8kHz
1 - Accel: BW=184Hz, Delay=2.0ms, FS=1kHz - Gyro: BW=188Hz, Delay=1.9ms, FS=1kHz
2 - Accel: BW=94Hz, Delay=3.0ms, FS=1kHz - Gyro: BW=98Hz, Delay=2.8ms, FS=1kHz
3 - Accel: BW=44Hz, Delay=4.9ms, FS=1kHz - Gyro: BW=42Hz, Delay=4.8ms, FS=1kHz
4 - Accel: BW=21Hz, Delay=8.5ms, FS=1kHz - Gyro: BW=20Hz, Delay=8.3ms, FS=1kHz
5 - Accel: BW=10Hz, Delay=13.8ms, FS=1kHz - Gyro: BW=10Hz, Delay=13.4ms, FS=1kHz
6 - Accel: BW=5Hz, Delay=19.0ms, FS=1kHz - Gyro: BW=5Hz, Delay=18.6ms, FS=1kHz
7 - Accel: RESERVED, FS=1kHz - Gyro: RESERVED, FS=8kHz

On InvenSense MPU 9250:

0 - Accel: BW=218.1Hz, Delay=1.88ms, FS=1kHz
1 - Accel: BW=218.1Hz, Delay=1.88ms, FS=1kHz
2 - Accel: BW=99Hz, Delay=2.88ms, FS=1kHz
3 - Accel: BW=44.8Hz, Delay=4.88ms, FS=1kHz
4 - Accel: BW=21.2Hz, Delay=8.87ms, FS=1kHz
5 - Accel: BW=10.2Hz, Delay=16.83ms, FS=1kHz
6 - Accel: BW=5.05Hz, Delay=32.48ms, FS=1kHz
7 - Accel: BW=420Hz, Delay=1.38ms, FS=1kHz

static constexpr uint16_t IMU_ACCEL_FULL_SCALE = {7}

Parameter address for module DAVIS_CONFIG_IMU: select the full scale range of the accelerometer outputs. Valid values are:

0 - ±2 g
1 - ±4 g
2 - ±8 g
3 - ±16 g

static constexpr uint16_t IMU_GYRO_DLPF = {9}

Parameter address for module DAVIS_CONFIG_IMU: this configures the digital low-pass filter for the gyroscope on devices using the InvenSense MPU 9250. Valid values are from 0 to 7 and have the following meaning:

0 - Gyro: BW=250Hz, Delay=0.97ms, FS=8kHz
1 - Gyro: BW=184Hz, Delay=2.9ms, FS=1kHz
2 - Gyro: BW=92Hz, Delay=3.9ms, FS=1kHz
3 - Gyro: BW=41Hz, Delay=5.9ms, FS=1kHz
4 - Gyro: BW=20Hz, Delay=9.9ms, FS=1kHz
5 - Gyro: BW=10Hz, Delay=17.85ms, FS=1kHz
6 - Gyro: BW=5Hz, Delay=33.48ms, FS=1kHz
7 - Gyro: BW=3600Hz, Delay=0.17ms, FS=8kHz

static constexpr uint16_t IMU_GYRO_FULL_SCALE = {10}

Parameter address for module DAVIS_CONFIG_IMU: select the full scale range of the gyroscope outputs. Valid values are:

0 - ±250 °/s
1 - ±500 °/s
2 - ±1000 °/s
3 - ±2000 °/s

static constexpr uint16_t EXTINPUT_RUN_DETECTOR = {0}

Parameter address for module DAVIS_CONFIG_EXTINPUT: enable the signal detector module. It generates events when it sees certain types of signals, such as edges or pulses of a defined length, on the IN JACK signal. This can be useful to inject events into the event stream in response to external stimuli or controls, such as turning on a LED lamp.

static constexpr uint16_t EXTINPUT_DETECT_RISING_EDGES = {1}

Parameter address for module DAVIS_CONFIG_EXTINPUT: send a special EXTERNAL_INPUT_RISING_EDGE event when a rising edge is detected (transition from low voltage to high).

static constexpr uint16_t EXTINPUT_DETECT_FALLING_EDGES = {2}

Parameter address for module DAVIS_CONFIG_EXTINPUT: send a special EXTERNAL_INPUT_FALLING_EDGE event when a falling edge is detected (transition from high voltage to low).

static constexpr uint16_t EXTINPUT_DETECT_PULSES = {3}

Parameter address for module DAVIS_CONFIG_EXTINPUT: send a special EXTERNAL_INPUT_PULSE event when a pulse, of a specified, configurable polarity and length, is detected. See DAVIS_CONFIG_EXTINPUT_DETECT_PULSE_POLARITY and DAVIS_CONFIG_EXTINPUT_DETECT_PULSE_LENGTH for more details.

static constexpr uint16_t EXTINPUT_DETECT_PULSE_POLARITY = {4}

Parameter address for module DAVIS_CONFIG_EXTINPUT: the polarity the pulse must exhibit to be detected as such. ‘1’ means active high; a pulse will start when the signal goes from low to high and will continue to be seen as the same pulse as long as it stays high. ‘0’ means active low; a pulse will start when the signal goes from high to low and will continue to be seen as the same pulse as long as it stays low.

static constexpr uint16_t EXTINPUT_DETECT_PULSE_LENGTH = {5}

Parameter address for module DAVIS_CONFIG_EXTINPUT: the minimal length that a pulse must have to trigger the sending of a special event. Range: 1-1048575, in microseconds.

static constexpr uint16_t EXTINPUT_HAS_GENERATOR = {10}

Parameter address for module DAVIS_CONFIG_EXTINPUT: read-only parameter, information about the presence of the signal generator feature. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t EXTINPUT_RUN_GENERATOR = {11}

Parameter address for module DAVIS_CONFIG_EXTINPUT: enable the signal generator module. It generates a PWM-like signal based on configurable parameters and outputs it on the OUT JACK signal.

static constexpr uint16_t EXTINPUT_GENERATE_PULSE_POLARITY = {12}

Parameter address for module DAVIS_CONFIG_EXTINPUT: polarity of the PWM-like signal to be generated. ‘1’ means active high, ‘0’ means active low.

static constexpr uint16_t EXTINPUT_GENERATE_PULSE_INTERVAL = {13}

Parameter address for module DAVIS_CONFIG_EXTINPUT: the interval between the start of two consecutive pulses. Range: 1-1048575, in microseconds. This must be bigger or equal to DAVIS_CONFIG_EXTINPUT_GENERATE_PULSE_LENGTH. To generate a signal with 50% duty cycle, this would have to be exactly double of DAVIS_CONFIG_EXTINPUT_GENERATE_PULSE_LENGTH.

static constexpr uint16_t EXTINPUT_GENERATE_PULSE_LENGTH = {14}

Parameter address for module DAVIS_CONFIG_EXTINPUT: the length a pulse stays active. Range: 1-1048575, in microseconds. This must be smaller or equal to DAVIS_CONFIG_EXTINPUT_GENERATE_PULSE_INTERVAL. To generate a signal with 50% duty cycle, this would have to be exactly half of DAVIS_CONFIG_EXTINPUT_GENERATE_PULSE_INTERVAL.

static constexpr uint16_t EXTINPUT_GENERATE_INJECT_ON_RISING_EDGE = {15}

Parameter address for module DAVIS_CONFIG_EXTINPUT: enables event injection when a rising edge occurs in the generated signal; a special event EXTERNAL_GENERATOR_RISING_EDGE is emitted into the event stream.

static constexpr uint16_t EXTINPUT_GENERATE_INJECT_ON_FALLING_EDGE = {16}

Parameter address for module DAVIS_CONFIG_EXTINPUT: enables event injection when a falling edge occurs in the generated signal; a special event EXTERNAL_GENERATOR_FALLING_EDGE is emitted into the event stream.

static constexpr uint16_t SYSINFO_LOGIC_VERSION = {0}

Parameter address for module DAVIS_CONFIG_SYSINFO: read-only parameter, the version of the logic currently running on the device’s FPGA/CPLD. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t SYSINFO_CHIP_IDENTIFIER = {1}

Parameter address for module DAVIS_CONFIG_SYSINFO: read-only parameter, an integer used to identify the different types of sensor chips used on the device. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t SYSINFO_DEVICE_IS_MASTER = {2}

Parameter address for module DAVIS_CONFIG_SYSINFO: read-only parameter, whether the device is currently a timestamp master or slave when synchronizing multiple devices together. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t SYSINFO_LOGIC_CLOCK = {3}

Parameter address for module DAVIS_CONFIG_SYSINFO: read-only parameter, the frequency in MHz at which the main FPGA/CPLD logic is running. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t SYSINFO_ADC_CLOCK = {4}

Parameter address for module DAVIS_CONFIG_SYSINFO: read-only parameter, the frequency in MHz at which the FPGA/CPLD logic related to APS frame grabbing is running. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get this information.

static constexpr uint16_t SYSINFO_USB_CLOCK = {5}

Parameter address for module DAVIS_CONFIG_SYSINFO: read-only parameter, the frequency in MHz at which the FPGA/CPLD logic related to USB data transmission is running. This is reserved for internal use and should not be used by anything other than libcaer.

static constexpr uint16_t SYSINFO_CLOCK_DEVIATION = {6}

Parameter address for module DAVIS_CONFIG_SYSINFO: read-only parameter, the deviation factor for the clocks. Due to how FX3 generates the clocks, which are then used by FPGA/CPLD, they are not integers but have a fractional part. This is reserved for internal use and should not be used by anything other than libcaer.

static constexpr uint16_t SYSINFO_LOGIC_PATCH = {7}

Parameter address for module DAVIS_CONFIG_SYSINFO: read-only parameter, the patch version of the logic currently running on the device’s FPGA/CPLD. This is reserved for internal use and should not be used by anything other than libcaer.

static constexpr uint16_t USB_RUN = {0}

Parameter address for module DAVIS_CONFIG_USB: enable the USB FIFO module, which transfers the data from the FPGA/CPLD to the USB chip, to be then sent to the host. Turning this off will suppress any USB data communication!

static constexpr uint16_t USB_EARLY_PACKET_DELAY = {1}

Parameter address for module DAVIS_CONFIG_USB: the time delay after which a packet of data is committed to USB, even if it is not full yet (short USB packet). The value is in 125µs time-slices, corresponding to how USB schedules its operations (a value of 4 for example would mean waiting at most 0.5ms until sending a short USB packet to the host).

static constexpr uint16_t DAVIS240_CHIP_DIGITALMUX0 = {128}

Parameter address for module DAVIS240_CONFIG_CHIP: DAVIS240 chip configuration. These are for expert control and should never be used or changed unless for advanced debugging purposes. To change the Global Shutter configuration, please use DAVIS_CONFIG_APS_GLOBAL_SHUTTER instead. On DAVIS240B cameras, DAVIS240_CONFIG_CHIP_SPECIALPIXELCONTROL can be used to enable the test pixel array.

static constexpr uint16_t DAVIS240_CHIP_DIGITALMUX1 = {129}
static constexpr uint16_t DAVIS240_CHIP_DIGITALMUX2 = {130}
static constexpr uint16_t DAVIS240_CHIP_DIGITALMUX3 = {131}
static constexpr uint16_t DAVIS240_CHIP_ANALOGMUX0 = {132}
static constexpr uint16_t DAVIS240_CHIP_ANALOGMUX1 = {133}
static constexpr uint16_t DAVIS240_CHIP_ANALOGMUX2 = {134}
static constexpr uint16_t DAVIS240_CHIP_BIASMUX0 = {135}
static constexpr uint16_t DAVIS240_CHIP_RESETCALIBNEURON = {136}
static constexpr uint16_t DAVIS240_CHIP_TYPENCALIBNEURON = {137}
static constexpr uint16_t DAVIS240_CHIP_RESETTESTPIXEL = {138}
static constexpr uint16_t DAVIS240_CHIP_SPECIALPIXELCONTROL = {139}
static constexpr uint16_t DAVIS240_CHIP_AERNAROW = {140}
static constexpr uint16_t DAVIS240_CHIP_USEAOUT = {141}
static constexpr uint16_t DAVIS240_CHIP_GLOBAL_SHUTTER = {142}
static constexpr uint16_t DAVIS346_CHIP_DIGITALMUX0 = {128}

Parameter address for module DAVIS346_CONFIG_CHIP: DAVIS346 chip configuration. These are for expert control and should never be used or changed unless for advanced debugging purposes. To change the Global Shutter configuration, please use DAVIS_CONFIG_APS_GLOBAL_SHUTTER instead.

static constexpr uint16_t DAVIS346_CHIP_DIGITALMUX1 = {129}
static constexpr uint16_t DAVIS346_CHIP_DIGITALMUX2 = {130}
static constexpr uint16_t DAVIS346_CHIP_DIGITALMUX3 = {131}
static constexpr uint16_t DAVIS346_CHIP_ANALOGMUX0 = {132}
static constexpr uint16_t DAVIS346_CHIP_ANALOGMUX1 = {133}
static constexpr uint16_t DAVIS346_CHIP_ANALOGMUX2 = {134}
static constexpr uint16_t DAVIS346_CHIP_BIASMUX0 = {135}
static constexpr uint16_t DAVIS346_CHIP_RESETCALIBNEURON = {136}
static constexpr uint16_t DAVIS346_CHIP_TYPENCALIBNEURON = {137}
static constexpr uint16_t DAVIS346_CHIP_RESETTESTPIXEL = {138}
static constexpr uint16_t DAVIS346_CHIP_AERNAROW = {140}
static constexpr uint16_t DAVIS346_CHIP_USEAOUT = {141}
static constexpr uint16_t DAVIS346_CHIP_GLOBAL_SHUTTER = {142}
static constexpr uint16_t DAVIS346_CHIP_SELECTGRAYCOUNTER = {143}
static constexpr uint16_t DAVIS346_CHIP_TESTADC = {144}
static constexpr uint16_t CDAVIS_CHIP_DIGITALMUX0 = {128}

Parameter address for module CDAVIS_CONFIG_CHIP: CDAVIS chip configuration. These are for expert control and should never be used or changed unless for advanced debugging purposes. To change the Global Shutter configuration, please use DAVIS_CONFIG_APS_GLOBAL_SHUTTER instead.

static constexpr uint16_t CDAVIS_CHIP_DIGITALMUX1 = {129}
static constexpr uint16_t CDAVIS_CHIP_DIGITALMUX2 = {130}
static constexpr uint16_t CDAVIS_CHIP_DIGITALMUX3 = {131}
static constexpr uint16_t CDAVIS_CHIP_ANALOGMUX0 = {132}
static constexpr uint16_t CDAVIS_CHIP_ANALOGMUX1 = {133}
static constexpr uint16_t CDAVIS_CHIP_ANALOGMUX2 = {134}
static constexpr uint16_t CDAVIS_CHIP_BIASMUX0 = {135}
static constexpr uint16_t CDAVIS_CHIP_RESETCALIBNEURON = {136}
static constexpr uint16_t CDAVIS_CHIP_TYPENCALIBNEURON = {137}
static constexpr uint16_t CDAVIS_CHIP_RESETTESTPIXEL = {138}
static constexpr uint16_t CDAVIS_CHIP_AERNAROW = {140}
static constexpr uint16_t CDAVIS_CHIP_USEAOUT = {141}
static constexpr uint16_t CDAVIS_CHIP_SELECTGRAYCOUNTER = {143}
static constexpr uint16_t CDAVIS_CHIP_TESTADC = {144}
static constexpr uint16_t CDAVIS_CHIP_ADJUSTOVG1LO = {145}
static constexpr uint16_t CDAVIS_CHIP_ADJUSTOVG2LO = {146}
static constexpr uint16_t CDAVIS_CHIP_ADJUSTTX2OVG2HI = {147}
static constexpr uint16_t DAVIS240_BIAS_IFTHRBN = {15}
static constexpr uint16_t DAVIS240_BIAS_IFREFRBN = {16}
static constexpr uint16_t DAVIS240_BIAS_SSP = {20}
static constexpr uint16_t DAVIS240_BIAS_SSN = {21}
static constexpr uint16_t DAVIS346_BIAS_IFTHRBN = {27}
static constexpr uint16_t DAVIS346_BIAS_IFREFRBN = {26}
static constexpr uint16_t DAVIS346_BIAS_SSP = {35}
static constexpr uint16_t DAVIS346_BIAS_SSN = {36}
static constexpr uint16_t CDAVIS_BIAS_IFTHRBN = {9}
static constexpr uint16_t CDAVIS_BIAS_IFREFRBN = {8}
static constexpr uint16_t CDAVIS_BIAS_SSP = {35}
static constexpr uint16_t CDAVIS_BIAS_SSN = {36}
static constexpr uint16_t PID_DAVIS_FX2 = {0x841B}
static constexpr uint8_t FX2_FIRMWARE_REQUIRED_VERSION = {4}
static constexpr uint8_t FX2_LOGIC_REQUIRED_VERSION = {18}
static constexpr uint8_t FX2_LOGIC_MINIMUM_PATCH = {1}
static constexpr uint16_t PID_DAVIS_FX3 = {0x841A}
static constexpr uint8_t FX3_FIRMWARE_REQUIRED_VERSION = {6}
static constexpr uint8_t FX3_LOGIC_REQUIRED_VERSION = {18}
static constexpr uint8_t FX3_LOGIC_MINIMUM_PATCH = {1}
static constexpr size_t PIXEL_FILTER_MAX_SIZE = {8}
static constexpr uint32_t EXPOSURE_MAX = {(1 << 22) - 1}
static constexpr uint32_t FRAME_INTERVAL_MAX = {(1 << 23) - 1}
static constexpr uint32_t USB_EARLY_PACKET_DELAY_MAX = {(1 << 20) - 1}
static constexpr uint32_t EXT_INPUT_TIME_MAX = {(1 << 20) - 1}
static constexpr auto COMPATIBLE_CAMERA = [](const uint16_t vid, const uint16_t pid, [[maybe_unused]] const USBDeviceType deviceType) -> std::optional<CameraModel> { if ((vid == VID_INIVATION) && ((pid == PID_DAVIS_FX2) || (pid == PID_DAVIS_FX3)) && (deviceType != USBDeviceType::FX3_GEN2)) { return CameraModel::DAVIS; } return std::nullopt; }
static constexpr int AUTOEXPOSURE_HISTOGRAM_SIZE = {256}
static constexpr int AUTOEXPOSURE_HISTOGRAM_MSV_SIZE = {5}
static constexpr std::array<float, 2> AUTOEXPOSURE_HISTOGRAM_RANGE = {0, UINT8_MAX + 1}
static constexpr std::array<const float*, 1> AUTOEXPOSURE_HISTOGRAM_RANGES = {AUTOEXPOSURE_HISTOGRAM_RANGE.data()}
static constexpr float AUTOEXPOSURE_LOW_BOUNDARY = {0.10f}
static constexpr float AUTOEXPOSURE_HIGH_BOUNDARY = {0.90f}
static constexpr float AUTOEXPOSURE_UNDEROVER_FRAC = {0.33f}
static constexpr float AUTOEXPOSURE_UNDEROVER_CORRECTION = {14000.0f}
static constexpr float AUTOEXPOSURE_MSV_CORRECTION = {100.0f}
static constexpr int AUTOEXPOSURE_HISTOGRAM_BINS_LOW{static_cast<int>(AUTOEXPOSURE_LOW_BOUNDARY * static_cast<float>(AUTOEXPOSURE_HISTOGRAM_SIZE))}
static constexpr int AUTOEXPOSURE_HISTOGRAM_BINS_HIGH{static_cast<int>(AUTOEXPOSURE_HIGH_BOUNDARY * static_cast<float>(AUTOEXPOSURE_HISTOGRAM_SIZE))}
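
For reference, the two derived bin indices above work out as follows (pure arithmetic on the constants already listed; the truncation to int is part of their definitions):

// Worked computation of the derived auto-exposure histogram bin boundaries.
constexpr int histogramSize = 256;                                                    // AUTOEXPOSURE_HISTOGRAM_SIZE
constexpr int binsLow  = static_cast<int>(0.10f * static_cast<float>(histogramSize)); // 0.10 * 256 = 25.6  -> 25
constexpr int binsHigh = static_cast<int>(0.90f * static_cast<float>(histogramSize)); // 0.90 * 256 = 230.4 -> 230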
class DecompressionSupport

Subclassed by dv::io::compression::Lz4DecompressionSupport, dv::io::compression::NoneDecompressionSupport, dv::io::compression::ZstdDecompressionSupport

Public Functions

inline explicit DecompressionSupport(const CompressionType type)
virtual ~DecompressionSupport() = default
virtual void decompress(std::vector<std::byte> &source, std::vector<std::byte> &target) = 0
inline CompressionType getCompressionType() const

Private Members

CompressionType mType
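
To illustrate the interface, here is a minimal sketch of a pass-through backend built on the pure virtual decompress() above. The dv::io::compression namespace is taken from the subclass list; the exact header and the dv::CompressionType::NONE enumerator are assumptions.

#include <cstddef>
#include <vector>

// Pass-through backend: "decompression" of uncompressed data is a plain copy.
// DecompressionSupport and CompressionType are assumed to come from the
// appropriate dv-processing header.
class PassThroughDecompression : public dv::io::compression::DecompressionSupport {
public:
    PassThroughDecompression() : DecompressionSupport(dv::CompressionType::NONE) {
    }

    void decompress(std::vector<std::byte> &source, std::vector<std::byte> &target) override {
        target.assign(source.cbegin(), source.cend());
    }
};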
struct Depth
#include </builds/inivation/dv/dv-processing/include/dv-processing/measurements/depth.hpp>

A structure containing a single timestamped depth measurement.

Public Functions

inline Depth(int64_t timestamp, float depth)

Public Members

int64_t mTimestamp

UNIX Microsecond timestamp

float mDepth

Depth measurement value, expected to be in meters.
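
A brief usage sketch of this structure; the dv::measurements namespace and the exact header path are inferred from the include shown above.

#include <dv-processing/measurements/depth.hpp> // assumed header path

#include <chrono>
#include <cstdint>

int main() {
    // Current UNIX timestamp in microseconds.
    const int64_t nowUs = std::chrono::duration_cast<std::chrono::microseconds>(
        std::chrono::system_clock::now().time_since_epoch()).count();

    // A depth reading of 1.25 meters taken now.
    const dv::measurements::Depth measurement(nowUs, 1.25f);

    return 0;
}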

struct DepthEventPacket : public flatbuffers::NativeTable

Public Types

typedef DepthEventPacketFlatbuffer TableType

Public Functions

inline DepthEventPacket()
inline DepthEventPacket(const std::vector<DepthEvent> &_elements)

Public Members

std::vector<DepthEvent> elements

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()

Friends

inline friend std::ostream &operator<<(std::ostream &os, const DepthEventPacket &packet)
struct DepthEventPacketBuilder

Public Functions

inline void add_elements(flatbuffers::Offset<flatbuffers::Vector<const DepthEvent*>> elements)
inline explicit DepthEventPacketBuilder(flatbuffers::FlatBufferBuilder &_fbb)
DepthEventPacketBuilder &operator=(const DepthEventPacketBuilder&)
inline flatbuffers::Offset<DepthEventPacketFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct DepthEventPacketFlatbuffer : private flatbuffers::Table

Public Types

typedef DepthEventPacket NativeTableType

Public Functions

inline const flatbuffers::Vector<const DepthEvent*> *elements() const
inline bool Verify(flatbuffers::Verifier &verifier) const
inline DepthEventPacket *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(DepthEventPacket *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(DepthEventPacket *_o, const DepthEventPacketFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<DepthEventPacketFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const DepthEventPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)

Public Static Attributes

static constexpr const char *identifier = "DEVT"
struct DepthFrame : public flatbuffers::NativeTable

Public Types

typedef DepthFrameFlatbuffer TableType

Public Functions

inline DepthFrame()
inline DepthFrame(int64_t _timestamp, int16_t _sizeX, int16_t _sizeY, uint16_t _minDepth, uint16_t _maxDepth, uint16_t _step, const std::vector<uint16_t> &_depth)

Public Members

int64_t timestamp
int16_t sizeX
int16_t sizeY
uint16_t minDepth
uint16_t maxDepth
uint16_t step
std::vector<uint16_t> depth

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()

Friends

inline friend std::ostream &operator<<(std::ostream &os, const DepthFrame &frame)
struct DepthFrameBuilder

Public Functions

inline void add_timestamp(int64_t timestamp)
inline void add_sizeX(int16_t sizeX)
inline void add_sizeY(int16_t sizeY)
inline void add_minDepth(uint16_t minDepth)
inline void add_maxDepth(uint16_t maxDepth)
inline void add_step(uint16_t step)
inline void add_depth(flatbuffers::Offset<flatbuffers::Vector<uint16_t>> depth)
inline explicit DepthFrameBuilder(flatbuffers::FlatBufferBuilder &_fbb)
DepthFrameBuilder &operator=(const DepthFrameBuilder&)
inline flatbuffers::Offset<DepthFrameFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct DepthFrameFlatbuffer : private flatbuffers::Table
#include </builds/inivation/dv/dv-processing/include/dv-processing/data/depth_frame_base.hpp>

A frame containing pixel depth values in millimeters.

Public Types

typedef DepthFrame NativeTableType

Public Functions

inline int64_t timestamp() const

Central timestamp (µs), corresponds to exposure midpoint.

inline int16_t sizeX() const

X axis length in pixels.

inline int16_t sizeY() const

Y axis length in pixels.

inline uint16_t minDepth() const

Minimum valid depth value.

inline uint16_t maxDepth() const

Maximum valid depth value.

inline uint16_t step() const

Depth step value, minimal depth distance that can be measured by the sensor setup.

inline const flatbuffers::Vector<uint16_t> *depth() const

Depth values, unsigned 16bit integers, millimeters from the camera frame, following the OpenNI standard. Depth value of 0 should be considered an invalid value.

inline bool Verify(flatbuffers::Verifier &verifier) const
inline DepthFrame *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(DepthFrame *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(DepthFrame *_o, const DepthFrameFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<DepthFrameFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const DepthFrame *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)

Public Static Attributes

static constexpr const char *identifier = "DFRM"
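
Following the depth() description above (unsigned 16-bit millimetre values, with 0 marking an invalid pixel), here is a small self-contained sketch of converting such values to metres; it operates on a plain std::vector rather than the flatbuffers vector type.

#include <cstddef>
#include <cstdint>
#include <vector>

// Average depth in metres over all valid pixels of a depth frame buffer.
float averageDepthMeters(const std::vector<uint16_t> &depthMm) {
    double sum        = 0.0;
    std::size_t valid = 0;
    for (const uint16_t value : depthMm) {
        if (value == 0) {
            continue; // 0 is an invalid measurement.
        }
        sum += static_cast<double>(value) / 1000.0; // millimetres to metres
        valid++;
    }
    return (valid > 0) ? static_cast<float>(sum / static_cast<double>(valid)) : 0.0f;
}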
struct DeviceDescriptor
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/camera/usb_device.hpp>

Structure to uniquely describe a USB device connected to the current host.

Public Members

std::string serialNumber
uint16_t vid
uint16_t pid
USBDeviceType deviceType
CameraModel cameraModel
uint8_t busNumber
uint8_t devAddress
uint8_t firmwareVersion
struct DirectoryError

Public Types

using Info = std::filesystem::path

Public Static Functions

static inline std::string format(const Info &info)
struct DirectoryNotFound

Public Types

using Info = std::filesystem::path

Public Static Functions

static inline std::string format(const Info &info)
class DVS128 : public dv::io::camera::USBDevice, public dv::io::camera::SyncCameraInputBase

Public Types

enum class Bias

Values:

enumerator CAS
enumerator INJGND
enumerator REQPD
enumerator PUX
enumerator DIFF_OFF
enumerator REQ
enumerator REFR
enumerator PUY
enumerator DIFF_ON
enumerator DIFF
enumerator FOLL
enumerator PR

Public Functions

inline explicit DVS128(const LogLevel deviceLogLevel = LogLevel::LVL_WARNING, loggerCallbackType deviceLogger = {})

Open the first DVS128 camera that can be found. Throws if device cannot be opened.

Parameters:
  • deviceLogLevel – initial log-level

  • deviceLogger – per-device logging callback, must be thread-safe

inline explicit DVS128(const std::string_view filterBySerialNumber, const LogLevel deviceLogLevel = LogLevel::LVL_WARNING, loggerCallbackType deviceLogger = {})

Open the DVS128 camera with the specified serial number. Throws if device cannot be opened.

Parameters:
  • filterBySerialNumber – serial number to search for

  • deviceLogLevel – initial log-level

  • deviceLogger – per-device logging callback, must be thread-safe

inline explicit DVS128(const DeviceDescriptor &deviceToOpen, const LogLevel deviceLogLevel = LogLevel::LVL_WARNING, loggerCallbackType deviceLogger = {})

Open the DVS128 camera corresponding to the specified descriptor. Throws if device cannot be opened.

Parameters:
  • deviceToOpen – device descriptor structure

  • deviceLogLevel – initial log-level

  • deviceLogger – per-device logging callback, must be thread-safe

inline ~DVS128() override
inline virtual std::string getCameraName() const override

Get camera name, which is a combination of the camera model and the serial number.

Returns:

String containing the camera model and serial number separated by an underscore character.

inline virtual std::optional<cv::Size> getEventResolution() const override

Get event stream resolution.

Returns:

Event stream resolution, std::nullopt if event stream is unavailable.

inline virtual std::optional<cv::Size> getFrameResolution() const override

Retrieve frame stream resolution.

Returns:

Frame stream resolution or std::nullopt if the frame stream is not available.

inline virtual imu::ImuModel getImuModel() const override

Return IMU model used on device.

Returns:

IMU model in use.

inline virtual float getPixelPitch() const override

Return pixel pitch distance for the connected camera model. The value is returned in meters.

Returns:

Pixel pitch distance in meters according to the connected device.

inline virtual bool isMaster() const override

Report if this camera is a clock synchronization master.

Returns:

true if clock master, false otherwise.

inline void setMaster(const bool master)

Control clock master functionality of this camera. By default, this is enabled and allows the camera to receive triggers on its IN pin. If disabled, the camera will expect a clock synchronization signal on its IN pin and act as a secondary camera.

Parameters:

master – true if clock synchronization master, false if secondary camera
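
A sketch of the primary/secondary arrangement described above, using two DVS128 cameras. The serial numbers are placeholders, and the dv::io::camera namespace and header path are assumed from the base classes and includes shown earlier.

#include <dv-processing/io/camera/dvs128.hpp> // assumed header path

int main() {
    // Hypothetical serial numbers of two connected DVS128 cameras.
    dv::io::camera::DVS128 primary("00000001");
    dv::io::camera::DVS128 secondary("00000002");

    // The primary keeps the clock-master role (default); the secondary expects
    // the synchronization signal on its IN pin instead of triggers.
    primary.setMaster(true);
    secondary.setMaster(false);

    return 0;
}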

inline virtual bool isEventStreamAvailable() const override

Check whether event stream is available.

Returns:

True if event stream is available, false otherwise.

inline virtual bool isFrameStreamAvailable() const override

Check whether frame stream is available.

Returns:

True if frame stream is available, false otherwise.

inline virtual bool isImuStreamAvailable() const override

Check whether IMU data is available.

Returns:

True if IMU data stream is available, false otherwise.

inline virtual bool isTriggerStreamAvailable() const override

Check whether trigger data is available.

Returns:

True if trigger data stream is available, false otherwise.

inline virtual bool isRunning() const override

Check whether any input data streams have terminated. For a live camera this should check if the device is still connected and functioning, while for a recording file this should check if any of the data streams have reached end-of-file (EOF). For a network input, this indicates the network stream is still connected.

Returns:

True if data read on all streams is still possible, false otherwise.

inline virtual bool isRunning(const std::string_view streamName) const override

Check whether the input data stream with the specified name is still active.

Returns:

True if data read on this stream is possible, false otherwise.

inline virtual bool isRunningAny() const override

Check whether any input data streams are still available. For a live camera this should check if the device is still connected and functioning and at least one data stream is active (different than isRunning()), while for a recording file this should check if any of the data streams have not yet reached end-of-file (EOF) and are still readable. For a network input, this indicates the network stream is still connected.

Returns:

True if data read on at least one stream is still possible, false otherwise.

inline virtual std::chrono::microseconds getTimeInterval() const override

Get the time interval for data commit.

Returns:

Time interval in microseconds.

inline virtual void setTimeInterval(const std::chrono::microseconds timeInterval) override

Set a new time interval value for data commit. Data is put in the queues for getNextEventBatch(), readNext(), … at this interval’s rate.

Parameters:

timeInterval – New time interval value in microseconds.

inline virtual std::chrono::microseconds getTimestampOffset() const override

Get the timestamp offset.

Returns:

Absolute timestamp offset value in microseconds.

inline uint32_t getBias(const Bias bias) const

Get current value of specified bias configuration.

Parameters:

bias – name of bias

Returns:

current bias value

inline void setBias(const Bias bias, const uint32_t value)

Set new value for specified bias configuration.

Parameters:
  • bias – name of bias

  • value – new bias value

inline virtual bool getFlipHorizontal() const override

Status of horizontal events flip.

Returns:

status of horizontal events flip.

inline virtual void setFlipHorizontal(const bool flipHorizontal) override

Flip events horizontally.

Parameters:

flipHorizontal – flip events horizontally.

inline virtual bool getFlipVertical() const override

Status of vertical events flip.

Returns:

status of vertical events flip.

inline virtual void setFlipVertical(const bool flipVertical) override

Flip events vertically.

Parameters:

flipVertical – flip events vertically.

inline virtual Flatten getFlatten() const override

Status of event polarity flattening.

Returns:

status of event polarity flattening.

inline virtual void setFlatten(const Flatten flatten) override

Flatten events polarity.

Parameters:

flatten – flattening mode.

inline virtual cv::Rect getCropArea() const override

Get events Region of Interest (ROI).

Returns:

events Region of Interest (ROI) position and size.

inline virtual void setCropArea(const cv::Rect cropArea) override

Set events Region of Interest (ROI). Usually hardware accelerated.

Parameters:

cropArea – region of interest (ROI) position and size.

Public Static Functions

static inline auto findDevices(const std::string_view filterBySerialNumber = {})

Find connected DVS128 cameras.

Parameters:

filterBySerialNumber – only search for devices with this serial number

Returns:

a descriptor structure describing a compatible device
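
A discovery sketch that combines findDevices() with the DeviceDescriptor structure documented earlier; it assumes the returned value is iterable as a range of DeviceDescriptor entries and that the namespace and header path match the io/camera include shown above.

#include <dv-processing/io/camera/dvs128.hpp> // assumed header path

#include <iostream>

int main() {
    // Enumerate all connected DVS128 cameras (empty filter matches any serial number).
    for (const auto &descriptor : dv::io::camera::DVS128::findDevices()) {
        std::cout << "Found DVS128 with serial " << descriptor.serialNumber << " on bus "
                  << static_cast<int>(descriptor.busNumber) << "\n";
    }

    return 0;
}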

Protected Functions

inline virtual void sendTimestampReset() override

Send a timestamp reset command to the device.

inline virtual void setTimestampOffset(const std::chrono::microseconds timestampOffset) override

Set a new timestamp offset value for the camera.

Parameters:

timestampOffset – New timestamp offset value in microseconds.

Private Functions

inline void shutdownCallback()
inline void usbDataCallback(const std::span<const uint8_t> data)
inline void dataParserCallback(parser::ParsedData data)
inline void timeInitCallback()
inline void setBiasInternal(const Bias bias, const uint32_t value, const bool send)

Private Members

parser::DVS128::Parser mParser
mutable std::mutex mConfigLock
std::array<std::array<uint8_t, 3>, 12> mBiases = {}
Flatten mFlatten = {Flatten::NONE}
cv::Rect mCropArea
mutable std::mutex mCallbackConfigLock
std::atomic<bool> mIsRunning = {true}
std::atomic<bool> mTimestampMaster = {true}

Private Static Attributes

static constexpr uint16_t PID_DVS128 = {0x8400}
static constexpr uint8_t FIRMWARE_REQUIRED_VERSION = {14}
static constexpr uint8_t DATA_ENDPOINT = {0x86}
static constexpr uint8_t VENDOR_REQUEST_START_TRANSFER = {0xB3}
static constexpr uint8_t VENDOR_REQUEST_STOP_TRANSFER = {0xB4}
static constexpr uint8_t VENDOR_REQUEST_SEND_BIASES = {0xB8}
static constexpr uint8_t VENDOR_REQUEST_RESET_TS = {0xBB}
static constexpr uint8_t VENDOR_REQUEST_RESET_ARRAY = {0xBD}
static constexpr uint8_t VENDOR_REQUEST_TS_MASTER = {0xBE}
static constexpr auto COMPATIBLE_CAMERA = [](const uint16_t vid, const uint16_t pid, [[maybe_unused]] const USBDeviceType deviceType) -> std::optional<CameraModel> {
    if ((vid == VID_INIVATION) && (pid == PID_DVS128)) {
        return CameraModel::DVS128;
    }
    return std::nullopt;
}
class DVXplorer : public dv::io::camera::USBDevice, public dv::io::camera::SyncCameraInputBase

Public Types

enum class SubSample

Values:

enumerator EVERY_PIXEL
enumerator EVERY_SECOND
enumerator BAND_OF_TWO
enumerator EVERY_FOURTH
enumerator BAND_OF_FOUR
enumerator BIN_1010_0000
enumerator BIN_1100_0000
enumerator EVERY_EIGHTH
enum class ReadoutFPS

Values:

enumerator CONSTANT_100
enumerator CONSTANT_200
enumerator CONSTANT_500
enumerator CONSTANT_1000
enumerator CONSTANT_LOSSY_2000
enumerator CONSTANT_LOSSY_5000
enumerator CONSTANT_LOSSY_10000
enumerator VARIABLE_2000
enumerator VARIABLE_5000
enumerator VARIABLE_10000
enumerator VARIABLE_15000

Public Functions

inline explicit DVXplorer(const LogLevel deviceLogLevel = LogLevel::LVL_WARNING, loggerCallbackType deviceLogger = {})

Open the first DVXplorer camera that can be found. Throws if device cannot be opened.

Parameters:
  • deviceLogLevel – initial log-level

  • deviceLogger – per-device logging callback, must be thread-safe

inline explicit DVXplorer(const std::string_view filterBySerialNumber, const LogLevel deviceLogLevel = LogLevel::LVL_WARNING, loggerCallbackType deviceLogger = {})

Open the DVXplorer camera with the specified serial number. Throws if device cannot be opened.

Parameters:
  • filterBySerialNumber – serial number to search for

  • deviceLogLevel – initial log-level

  • deviceLogger – per-device logging callback, must be thread-safe

inline explicit DVXplorer(const DeviceDescriptor &deviceToOpen, const LogLevel deviceLogLevel = LogLevel::LVL_WARNING, loggerCallbackType deviceLogger = {})

Open the DVXplorer camera corresponding to the specified descriptor. Throws if device cannot be opened.

Parameters:
  • deviceToOpen – device descriptor structure

  • deviceLogLevel – initial log-level

  • deviceLogger – per-device logging callback, must be thread-safe

inline ~DVXplorer() override
inline virtual std::string getCameraName() const override

Get camera name, which is a combination of the camera model and the serial number.

Returns:

String containing the camera model and serial number separated by an underscore character.

inline uint32_t getLogicVersion() const

Get camera FPGA logic version.

Returns:

camera FPGA logic version

inline uint32_t getLogicPatchLevel() const

Get camera FPGA logic patch level.

Returns:

camera FPGA logic patch level

inline virtual std::optional<cv::Size> getEventResolution() const override

Get event stream resolution.

Returns:

Event stream resolution, std::nullopt if event stream is unavailable.

inline virtual std::optional<cv::Size> getFrameResolution() const override

Retrieve frame stream resolution.

Returns:

Frame stream resolution or std::nullopt if the frame stream is not available.

inline virtual imu::ImuModel getImuModel() const override

Return IMU model used on device.

Returns:

IMU model in use.

inline virtual float getPixelPitch() const override

Return pixel pitch distance for the connected camera model. The value is returned in meters.

Returns:

Pixel pitch distance in meters according to the connected device.

inline virtual bool isMaster() const override

Report if this camera is a clock synchronization master.

Returns:

true if clock master, false otherwise.

inline virtual bool isEventStreamAvailable() const override

Check whether event stream is available.

Returns:

True if event stream is available, false otherwise.

inline virtual bool isFrameStreamAvailable() const override

Check whether frame stream is available.

Returns:

True if frame stream is available, false otherwise.

inline virtual bool isImuStreamAvailable() const override

Check whether IMU data is available.

Returns:

True if IMU data stream is available, false otherwise.

inline virtual bool isTriggerStreamAvailable() const override

Check whether trigger data is available.

Returns:

True if trigger data stream is available, false otherwise.

inline virtual bool isRunning() const override

Check whether any input data streams have terminated. For a live camera this should check if the device is still connected and functioning, while for a recording file this should check if any of the data streams have reached end-of-file (EOF). For a network input, this indicates the network stream is still connected.

Returns:

True if data read on all streams is still possible, false otherwise.

inline virtual bool isRunning(const std::string_view streamName) const override

Check whether the input data stream with the specified name is still active.

Returns:

True if data read on this stream is possible, false otherwise.

inline virtual bool isRunningAny() const override

Check whether any input data streams are still available. For a live camera this should check if the device is still connected and functioning and at least one data stream is active (different than isRunning()), while for a recording file this should check if any of the data streams have not yet reached end-of-file (EOF) and are still readable. For a network input, this indicates the network stream is still connected.

Returns:

True if data read on at least one stream is still possible, false otherwise.

inline virtual std::chrono::microseconds getTimeInterval() const override

Get the time interval for data commit.

Returns:

Time interval in microseconds.

inline virtual void setTimeInterval(const std::chrono::microseconds timeInterval) override

Set a new time interval value for data commit. Data is put in the queues for getNextEventBatch(), readNext(), … at this interval’s rate.

Parameters:

timeInterval – New time interval value in microseconds.

inline virtual std::chrono::microseconds getTimestampOffset() const override

Get the timestamp offset.

Returns:

Absolute timestamp offset value in microseconds.

inline bool isEventsRunning() const

Report if the event output is running.

Returns:

true if events active, false otherwise.

inline void setEventsRunning(const bool run)

Enable or disable the event output.

Parameters:

run – whether to enable the event output or not.

inline virtual bool getFlipHorizontal() const override

Status of horizontal events flip.

Returns:

status of horizontal events flip.

inline virtual void setFlipHorizontal(const bool flipHorizontal) override

Flip events horizontally.

Parameters:

flipHorizontal – flip events horizontally.

inline virtual bool getFlipVertical() const override

Status of vertical events flip.

Returns:

status of vertical events flip.

inline virtual void setFlipVertical(const bool flipVertical) override

Flip events vertically.

Parameters:

flipVertical – flip events vertically.

inline virtual Flatten getFlatten() const override

Status of event polarity flattening.

Returns:

status of event polarity flattening.

inline virtual void setFlatten(const Flatten flatten) override

Flatten events polarity.

Parameters:

flatten – flattening mode.

inline SubSample getSubSampleHorizontal() const

Get horizontal subsampling mode.

Returns:

current horizontal subsampling mode.

inline void setSubSampleHorizontal(const SubSample subSampleHorizontal)

Set horizontal subsampling mode. DVXplorer Lite only supports EVERY_PIXEL, EVERY_SECOND and EVERY_FOURTH.

Parameters:

subSampleHorizontal – new horizontal subsampling mode.

inline SubSample getSubSampleVertical() const

Get vertical subsampling mode.

Returns:

current vertical subsampling mode.

inline void setSubSampleVertical(const SubSample subSampleVertical)

Set vertical subsampling mode. DVXplorer Lite only supports EVERY_PIXEL, EVERY_SECOND and EVERY_FOURTH.

Parameters:

subSampleVertical – new vertical subsampling mode.

inline bool getDualBinning() const

Report status of dual-binning feature. Always false for DVXplorer Lite.

Returns:

true if enabled, false otherwise.

inline void setDualBinning(const bool dualBinning)

Enable dual-binning, maps 2x2 pixel blocks to one pixel address. Not available on DVXplorer Lite.

Parameters:

dualBinning – true to enable, false to disable.

inline bool getGlobalHold() const

Report status of global hold feature.

Returns:

true if enabled, false otherwise.

inline void setGlobalHold(const bool globalHold)

Enable or disable global hold feature. Default is enabled. For some applications like LED tracking, setting this to false can help.

Parameters:

globalHold – true to enable, false to disable.

inline bool getGlobalReset() const

Report status of global reset feature.

Returns:

true if enabled, false otherwise.

inline void setGlobalReset(const bool globalReset)

Enable or disable global reset feature.

Parameters:

globalReset – true to enable, false to disable.

inline ReadoutFPS getReadoutFPS() const

Get currently set event-frame readout frequency.

Returns:

event-frame readout frequency.

inline void setReadoutFPS(const ReadoutFPS readoutFps)

Set the frequency of event-frame readouts on the sensor. CONSTANT frequencies are guaranteed fixed frequencies with no data loss. CONSTANT_LOSSY frequencies are also fixed, but a readout is cut short to respect the timing if too much data is present, resulting in data loss. VARIABLE frequencies are best-effort: the readout slows down when a lot of data is present, but no data is lost.

Parameters:

readoutFps – event-frame readout frequency.

inline uint8_t getContrastThresholdOn() const

Get the contrast threshold for ON polarity event generation.

Returns:

ON contrast threshold, from 0 to 17.

inline void setContrastThresholdOn(const uint8_t contrastThresholdOn)

Set the contrast threshold for ON polarity event generation. Valid values from 0 to 17.

Parameters:

contrastThresholdOn – ON contrast threshold, from 0 to 17.

inline uint8_t getContrastThresholdOff() const

Get the contrast threshold for OFF polarity event generation.

Returns:

OFF contrast threshold, from 0 to 17.

inline void setContrastThresholdOff(const uint8_t contrastThresholdOff)

Set the contrast threshold for OFF polarity event generation. Valid values from 0 to 17.

Parameters:

contrastThresholdOff – OFF contrast threshold, from 0 to 17.
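
A configuration sketch combining the readout frequency and contrast threshold settings above; the values are illustrative, and the dv::io::camera namespace and header path are assumed as in the earlier examples.

#include <dv-processing/io/camera/dvxplorer.hpp> // assumed header path

int main() {
    // Open the first DVXplorer that can be found.
    dv::io::camera::DVXplorer camera;

    // Fixed 500 Hz event-frame readout with no data loss.
    camera.setReadoutFPS(dv::io::camera::DVXplorer::ReadoutFPS::CONSTANT_500);

    // Make the sensor less sensitive by raising both contrast thresholds (valid range 0 to 17).
    camera.setContrastThresholdOn(10);
    camera.setContrastThresholdOff(10);

    return 0;
}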

inline virtual cv::Rect getCropArea() const override

Get events Region of Interest (ROI).

Returns:

events Region of Interest (ROI) position and size.

inline virtual void setCropArea(const cv::Rect cropArea) override

Set events Region of Interest (ROI). Usually hardware accelerated.

Parameters:

cropArea – region of interest (ROI) position and size.

inline bool isImuRunningAccelerometer() const

Status of IMU accelerometer.

Returns:

true if enabled, false otherwise.

inline void setImuRunningAccelerometer(const bool run)

Enable or disable IMU accelerometer.

Parameters:

run – true to enable, false to disable.

inline bool isImuRunningGyroscope() const

Status of IMU gyroscope.

Returns:

true if enabled, false otherwise.

inline void setImuRunningGyroscope(const bool run)

Enable or disable IMU gyroscope.

Parameters:

run – true to enable, false to disable.

inline bool isImuRunningTemperature() const

Status of IMU temperature measurement.

Returns:

true if enabled, false otherwise.

inline void setImuRunningTemperature(const bool run)

Enable or disable IMU temperature measurement.

Parameters:

run – true to enable, false to disable.

inline bool getIMUFlipX() const

Status of IMU X axis flipping.

Returns:

true if enabled, false otherwise.

inline void setIMUFlipX(const bool flipX)

Enable or disable IMU X axis flipping. Will negate (flip) all returned X axis values.

Parameters:

flipX – true to enable, false to disable.

inline bool getIMUFlipY() const

Status of IMU Y axis flipping.

Returns:

true if enabled, false otherwise.

inline void setIMUFlipY(const bool flipY)

Enable or disable IMU Y axis flipping. Will negate (flip) all returned Y axis values.

Parameters:

flipY – true to enable, false to disable.

inline bool getIMUFlipZ() const

Status of IMU Z axis flipping.

Returns:

true if enabled, false otherwise.

inline void setIMUFlipZ(const bool flipZ)

Enable or disable IMU Z axis flipping. Will negate (flip) all returned Z axis values.

Parameters:

flipZ – true to enable, false to disable.

inline imu::BoschBMI160AccelDataRate getImuAccelDataRate() const

Get current IMU accelerometer data rate.

Returns:

accelerometer data rate.

inline void setImuAccelDataRate(const imu::BoschBMI160AccelDataRate dataRate)

Set IMU accelerometer data rate.

Parameters:

dataRate – accelerometer data rate.

inline imu::BoschBMI160AccelFilter getImuAccelFilter() const

Get current IMU accelerometer filter setting.

Returns:

accelerometer filter setting.

inline void setImuAccelFilter(const imu::BoschBMI160AccelFilter filter)

Set IMU accelerometer filter setting.

Parameters:

filter – accelerometer filter setting.

inline imu::BoschBMI160AccelRange getImuAccelRange() const

Get current IMU accelerometer range.

Returns:

accelerometer range.

inline void setImuAccelRange(const imu::BoschBMI160AccelRange range)

Set IMU accelerometer range.

Parameters:

range – accelerometer range.

inline imu::BoschBMI160GyroDataRate getImuGyroDataRate() const

Get current IMU gyroscope data rate.

Returns:

gyroscope data rate.

inline void setImuGyroDataRate(const imu::BoschBMI160GyroDataRate dataRate)

Set IMU gyroscope data rate.

Parameters:

dataRate – gyroscope data rate.

inline imu::BoschBMI160GyroFilter getImuGyroFilter() const

Get current IMU gyroscope filter setting.

Returns:

gyroscope filter setting.

inline void setImuGyroFilter(const imu::BoschBMI160GyroFilter filter)

Set IMU gyroscope filter setting.

Parameters:

filter – gyroscope filter setting.

inline imu::BoschBMI160GyroRange getImuGyroRange() const

Get current IMU gyroscope range.

Returns:

gyroscope range.

inline void setImuGyroRange(const imu::BoschBMI160GyroRange range)

Set IMU gyroscope range.

Parameters:

range – gyroscope range.

inline bool isDetectorRunning() const

Report status of external signal detector.

Returns:

true if running, false otherwise.

inline void setDetectorRunning(const bool run)

Enable or disable external signal detector.

Parameters:

run – true to enable, false to disable.

inline bool getDetectorRisingEdges() const

Report status of rising edge detection on the SIGNAL_IN line.

Returns:

true if enabled, false otherwise.

inline void setDetectorRisingEdges(const bool detectRising)

Detect rising edges (low to high transitions) on the SIGNAL_IN line.

Parameters:

detectRising – true to enable, false to disable.

inline bool getDetectorFallingEdges() const

Report status of falling edge detection on the SIGNAL_IN line.

Returns:

true if enabled, false otherwise.

inline void setDetectorFallingEdges(const bool detectFalling)

Detect falling edges (high to low transitions) on the SIGNAL_IN line.

Parameters:

detectFalling – true to enable, false to disable.

inline bool isGeneratorRunning() const

Report status of external signal generator.

Returns:

true if running, false otherwise.

inline void setGeneratorRunning(const bool run)

Enable or disable external signal generator (PWM-like output).

Parameters:

run – true to enable, false to disable.

inline std::chrono::microseconds getGeneratorLowTime() const

Get current PWM low time.

Returns:

low time in microseconds.

inline void setGeneratorLowTime(const std::chrono::microseconds lowTimeUs)

Set PWM low time for external signal generator.

Parameters:

lowTimeUs – low time in microseconds.

inline std::chrono::microseconds getGeneratorHighTime() const

Get current PWM high time.

Returns:

high time in microseconds.

inline void setGeneratorHighTime(const std::chrono::microseconds highTimeUs)

Set PWM high time for external signal generator.

Parameters:

highTimeUs – high time in microseconds.

inline bool getGeneratorInjectTriggerOnRisingEdge() const

Report status of trigger event injection feature for external signal generator rising edges.

Returns:

true if enabled, false otherwise.

inline void setGeneratorInjectTriggerOnRisingEdge(const bool injectRising)

Inject a trigger event of type EXTERNAL_GENERATOR_RISING_EDGE into the event stream from the device, every time a rising edge is generated by the PWM-like output of the external signal generator.

Parameters:

injectRising – true to inject trigger event, false to disable.

inline bool getGeneratorInjectTriggerOnFallingEdge() const

Report status of trigger event injection feature for external signal generator falling edges.

Returns:

true if enabled, false otherwise.

inline void setGeneratorInjectTriggerOnFallingEdge(const bool injectFalling)

Inject a trigger event of type EXTERNAL_GENERATOR_FALLING_EDGE into the event stream from the device, every time a falling edge is generated by the PWM-like output of the external signal generator.

Parameters:

injectFalling – true to inject trigger event, false to disable.

inline std::chrono::microseconds getUSBEarlyPacketDelay() const

Get value of USB early packet timeout.

Returns:

timeout in microseconds.

inline void setUSBEarlyPacketDelay(const std::chrono::microseconds earlyPacketDelayUs)

Send data over USB early once this timeout is reached, instead of waiting for buffers to fill. The timeout on the device is counted in 125µs time-slices.

Parameters:

earlyPacketDelayUs – timeout in microseconds.
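
A sketch tying together the signal generator and USB early packet settings described above; the timing values are illustrative, and the namespace and header path are assumed as before.

#include <dv-processing/io/camera/dvxplorer.hpp> // assumed header path

#include <chrono>

int main() {
    using namespace std::chrono_literals;

    dv::io::camera::DVXplorer camera;

    // 1 kHz PWM-like output with 50% duty cycle: 500 µs high, 500 µs low.
    camera.setGeneratorHighTime(500us);
    camera.setGeneratorLowTime(500us);
    // Mark every generated rising edge in the event stream with a trigger event.
    camera.setGeneratorInjectTriggerOnRisingEdge(true);
    camera.setGeneratorRunning(true);

    // Commit USB data at the latest after 500 µs (4 x 125 µs device time-slices),
    // even if the buffers are not full yet.
    camera.setUSBEarlyPacketDelay(500us);

    return 0;
}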

Public Static Functions

static inline auto findDevices(const std::string_view filterBySerialNumber = {})

Find connected DVXplorer cameras.

Parameters:

filterBySerialNumber – only search for devices with this serial number

Returns:

a descriptor structure describing a compatible device

Protected Functions

inline virtual void sendTimestampReset() override

Send a timestamp reset command to the device.

inline virtual void setTimestampOffset(const std::chrono::microseconds timestampOffset) override

Set a new timestamp offset value for the camera.

Parameters:

timestampOffset – New timestamp offset value in microseconds.

inline std::chrono::microseconds getGeneratorHighTimeInternal() const
inline std::chrono::microseconds getGeneratorLowTimeInternal() const
inline void setCropAreaInternal(const cv::Rect cropArea)

Private Types

enum class SensorModel

Values:

enumerator LITE_QVGA
enumerator DVX_VGA

Private Functions

inline void shutdownCallback()
inline void usbDataCallback(const std::span<const uint8_t> data)
inline void dataParserCallback(parser::ParsedData data)
inline void timeInitCallback()

Private Members

uint32_t mLogicVersion
uint32_t mLogicPatch
cv::Size mResolution
SensorModel mSensorModel
imu::ImuModel mImuModel
float mLogicClockActual
float mUSBClockActual
std::unique_ptr<parser::DVXplorer::Parser> mParser
mutable std::mutex mConfigLock
ReadoutFPS mReadoutFPS
uint8_t mContrastThresholdOn
uint8_t mContrastThresholdOff
cv::Rect mCropArea
std::atomic<bool> mIsRunning = {true}
std::atomic<bool> mTimestampMaster = {true}

Private Static Attributes

static constexpr uint16_t MODULE_MULTIPLEXER = {0}

Module address: device-side Multiplexer configuration. The Multiplexer is responsible for mixing, timestamping and outputting (via USB) the various event types generated by the device. It is also responsible for timestamp generation and synchronization.

static constexpr uint16_t MODULE_DVS = {1}

Module address: device-side DVS configuration. The DVS state machine interacts with the DVS chip and gets the polarity events from it. It supports various configurable delays, as well as advanced filtering capabilities on the polarity events.

static constexpr uint16_t MODULE_IMU = {3}

Module address: device-side IMU (Inertial Measurement Unit) configuration. The IMU module connects to the external IMU chip and sends data on the device’s movement in space. It can configure various options on the external chip, such as accelerometer range or gyroscope refresh rate.

static constexpr uint16_t MODULE_EXTERNAL_INPUT = {4}

Module address: device-side External Input (signal detector/generator) configuration. The External Input module is used to detect external signals on the external input jack and inject an event into the event stream when this happens. It can detect pulses of a specific length or rising and falling edges. On some systems, a signal generator module is also present, which can generate PWM-like pulsed signals with configurable timing.

static constexpr uint16_t MODULE_SYSINFO = {6}

Module address: device-side system information. The system information module provides various details on the device, such as currently installed logic revision or clock speeds. All its parameters are read-only. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_info’ documentation for more details on what information is available.

static constexpr uint16_t MODULE_USB = {9}

Module address: device-side USB output configuration. The USB output module forwards the data from the device and the FPGA/CPLD to the USB chip, usually a Cypress FX2 or FX3.

static constexpr uint16_t MUX_RUN = {0}

Parameter address for module MUX: run the Multiplexer state machine, which is responsible for mixing the various event types at the device level, timestamping them and outputting them via USB or other connectors.

static constexpr uint16_t MUX_TIMESTAMP_RUN = {1}

Parameter address for module MUX: run the Timestamp Generator inside the Multiplexer state machine, which will provide microsecond accurate timestamps to the events passing through.

static constexpr uint16_t MUX_TIMESTAMP_RESET = {2}

Parameter address for module MUX: reset the Timestamp Generator to zero. This also sends a reset pulse to all connected slave devices, resetting their timestamp too.

static constexpr uint16_t MUX_RUN_CHIP = {3}

Parameter address for module MUX: power up the chip’s bias generator, enabling the chip to work.

static constexpr uint16_t MUX_DROP_EXTINPUT_ON_TRANSFER_STALL = {4}

Parameter address for module MUX: drop External Input events if the USB output FIFO is full, instead of having them pile up at the input FIFOs.

static constexpr uint16_t MUX_DROP_DVS_ON_TRANSFER_STALL = {5}

Parameter address for module MUX: drop DVS events if the USB output FIFO is full, instead of having them pile up at the input FIFOs.

static constexpr uint16_t MUX_STATISTICS_EXTINPUT_DROPPED = {81}

Parameter address for module MUX: read-only parameter, representing the number of dropped External Input events on the device due to full USB buffers. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t MUX_STATISTICS_DVS_DROPPED = {83}

Parameter address for module MUX: read-only parameter, representing the number of dropped DVS events on the device due to full USB buffers. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t DVS_SIZE_COLUMNS = {0}

Parameter address for module DVS: read-only parameter, contains the X axis resolution of the DVS events returned by the camera. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get proper size information that already considers the rotation and orientation settings.

static constexpr uint16_t DVS_SIZE_ROWS = {1}

Parameter address for module DVS: read-only parameter, contains the Y axis resolution of the DVS events returned by the camera. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get proper size information that already considers the rotation and orientation settings.

static constexpr uint16_t DVS_ORIENTATION_INFO = {2}

Parameter address for module DVS: read-only parameter, contains information on the orientation of the X/Y axes, whether they should be inverted or not on the host when parsing incoming events. Bit 2: dvsInvertXY, Bit 1: reserved, Bit 0: reserved. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_davis_info’ documentation to get proper size information that already considers the rotation and orientation settings.

static constexpr uint16_t DVS_RUN = {3}

Parameter address for module DVS: run the DVS state machine and read out polarity events from the chip.

static constexpr uint16_t DVS_STATISTICS_COLUMN = {81}

Parameter address for module DVS: read-only parameter, representing the number of column transactions completed successfully on the device. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t DVS_STATISTICS_GROUP = {83}

Parameter address for module DVS: read-only parameter, representing the number of SGroup/MGroup transactions completed successfully on the device. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t DVS_STATISTICS_DROPPED_COLUMN = {85}

Parameter address for module DVS: read-only parameter, representing the number of dropped column transactions on the device. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t DVS_STATISTICS_DROPPED_GROUP = {87}

Parameter address for module DVS: read-only parameter, representing the number of dropped SGroup/MGroup transactions on the device. This is a 64bit value, and should always be read using the function: caerDeviceConfigGet64().

static constexpr uint16_t IMU_TYPE = {0}

Parameter address for module IMU: read-only parameter, contains information on the type of IMU chip being used in this device: 0 - no IMU present, 3 - Bosch BMI 160. This is reserved for internal use and should not be used by anything other than libcaer.

static constexpr uint16_t IMU_ORIENTATION_INFO = {1}

Parameter address for module IMU: read-only parameter, contains information on the orientation of the X/Y/Z axes, whether they should be flipped or not on the host when parsing incoming IMU data samples. Bit 2: imuFlipX, Bit 1: imuFlipY, Bit 0: imuFlipZ. This is reserved for internal use and should not be used by anything other than libcaer. Generated IMU events are already properly flipped when returned to the user.

static constexpr uint16_t IMU_RUN_ACCELEROMETER = {2}

Parameter address for module IMU: enable the IMU’s accelerometer. This takes the IMU chip out of sleep.

static constexpr uint16_t IMU_RUN_GYROSCOPE = {3}

Parameter address for module IMU: enable the IMU’s gyroscope. This takes the IMU chip out of sleep.

static constexpr uint16_t IMU_RUN_TEMPERATURE = {4}

Parameter address for module IMU: enable the IMU’s temperature sensor. This takes the IMU chip out of sleep.

static constexpr uint16_t IMU_ACCEL_DATA_RATE = {5}

Parameter address for module IMU: accelerometer data rate, 8 settings: 0 - 12.5 Hz, 1 - 25 Hz, 2 - 50 Hz, 3 - 100 Hz, 4 - 200 Hz, 5 - 400 Hz, 6 - 800 Hz, 7 - 1600 Hz.

static constexpr uint16_t IMU_ACCEL_FILTER = {6}

Parameter address for module IMU: accelerometer filter, 3 settings: 0 - OSR4, 1 - OSR2, 2 - Normal.

static constexpr uint16_t IMU_ACCEL_RANGE = {7}

Parameter address for module IMU: accelerometer range, 4 settings: 0 - ±2g, 1 - ±4g, 2 - ±8g, 3 - ±16g.

static constexpr uint16_t IMU_GYRO_DATA_RATE = {8}

Parameter address for module IMU: gyroscope data rate, 8 settings: 0 - 25 Hz, 1 - 50 Hz, 2 - 100 Hz, 3 - 200 Hz, 4 - 400 Hz, 5 - 800 Hz, 6 - 1600 Hz, 7 - 3200 Hz.

static constexpr uint16_t IMU_GYRO_FILTER = {9}

Parameter address for module IMU: gyroscope filter, 3 settings: 0 - OSR4, 1 - OSR2, 2 - Normal.

static constexpr uint16_t IMU_GYRO_RANGE = {10}

Parameter address for module IMU: gyroscope range, 5 settings: 0 - ±2000°/s, 1 - ±1000°/s, 2 - ±500°/s, 3 - ±250°/s, 4 - ±125°/s.

static constexpr uint16_t EXTINPUT_RUN_DETECTOR = {0}

Parameter address for module EXTINPUT: enable the signal detector module. It generates events when it sees certain types of signals, such as edges or pulses of a defined length, on the SIGNAL pin of the INPUT synchronization connector. This can be useful to inject events into the event stream in response to external stimuli or controls, such as turning on a LED lamp.

static constexpr uint16_t EXTINPUT_DETECT_RISING_EDGES = {1}

Parameter address for module EXTINPUT: send a special EXTERNAL_INPUT_RISING_EDGE event when a rising edge is detected (transition from low voltage to high).

static constexpr uint16_t EXTINPUT_DETECT_FALLING_EDGES = {2}

Parameter address for module EXTINPUT: send a special EXTERNAL_INPUT_FALLING_EDGE event when a falling edge is detected (transition from high voltage to low).

static constexpr uint16_t EXTINPUT_RUN_GENERATOR = {11}

Parameter address for module EXTINPUT: enable the signal generator module. It generates a PWM-like signal based on configurable parameters and outputs it on the OUT JACK signal.

static constexpr uint16_t EXTINPUT_GENERATE_PULSE_INTERVAL = {13}

Parameter address for module EXTINPUT: the interval between the start of two consecutive pulses, expressed in cycles at LogicClock frequency (see ‘struct caer_davis_info’ for details on how to get the frequency). This must be greater than or equal to EXTINPUT_GENERATE_PULSE_LENGTH. To generate a signal with a 50% duty cycle, this must be exactly double EXTINPUT_GENERATE_PULSE_LENGTH.

static constexpr uint16_t EXTINPUT_GENERATE_PULSE_LENGTH = {14}

Parameter address for module EXTINPUT: the length a pulse stays active, expressed in cycles at LogicClock frequency (see ‘struct caer_davis_info’ for details on how to get the frequency). This must be less than or equal to EXTINPUT_GENERATE_PULSE_INTERVAL. To generate a signal with a 50% duty cycle, this must be exactly half of EXTINPUT_GENERATE_PULSE_INTERVAL.

static constexpr uint16_t EXTINPUT_GENERATE_INJECT_ON_RISING_EDGE = {15}

Parameter address for module EXTINPUT: enables event injection when a rising edge occurs in the generated signal; a special event EXTERNAL_GENERATOR_RISING_EDGE is emitted into the event stream.

static constexpr uint16_t EXTINPUT_GENERATE_INJECT_ON_FALLING_EDGE = {16}

Parameter address for module EXTINPUT: enables event injection when a falling edge occurs in the generated signal; a special event EXTERNAL_GENERATOR_FALLING_EDGE is emitted into the event stream.

static constexpr uint16_t SYSINFO_LOGIC_VERSION = {0}

Parameter address for module SYSINFO: read-only parameter, the version of the logic currently running on the device’s FPGA/CPLD. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_info’ documentation to get this information.

static constexpr uint16_t SYSINFO_CHIP_IDENTIFIER = {1}

Parameter address for module SYSINFO: read-only parameter, an integer used to identify the different types of sensor chips used on the device. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_info’ documentation to get this information.

static constexpr uint16_t SYSINFO_DEVICE_IS_MASTER = {2}

Parameter address for module SYSINFO: read-only parameter, whether the device is currently a timestamp master or slave when synchronizing multiple devices together. This is reserved for internal use and should not be used by anything other than libcaer. Please see the ‘struct caer_info’ documentation to get this information.

static constexpr uint16_t SYSINFO_LOGIC_CLOCK = {3}

Parameter address for module SYSINFO: read-only parameter, the frequency in MHz at which the main FPGA/CPLD logic is running. This is reserved for internal use and should not be used by anything other than libcaer.

static constexpr uint16_t SYSINFO_USB_CLOCK = {5}

Parameter address for module SYSINFO: read-only parameter, the frequency in MHz at which the FPGA/CPLD logic related to USB data transmission is running. This is reserved for internal use and should not be used by anything other than libcaer.

static constexpr uint16_t SYSINFO_CLOCK_DEVIATION = {6}

Parameter address for module SYSINFO: read-only parameter, the deviation factor for the clocks. Due to how FX3 generates the clocks, which are then used by FPGA/CPLD, they are not integers but have a fractional part. This is reserved for internal use and should not be used by anything other than libcaer.

static constexpr uint16_t SYSINFO_LOGIC_PATCH = {7}

Parameter address for module SYSINFO: read-only parameter, the patch version of the logic currently running on the device’s FPGA/CPLD. This is reserved for internal use and should not be used by anything other than libcaer.

static constexpr uint16_t USB_RUN = {0}

Parameter address for module USB: enable the USB FIFO module, which transfers the data from the FPGA/CPLD to the USB chip, to be then sent to the host. Turning this off will suppress any USB data communication!

static constexpr uint16_t USB_EARLY_PACKET_DELAY = {1}

Parameter address for module USB: the time delay after which a packet of data is committed to USB, even if it is not full yet (short USB packet). The value is in 125µs time-slices, corresponding to how USB schedules its operations (a value of 4 for example would mean waiting at most 0.5ms until sending a short USB packet to the host).

static constexpr uint16_t MODULE_DEVICE = {5}

Module address: device-side chip configuration. This state machine is responsible for configuring the Samsung DVS chip.

static constexpr uint16_t REGISTER_BIAS_CURRENT_RANGE_SELECT_LOGSFONREST = {0x000B}
static constexpr uint16_t REGISTER_BIAS_CURRENT_RANGE_SELECT_LOGALOGD_MONITOR = {0x000C}
static constexpr uint16_t REGISTER_BIAS_OTP_TRIM = {0x000D}
static constexpr uint16_t REGISTER_BIAS_PINS_DBGP = {0x000F}
static constexpr uint16_t REGISTER_BIAS_PINS_DBGN = {0x0010}
static constexpr uint16_t REGISTER_BIAS_CURRENT_LEVEL_SFOFF = {0x0012}
static constexpr uint16_t REGISTER_BIAS_PINS_BUFP = {0x0013}
static constexpr uint16_t REGISTER_BIAS_PINS_BUFN = {0x0014}
static constexpr uint16_t REGISTER_BIAS_PINS_DOB = {0x0015}
static constexpr uint16_t REGISTER_BIAS_CURRENT_AMP = {0x0018}
static constexpr uint16_t REGISTER_BIAS_CURRENT_ON = {0x001C}
static constexpr uint16_t REGISTER_BIAS_CURRENT_OFF = {0x001E}
static constexpr uint16_t REGISTER_CONTROL_MODE = {0x3000}
static constexpr uint16_t REGISTER_CONTROL_INTERRUPT_SOURCE = {0x3004}
static constexpr uint16_t REGISTER_CONTROL_INTERRUPT_ENABLE_TIME = {0x3005}
static constexpr uint16_t REGISTER_CONTROL_INTERRUPT_ACKNOWLEDGE = {0x3007}
static constexpr uint16_t REGISTER_CONTROL_INTERRUPT_AUTO_MODE = {0x3008}
static constexpr uint16_t REGISTER_CONTROL_INTERRUPT_RELEASE_TIME = {0x3009}
static constexpr uint16_t REGISTER_CONTROL_PLL_P = {0x300D}
static constexpr uint16_t REGISTER_CONTROL_PLL_M = {0x300E}
static constexpr uint16_t REGISTER_CONTROL_PLL_S = {0x3010}
static constexpr uint16_t REGISTER_CONTROL_CLOCK_DIVIDER_SYS = {0x3011}
static constexpr uint16_t REGISTER_CONTROL_CLOCK_DIVIDER_PVI = {0x3012}
static constexpr uint16_t REGISTER_CONTROL_PARALLEL_OUT_CONTROL = {0x3019}
static constexpr uint16_t REGISTER_CONTROL_PARALLEL_OUT_ENABLE = {0x301E}
static constexpr uint16_t REGISTER_CONTROL_PACKET_FORMAT = {0x3067}
static constexpr uint16_t REGISTER_DIGITAL_ENABLE = {0x3200}
static constexpr uint16_t REGISTER_DIGITAL_RESTART = {0x3201}
static constexpr uint16_t REGISTER_DIGITAL_DUAL_BINNING = {0x3202}
static constexpr uint16_t REGISTER_DIGITAL_SUBSAMPLE_RATIO = {0x3204}
static constexpr uint16_t REGISTER_DIGITAL_AREA_BLOCK = {0x3205}
static constexpr uint16_t REGISTER_DIGITAL_TIMESTAMP_SUBUNIT = {0x3234}
static constexpr uint16_t REGISTER_DIGITAL_TIMESTAMP_REFUNIT = {0x3235}
static constexpr uint16_t REGISTER_DIGITAL_DTAG_REFERENCE = {0x323D}
static constexpr uint16_t REGISTER_DIGITAL_TIMESTAMP_RESET = {0x3238}
static constexpr uint16_t REGISTER_TIMING_FIRST_SELX_START = {0x323C}
static constexpr uint16_t REGISTER_TIMING_GH_COUNT = {0x3240}
static constexpr uint16_t REGISTER_TIMING_GH_COUNT_FINE = {0x3243}
static constexpr uint16_t REGISTER_TIMING_GRS_COUNT = {0x3244}
static constexpr uint16_t REGISTER_TIMING_GRS_COUNT_FINE = {0x3247}
static constexpr uint16_t REGISTER_DIGITAL_GLOBAL_RESET_READOUT = {0x3248}
static constexpr uint16_t REGISTER_TIMING_NEXT_GH_CNT = {0x324B}
static constexpr uint16_t REGISTER_TIMING_SELX_WIDTH = {0x324C}
static constexpr uint16_t REGISTER_TIMING_AY_START = {0x324E}
static constexpr uint16_t REGISTER_TIMING_AY_END = {0x324F}
static constexpr uint16_t REGISTER_TIMING_MAX_EVENT_NUM = {0x3251}
static constexpr uint16_t REGISTER_TIMING_R_START = {0x3253}
static constexpr uint16_t REGISTER_TIMING_R_END = {0x3254}
static constexpr uint16_t REGISTER_DIGITAL_MODE_CONTROL = {0x3255}
static constexpr uint16_t REGISTER_TIMING_GRS_END = {0x3256}
static constexpr uint16_t REGISTER_TIMING_GRS_END_FINE = {0x3259}
static constexpr uint16_t REGISTER_DIGITAL_FIXED_READ_TIME = {0x325C}
static constexpr uint16_t REGISTER_TIMING_READ_TIME_INTERVAL = {0x325D}
static constexpr uint16_t REGISTER_DIGITAL_EXTERNAL_TRIGGER = {0x3260}
static constexpr uint16_t REGISTER_TIMING_NEXT_SELX_START = {0x3261}
static constexpr uint16_t REGISTER_DIGITAL_BOOT_SEQUENCE = {0x3266}
static constexpr uint16_t REGISTER_CROPPER_BYPASS = {0x3300}
static constexpr uint16_t REGISTER_CROPPER_Y_START_GROUP = {0x3301}
static constexpr uint16_t REGISTER_CROPPER_Y_START_MASK = {0x3302}
static constexpr uint16_t REGISTER_CROPPER_Y_END_GROUP = {0x3303}
static constexpr uint16_t REGISTER_CROPPER_Y_END_MASK = {0x3304}
static constexpr uint16_t REGISTER_CROPPER_X_START_ADDRESS = {0x3305}
static constexpr uint16_t REGISTER_CROPPER_X_END_ADDRESS = {0x3307}
static constexpr uint16_t REGISTER_ACTIVITY_DECISION_BYPASS = {0x3500}
static constexpr uint16_t REGISTER_SPATIAL_HISTOGRAM_OFF = {0x3600}
static constexpr uint16_t DVS_CHIP_MODE_OFF = {0}
static constexpr uint16_t DVS_CHIP_MODE_MONITOR = {1}
static constexpr uint16_t DVS_CHIP_MODE_STREAM = {2}
static constexpr uint16_t DVS_CHIP_DTAG_CONTROL_STOP = {0}
static constexpr uint16_t DVS_CHIP_DTAG_CONTROL_START = {1}
static constexpr uint16_t DVS_CHIP_DTAG_CONTROL_RESTART = {2}
static constexpr uint16_t SYSTEM_CLOCK_FREQUENCY = {50}
static constexpr uint16_t PID_DVXPLORER = {0x8419}
static constexpr uint8_t FIRMWARE_REQUIRED_VERSION = {9}
static constexpr uint32_t LOGIC_REQUIRED_VERSION = {18}
static constexpr uint32_t LOGIC_MINIMUM_PATCH = {4}
static constexpr uint32_t USB_EARLY_PACKET_DELAY_MAX = {(1 << 20) - 1}
static constexpr uint32_t EXT_INPUT_TIME_MAX = {(1 << 20) - 1}
static constexpr auto COMPATIBLE_CAMERA = [](const uint16_t vid, const uint16_t pid, const USBDeviceType deviceType) -> std::optional<CameraModel> {
    if ((vid == VID_INIVATION) && (pid == PID_DVXPLORER) && (deviceType == USBDeviceType::FX3_RED)) {
        return CameraModel::DVXPLORER;
    }
    return std::nullopt;
}

Friends

inline friend std::ostream &operator<<(std::ostream &os, const SubSample &var)
inline friend std::ostream &operator<<(std::ostream &os, const ReadoutFPS &var)
class DVXplorerM : public dv::io::camera::USBDeviceNextGen, public dv::io::camera::CameraInputBase

Public Functions

inline explicit DVXplorerM(const LogLevel deviceLogLevel = LogLevel::LVL_WARNING, loggerCallbackType deviceLogger = {})

Open the first DVXplorer Mini/Micro camera that can be found. Throws if device cannot be opened.

Parameters:
  • deviceLogLevel – initial log-level

  • deviceLogger – per-device logging callback, must be thread-safe

inline explicit DVXplorerM(const std::string_view filterBySerialNumber, const LogLevel deviceLogLevel = LogLevel::LVL_WARNING, loggerCallbackType deviceLogger = {})

Open the DVXplorer Mini/Micro camera with the specified serial number. Throws if device cannot be opened.

Parameters:
  • filterBySerialNumber – serial number to search for

  • deviceLogLevel – initial log-level

  • deviceLogger – per-device logging callback, must be thread-safe

inline explicit DVXplorerM(const DeviceDescriptor &deviceToOpen, const LogLevel deviceLogLevel = LogLevel::LVL_WARNING, loggerCallbackType deviceLogger = {})

Open the DVXplorer Mini/Micro camera corresponding to the specified descriptor. Throws if device cannot be opened.

Parameters:
  • deviceToOpen – device descriptor structure

  • deviceLogLevel – initial log-level

  • deviceLogger – per-device logging callback, must be thread-safe

inline ~DVXplorerM() override
inline void configLoadFromFlash()

Load configuration from on-device flash memory.

inline void configStoreToFlash()

Store current configuration to on-device flash memory.

inline virtual std::string getCameraName() const override

Get camera name, which is a combination of the camera model and the serial number.

Returns:

String containing the camera model and serial number separated by an underscore character.

inline virtual std::optional<cv::Size> getEventResolution() const override

Get event stream resolution.

Returns:

Event stream resolution, std::nullopt if event stream is unavailable.

inline virtual std::optional<cv::Size> getFrameResolution() const override

Retrieve frame stream resolution.

Returns:

Frame stream resolution or std::nullopt if the frame stream is not available.

inline virtual imu::ImuModel getImuModel() const override

Return IMU model used on device.

Returns:

IMU model in use.

inline virtual float getPixelPitch() const override

Return pixel pitch distance for the connected camera model. The value is returned in meters.

Returns:

Pixel pitch distance in meters according to the connected device.

inline virtual bool isEventStreamAvailable() const override

Check whether event stream is available.

Returns:

True if event stream is available, false otherwise.

inline virtual bool isFrameStreamAvailable() const override

Check whether frame stream is available.

Returns:

True if frame stream is available, false otherwise.

inline virtual bool isImuStreamAvailable() const override

Check whether IMU data is available.

Returns:

True if IMU data stream is available, false otherwise.

inline virtual bool isTriggerStreamAvailable() const override

Check whether trigger data is available.

Returns:

True if trigger data stream is available, false otherwise.

inline virtual bool isRunning() const override

Check whether all input data streams are still active. For a live camera this should check if the device is still connected and functioning, while for a recording file this should check if any of the data streams have reached end-of-file (EOF). For a network input, this indicates whether the network stream is still connected.

Returns:

True if data read on all streams is still possible, false otherwise.

inline virtual bool isRunning(const std::string_view streamName) const override

Check whether the input data stream with the specified name is still active.

Returns:

True if data read on this stream is possible, false otherwise.

inline virtual bool isRunningAny() const override

Check whether any input data streams are still available. For a live camera this should check if the device is still connected and functioning and at least one data stream is active (different than isRunning()), while for a recording file this should check if any of the data streams have not yet reached end-of-file (EOF) and are still readable. For a network input, this indicates the network stream is still connected.

Returns:

True if data read on at least one stream is still possible, false otherwise.

inline virtual std::chrono::microseconds getTimeInterval() const override

Get the time interval for data commit.

Returns:

Time interval in microseconds.

inline virtual void setTimeInterval(const std::chrono::microseconds timeInterval) override

Set a new time interval value for data commit. Data is put in the queues for getNextEventBatch(), readNext(), … at this interval’s rate.

Parameters:

timeInterval – New time interval value in microseconds.
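
For example, to receive smaller and more frequent data batches (a sketch assuming camera is an already opened DVXplorerM instance):

// Commit data to the read queues every 5 ms.
camera.setTimeInterval(std::chrono::microseconds{5000});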

inline virtual std::chrono::microseconds getTimestampOffset() const override

Get the timestamp offset.

Returns:

Absolute timestamp offset value in microseconds.

inline virtual bool getFlipHorizontal() const override

Status of horizontal events flip.

Returns:

status of horizontal events flip.

inline virtual void setFlipHorizontal(const bool flipHorizontal) override

Flip events horizontally.

Parameters:

flipHorizontal – flip events horizontally.

inline virtual bool getFlipVertical() const override

Status of vertical events flip.

Returns:

status of vertical events flip.

inline virtual void setFlipVertical(const bool flipVertical) override

Flip events vertically.

Parameters:

flipVertical – flip events vertically.

inline virtual Flatten getFlatten() const override

Status of event polarity flattening.

Returns:

status of event polarity flattening.

inline virtual void setFlatten(const Flatten flatten) override

Flatten events polarity.

Parameters:

flatten – flattening mode.

inline DVXplorer::SubSample getSubSampleHorizontal() const

Get horizontal subsampling mode. VGA sensor: all values. HD sensor: only EVERY_PIXEL, EVERY_SECOND, EVERY_FOURTH, EVERY_EIGHTH.

Returns:

current horizontal subsampling mode.

inline void setSubSampleHorizontal(const DVXplorer::SubSample subSampleHorizontal)

Set horizontal subsampling mode. VGA sensor: all values. HD sensor: only EVERY_PIXEL, EVERY_SECOND, EVERY_FOURTH, EVERY_EIGHTH.

Parameters:

subSampleHorizontal – new horizontal subsampling mode.

inline DVXplorer::SubSample getSubSampleVertical() const

Get vertical subsampling mode. VGA sensor: all values. HD sensor: only EVERY_PIXEL, EVERY_SECOND, EVERY_FOURTH, EVERY_EIGHTH.

Returns:

current vertical subsampling mode.

inline void setSubSampleVertical(const DVXplorer::SubSample subSampleVertical)

Set vertical subsampling mode. VGA sensor: all values. HD sensor: only EVERY_PIXEL, EVERY_SECOND, EVERY_FOURTH, EVERY_EIGHTH.

Parameters:

subSampleVertical – new vertical subsampling mode.

inline bool getGlobalHold() const

Report status of global hold feature.

Returns:

true if enabled, false otherwise.

inline void setGlobalHold(const bool globalHold)

Enable or disable global hold feature. Default is enabled. For some applications like LED tracking, setting this to false can help.

Parameters:

globalHold – true to enable, false to disable.

inline bool getGlobalReset() const

Report status of global reset feature.

Returns:

true if enabled, false otherwise.

inline void setGlobalReset(const bool globalReset)

Enable or disable global reset feature.

Parameters:

globalReset – true to enable, false to disable.

inline uint8_t getGlobalResetSkip() const

Get global reset skip value.

Returns:

only perform global reset every N+1 readout frames.

inline void setGlobalResetSkip(const uint8_t globalResetSkip)

Control global reset skip feature. Only perform global reset, if enabled, every N+1 readout frames. Setting to zero disables this feature.

Parameters:

globalResetSkip – only perform global reset every N+1 readout frames.

inline bool getMIPITimeoutEnable() const

Report status of MIPI timeout feature.

Returns:

true if enabled, false otherwise.

inline void setMIPITimeoutEnable(const bool mipiTimeoutEnable)

Enable MIPI timeout, will ensure data is sent from sensor to host at least every N microseconds, even during periods of low activity. See setMIPITimeoutValue() to set the timeout value.

Parameters:

mipiTimeoutEnable – true to enable, false to disable.

inline std::chrono::microseconds getMIPITimeoutValue() const

Get MIPI timeout in microseconds.

Returns:

timeout in microseconds.

inline void setMIPITimeoutValue(const std::chrono::microseconds mipiTimeoutValueUs)

Set MIPI timeout in microseconds. See setMIPITimeoutEnable() to enable this functionality.

Parameters:

mipiTimeoutValueUs – timeout in microseconds.

inline DVXplorer::ReadoutFPS getReadoutFPS() const

Get currently set event-frame readout frequency. Not available on HD sensor.

Returns:

event-frame readout frequency.

inline void setReadoutFPS(const DVXplorer::ReadoutFPS readoutFps)

Set frequency of event-frame readouts on sensor. Not available on HD sensor. CONSTANT frequencies are guaranteed fixed frequencies with no data loss. CONSTANT_LOSSY are guaranteed fixed frequencies but will cut off a readout to respect the timing if too much data is present, resulting in data loss. VARIABLE are best-effort frequencies that will change if lots of data is present, slowing down, but without data loss.

Parameters:

readoutFps – event-frame readout frequency.

inline uint8_t getContrastThresholdOn() const

Get the contrast threshold for ON polarity event generation. VGA sensor: values from 0 to 17. HD sensor: values from 0 to 127.

Returns:

ON contrast threshold.

inline void setContrastThresholdOn(const uint8_t contrastThresholdOn)

Set the contrast threshold for ON polarity event generation. VGA sensor: values from 0 to 17. HD sensor: values from 0 to 127.

Parameters:

contrastThresholdOn – ON contrast threshold.

inline uint8_t getContrastThresholdOff() const

Get the contrast threshold for OFF polarity event generation. VGA sensor: values from 0 to 17. HD sensor: values from 0 to 127.

Returns:

OFF contrast threshold.

inline void setContrastThresholdOff(const uint8_t contrastThresholdOff)

Set the contrast threshold for OFF polarity event generation. VGA sensor: values from 0 to 17. HD sensor: values from 0 to 127.

Parameters:

contrastThresholdOff – OFF contrast threshold.

inline virtual cv::Rect getCropArea() const override

Get events Region of Interest (ROI).

Returns:

Current events Region of Interest (ROI).

inline virtual void setCropArea(const cv::Rect cropArea) override

Set events Region of Interest (ROI). Usually hardware accelerated.

Parameters:

cropArea – region of interest (ROI) position and size.

inline bool getAreaBlock(const uint32_t blockX, const uint32_t blockY) const

Report if the pixel block at the given X,Y address is filtered out or not.

Parameters:
  • blockX – horizontal address of pixel block. VGA sensor: from 0 to 19, pixel block size 32x32. HD sensor: from 0 to 23, pixel block size 40x40.

  • blockY – vertical address of pixel block. VGA sensor: from 0 to 14, pixel block size 32x32. HD sensor: from 0 to 17, pixel block size 40x40.

Returns:

true if filtered out, false if normal operation.

inline void setAreaBlock(const uint32_t blockX, const uint32_t blockY, const bool block)

Enable pixel block filtering at the given X,Y address.

Parameters:
  • blockX – horizontal address of pixel block. VGA sensor: from 0 to 19, pixel block size 32x32. HD sensor: from 0 to 23, pixel block size 40x40.

  • blockY – vertical address of pixel block. VGA sensor: from 0 to 14, pixel block size 32x32. HD sensor: from 0 to 17, pixel block size 40x40.

  • block – true to filter out, false for normal operation.
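
A configuration sketch combining several of the settings above; it assumes camera is an opened DVXplorerM instance with a VGA sensor, and that the enumerators are reachable as written (they may need additional namespace qualification):

// Halve the event rate in both directions.
camera.setSubSampleHorizontal(DVXplorer::SubSample::EVERY_SECOND);
camera.setSubSampleVertical(DVXplorer::SubSample::EVERY_SECOND);

// Raise both contrast thresholds slightly (VGA sensor range: 0 to 17).
camera.setContrastThresholdOn(12);
camera.setContrastThresholdOff(12);

// Disable global hold, which can help for applications such as LED tracking.
camera.setGlobalHold(false);

// Guarantee data delivery at least every 10 ms during periods of low activity.
camera.setMIPITimeoutEnable(true);
camera.setMIPITimeoutValue(std::chrono::microseconds{10000});

// Filter out events from the top-left 32x32 pixel block.
camera.setAreaBlock(0, 0, true);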

inline bool isImuRunningAccelerometer() const

Status of IMU accelerometer.

Returns:

true if enabled, false otherwise.

inline void setImuRunningAccelerometer(const bool run)

Enable or disable IMU accelerometer.

Parameters:

run – true to enable, false to disable.

inline bool isImuRunningGyroscope() const

Status of IMU gyroscope.

Returns:

true if enabled, false otherwise.

inline void setImuRunningGyroscope(const bool run)

Enable or disable IMU gyroscope.

Parameters:

run – true to enable, false to disable.

inline bool isImuRunningTemperature() const

Status of IMU temperature measurement.

Returns:

true if enabled, false otherwise.

inline void setImuRunningTemperature(const bool run)

Enable or disable IMU temperature measurement.

Parameters:

run – true to enable, false to disable.

inline imu::BoschBMI160AccelDataRate getImuAccelDataRate() const

Get current IMU accelerometer data rate.

Returns:

accelerometer data rate.

inline void setImuAccelDataRate(const imu::BoschBMI160AccelDataRate dataRate)

Set IMU accelerometer data rate.

Parameters:

dataRate – accelerometer data rate.

inline imu::BoschBMI160AccelFilter getImuAccelFilter() const

Get current IMU accelerometer filter setting.

Returns:

accelerometer filter setting.

inline void setImuAccelFilter(const imu::BoschBMI160AccelFilter filter)

Set IMU accelerometer filter setting.

Parameters:

filter – accelerometer filter setting.

inline imu::BoschBMI160AccelRange getImuAccelRange() const

Get current IMU accelerometer range.

Returns:

accelerometer range.

inline void setImuAccelRange(const imu::BoschBMI160AccelRange range)

Set IMU accelerometer range.

Parameters:

range – accelerometer range.

inline imu::BoschBMI160GyroDataRate getImuGyroDataRate() const

Get current IMU gyroscope data rate.

Returns:

gyroscope data rate.

inline void setImuGyroDataRate(const imu::BoschBMI160GyroDataRate dataRate)

Set IMU gyroscope data rate.

Parameters:

dataRate – gyroscope data rate.

inline imu::BoschBMI160GyroFilter getImuGyroFilter() const

Get current IMU gyroscope filter setting.

Returns:

gyroscope filter setting.

inline void setImuGyroFilter(const imu::BoschBMI160GyroFilter filter)

Set IMU gyroscope filter setting.

Parameters:

filter – gyroscope filter setting.

inline imu::BoschBMI160GyroRange getImuGyroRange() const

Get current IMU gyroscope range.

Returns:

gyroscope range.

inline void setImuGyroRange(const imu::BoschBMI160GyroRange range)

Set IMU gyroscope range.

Parameters:

range – gyroscope range.

Public Static Functions

static inline auto findDevices(const std::string_view filterBySerialNumber = {})

Find connected DVXplorer Mini/Micro cameras.

Parameters:

filterBySerialNumber – only search for devices with this serial number

Returns:

a descriptor structure describing a compatible device

Private Types

enum class SensorModel

Values:

enumerator S5K231Y
enumerator S5KRC1S

Private Functions

inline void shutdownCallback()
inline void usbDataCallback(const std::span<const uint8_t> data)
inline void usbDebugCallback(const std::span<const uint8_t> data) const
inline void dataParserCallback(parser::ParsedData data)
inline void timeInitCallback()
inline char getDVSOrientationX() const
inline char getDVSOrientationY() const

Private Members

cv::Size mResolution
SensorModel mSensorModel
imu::ImuModel mImuModel
std::unique_ptr<parser::ParserBase> mParser
mutable std::mutex mConfigLock
std::atomic<bool> mIsRunning = {true}

Private Static Attributes

static constexpr uint8_t MODULE_DVS = {1}
static constexpr uint8_t DVS_RESOLUTION_X = {0}
static constexpr uint8_t DVS_RESOLUTION_Y = {1}
static constexpr uint8_t DVS_ORIENTATION = {2}
static constexpr uint8_t DVS_RUN = {3}
static constexpr uint8_t DVS_FLATTEN = {4}
static constexpr uint8_t DVS_SUBSAMPLE_HORIZONTAL = {5}
static constexpr uint8_t DVS_SUBSAMPLE_VERTICAL = {6}
static constexpr uint8_t DVS_GLOBAL_HOLD = {8}
static constexpr uint8_t DVS_GLOBAL_RESET = {9}
static constexpr uint8_t DVS_GLOBAL_RESET_SKIP = {10}
static constexpr uint8_t DVS_MIPI_TIMEOUT_ENABLE = {11}
static constexpr uint8_t DVS_MIPI_TIMEOUT_VALUE = {12}
static constexpr uint8_t DVS_EFPS_S5K231Y = {13}
static constexpr uint8_t DVS_CONTRAST_THRESHOLD_ON = {14}
static constexpr uint8_t DVS_CONTRAST_THRESHOLD_OFF = {15}
static constexpr uint8_t DVS_AREA_BLOCKING_X_BLOCK = {16}
static constexpr uint8_t DVS_AREA_BLOCKING_Y_BLOCK = {17}
static constexpr uint8_t DVS_AREA_BLOCKING_BLOCK = {18}
static constexpr uint8_t DVS_CROP_X = {19}
static constexpr uint8_t DVS_CROP_Y = {20}
static constexpr uint8_t DVS_CROP_WIDTH = {21}
static constexpr uint8_t DVS_CROP_HEIGHT = {22}
static constexpr uint8_t DVS_CROP_APPLY = {23}
static constexpr uint8_t DVS_FLIP_HORIZONTAL = {24}
static constexpr uint8_t DVS_FLIP_VERTICAL = {25}
static constexpr uint8_t MODULE_IMU = {3}
static constexpr uint8_t IMU_MODEL = {0}
static constexpr uint8_t IMU_ORIENTATION = {1}
static constexpr uint8_t IMU_RUN_ACCELEROMETER = {2}
static constexpr uint8_t IMU_RUN_GYROSCOPE = {3}
static constexpr uint8_t IMU_RUN_TEMPERATURE = {4}
static constexpr uint8_t IMU_ACCEL_DATA_RATE = {5}
static constexpr uint8_t IMU_ACCEL_FILTER = {6}
static constexpr uint8_t IMU_ACCEL_RANGE = {7}
static constexpr uint8_t IMU_GYRO_DATA_RATE = {8}
static constexpr uint8_t IMU_GYRO_FILTER = {9}
static constexpr uint8_t IMU_GYRO_RANGE = {10}
static constexpr uint8_t IMU_FOC_RUN = {11}
static constexpr uint8_t IMU_FOC_ACCEL_X = {12}
static constexpr uint8_t IMU_FOC_ACCEL_Y = {13}
static constexpr uint8_t IMU_FOC_ACCEL_Z = {14}
static constexpr uint8_t IMU_FOC_GYRO_X = {15}
static constexpr uint8_t IMU_FOC_GYRO_Y = {16}
static constexpr uint8_t IMU_FOC_GYRO_Z = {17}
static constexpr uint8_t IMU_FOC_APPLY = {18}
static constexpr uint8_t VR_CONFIG_STORE = {0xC7}
static constexpr uint8_t VR_CONFIG_RELOAD = {0xC8}
static constexpr uint16_t PID_DVXPLORER = {0x8419}
static constexpr uint8_t FIRMWARE_REQUIRED_VERSION = {10}
static constexpr auto COMPATIBLE_CAMERA = [](const uint16_t vid, const uint16_t pid, const USBDeviceType deviceType) -> std::optional<CameraModel> {if ((vid == VID_INIVATION) && (pid == PID_DVXPLORER) && (deviceType == USBDeviceType::CX3_MIPI)) {return CameraModel::DVXPLORER_M;}return std::nullopt;}
struct eColumn

Public Functions

inline explicit eColumn(const uint32_t event)

Public Members

int16_t columnAddress
int16_t timestampSubUnit
bool startOfFrame
struct eColumn

Public Functions

inline explicit eColumn(const uint32_t event)

Public Members

int16_t columnAddress
uint8_t frameNumber
bool startOfFrame
bool mirrorMode
struct eDataLost
class EdgeMapAccumulator : public dv::AccumulatorBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/frame/edge_map_accumulator.hpp>

dv::EdgeMapAccumulator accumulates events in a histogram representation with configurable contribution, but it is more efficient compared to the generic accumulator since it uses 8-bit unsigned integers as the internal memory type.

The EdgeMapAccumulator behaves the same as a generic dv::Accumulator with STEP decay function, neutral and minimum value of 0.0, maximum value of 1.0 and configurable event contribution. The difference is that it doesn’t use floating point numbers for the potential surface representation. The output data type of this accumulator is single channel 8-bit unsigned integer (CV_8UC1). Accumulation is performed using integer operations as well. Due to performance, no check on the event coordinates inside image plane is performed, unless compiled specifically in DEBUG mode. Events out of the image plane bounds will result in undefined behaviour, or program termination in DEBUG mode.

Public Functions

inline explicit EdgeMapAccumulator(const cv::Size &resolution, const float contribution_ = 0.25f, const bool ignorePolarity_ = true, const float neutralPotential = 0.f, const float decay_ = EdgeMapAccumulator::DECAY_FULL)

Create a pixel accumulator with known image dimensions and event contribution.

Parameters:
  • resolution – Dimensions of the expected event sensor

  • contribution_ – Contribution coefficient for a single event. The contribution value is multiplied by the maximum possible pixel value (255) to get the increment value. E.g. a contribution value of 0.1 will increment the pixel value at a single event's coordinates by 26.

  • ignorePolarity_ – Set ignore polarity option. All events are considered positive if enabled.

  • neutralPotential – Neutral potential value. Neutral value is the default pixel value when decay is disabled and the value that pixels decay into when decay is enabled. The range for neutral potential value is [0.0; 1.0], where 1.0 stands for maximum possible potential - 255 in 8-bit pixel representation.

  • decay_ – Decay coefficient value. This value defines how fast pixel values decay to neutral value. The bigger the value the faster the pixel value will reach neutral value. Decay is applied before each frame generation. The range for decay value is [0.0; 1.0], where 0.0 will not apply any decay and 1.0 will apply maximum decay value resetting a pixel to neutral potential at each generation (default behavior).

inline float getEventContribution() const

Get the contribution coefficient for a single event. The contribution value is multiplied by the maximum possible pixel value (255) to get the increment value. E.g. a contribution value of 0.1 will increment the pixel value at a single event's coordinates by 26.

Returns:

Contribution coefficient

inline void setEventContribution(const float contribution_)

Set new contribution coefficient.

Parameters:

contribution_ – Contribution coefficient for a single event. The contribution value is multiplied by the maximum possible pixel value (255) to get the increment value. E.g. a contribution value of 0.1 will increment the pixel value at a single event's coordinates by 26.

inline virtual void accumulate(const EventStore &packet) override

Perform accumulation on given events.

Parameters:

packet – Event store containing event to be accumulated.

inline virtual dv::Frame generateFrame() override

Generates the accumulation frame (potential surface) at the time of the last consumed event. The output frame contains single-channel data of type CV_8UC1.

The function resets any events accumulated up to this function call.

Returns:

accumulated frame

inline void reset()

Clear the buffered events.

inline EdgeMapAccumulator &operator<<(const EventStore &store)

Accumulates the event store into the accumulator.

Parameters:

store – The event store to be accumulated.

Returns:

A reference to this EdgeMapAccumulator.

inline bool isIgnorePolarity() const

Check whether ignore polarity option is set to true.

Returns:

True if the accumulator assumes all events as positive, false otherwise.

inline void setIgnorePolarity(const bool ignorePolarity_)

Set ignore polarity option. All events are considered positive if enabled.

Parameters:

ignorePolarity_ – True to enable ignore polarity option.

inline float getNeutralPotential() const

Get the neutral potential value for the accumulator. The range for potential value is [0.0; 1.0], where 1.0 stands for maximum possible potential - 255 in 8-bit pixel representation.

Returns:

Neutral potential value in range [0.0; 1.0]

inline void setNeutralPotential(const float neutralPotential)

Set the neutral potential value. The value should be in range 0.0 to 1.0, other values will be clamped to this range.

Parameters:

neutralPotential – Neutral potential value in range [0.0; 1.0].

inline float getDecay() const

Get current decay value.

Returns:

Decay value.

inline void setDecay(const float decay_)

Set the decay value. Decay value is clamped to range of [0.0; 1.0].

Parameters:

decay_ – Decay value. A negative value disables the decay.

Public Static Attributes

static constexpr float DECAY_NONE = 0.0f

Decay coefficient value to disable any decay - zero decay.

static constexpr float DECAY_FULL = 1.0f

Maximum decay coefficient value which causes reset of pixels into neutral potential at each frame generation.
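
A minimal accumulation sketch, assuming events is a dv::EventStore coming from a 640x480 sensor:

// A contribution of 0.25 means each event adds roughly 64 to its pixel value.
dv::EdgeMapAccumulator accumulator(cv::Size(640, 480), 0.25f);

accumulator.accept(events);                    // equivalent: accumulator << events;
dv::Frame frame = accumulator.generateFrame(); // single-channel CV_8UC1 image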

Protected Types

enum class DecayMode

Values:

enumerator NONE
enumerator FULL
enumerator DECAY

Protected Attributes

dv::EventStore buffer

Buffer to keep the latest events

uint8_t maxByteValue = 255

Max unsigned byte value

float contribution = 0.25f

Default contribution

uint8_t drawIncrement = (static_cast<uint8_t>(static_cast<float>(maxByteValue) * contribution))

Increment value for a single event

std::vector<uint8_t> incrementLUT

A look-up table for increment values at each possible pixel value.

bool ignorePolarity = true
float neutralValue = 0.f
uint8_t neutralByteValue = 0
float decay = 1.0
std::vector<uint8_t> decayLUT
cv::Mat imageBuffer
DecayMode decayMode = DecayMode::FULL

Friends

inline friend std::ostream &operator<<(std::ostream &os, const DecayMode &var)
struct eFrameEnd

Public Functions

inline explicit eFrameEnd(const uint32_t event)

Public Members

uint8_t frameNumber
struct EigenEvents
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/core.hpp>

A structure that contains events represented as Eigen matrices. Useful for mathematical operations using the Eigen library.

Public Functions

inline explicit EigenEvents(const size_t size)

Public Members

Eigen::Matrix<int64_t, Eigen::Dynamic, 1> timestamps
Eigen::Matrix<int16_t, Eigen::Dynamic, 2> coordinates
Eigen::Matrix<uint8_t, Eigen::Dynamic, 1> polarities
struct EmptyException

Subclassed by dv::exceptions::info::BadAlloc, dv::exceptions::info::IOError, dv::exceptions::info::LengthError, dv::exceptions::info::NullPointer, dv::exceptions::info::OutOfRange, dv::exceptions::info::RuntimeError

Public Types

using Info = void
struct EndOfFile

Public Types

using Info = std::filesystem::path

Public Static Functions

static inline std::string format(const Info &info)
struct Epanechnikov

Public Static Functions

static inline float getSearchRadius(const float bandwidth)
static inline float apply(const float squaredDistance, const float bandwidth)
struct ErrorInfo

Public Members

std::string mName
std::string mTypeIdentifier
struct ErrorInfo

Public Members

std::string mName
std::string mTypeIdentifier
struct eSMGroup

Public Functions

inline explicit eSMGroup(const uint32_t event)

Public Members

int16_t group1Address
int16_t group2Address
uint8_t group1Events
uint8_t group2Events
bool group1Polarity
bool group2Polarity
struct eSMGroup

Public Functions

inline explicit eSMGroup(const uint32_t event)

Public Members

int16_t group1Address
int16_t group2Offset
uint8_t group1Events
uint8_t group2Events
bool group1Polarity
bool group2Polarity
struct eTimestampReference

Public Functions

inline explicit eTimestampReference(const uint32_t event)

Public Members

int32_t timestampReference
struct eTimestampReference

Public Functions

inline explicit eTimestampReference(const uint32_t event)

Public Members

int32_t timestampReference
struct eTimestampSubUnit

Public Functions

inline explicit eTimestampSubUnit(const uint32_t event)

Public Members

int32_t timestampSubUnit
class EventBlobDetector
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/event_blob_detector.hpp>

Event-based blob detector performing detection on accumulated event images.

Public Functions

inline explicit EventBlobDetector(const cv::Size &resolution, const int pyramidLevel = 0, std::function<void(cv::Mat&)> preprocess = {}, cv::Ptr<cv::SimpleBlobDetector> blobDetector = defaultBlobDetector())

Constructor for blob detector.

The detection steps are as follows:

1) Compute the accumulated image from events.

2) Apply the ROI to the accumulated event image.

3) Down-sample the image (if pyramidLevel >= 1).

4) Apply the preprocess function (if provided).

5) Detect blobs.

6) Rescale blobs to the original resolution (if pyramidLevel >= 1).

7) If the ROI has an offset from (0,0) of the initial image plane, add the offset back so that blob locations are expressed in the original image coordinate system.

8) Remove blobs where the mask value is 0.

Parameters:
  • resolution – original image plane resolution

  • pyramidLevel – integer defining the number of down-sampling steps applied to the accumulated image. Each step down-samples by a factor of 2, so pyramidLevel = 3 reduces the image by a factor of 8 overall; an image of size (100, 100) is down-sampled to roughly (13, 13) before performing the blob detection. Note that blob locations are always returned in the original resolution.

  • preprocess – function to be applied to the accumulated image before performing the detection step. The function modifies the input image in place. Internally, the API checks that the resolution and type of the image are preserved.

  • blobDetector – blob detector instance performing the detection step

inline std::vector<dv::TimedKeyPoint> detect(const dv::EventStore &events, const cv::Rect &roi = cv::Rect(), const cv::Mat &mask = cv::Mat())

Detection step.

Parameters:
  • events – data used to create the accumulated image over which blob detection will be applied

  • roi – region in which blobs will be searched

  • mask – disables blob detections at coordinates where the mask has a zero pixel value.

Returns:

blobs found from blob detector

Public Static Functions

static inline cv::Ptr<cv::SimpleBlobDetector> defaultBlobDetector()

Create a reasonable default blob detector.

The method creates an instance of cv::SimpleBlobDetector with following parameter values:

  • filterByArea = true

  • minArea = 10 : minimum area of blobs to be detected - reasonable value to safely detect blobs and not noise in the accumulated image

  • maxArea = 10000

  • filterByCircularity = false

  • filterByConvexity = false

  • filterByInertia = false

Returns:

blob detector used by default to detect interesting blobs
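
A detection sketch, assuming events is a dv::EventStore from a 640x480 sensor; the dv::features namespace is an assumption based on the header path:

// Detect blobs on the full image plane using the default blob detector.
dv::features::EventBlobDetector detector(cv::Size(640, 480));

const std::vector<dv::TimedKeyPoint> blobs = detector.detect(events);
// Each keypoint holds a blob location in original image coordinates.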

Private Members

cv::Ptr<cv::SimpleBlobDetector> mBlobDetector

Blob detector instance performing the detection step

int32_t mPyramidLevel

Number of pyrDown applied to the accumulated image

std::function<void(cv::Mat&)> mPreprocessFcn

Preprocessing function to be applied before the detection step

dv::EdgeMapAccumulator mAccumulator

Accumulator generating the image used for blob detection

template<dv::concepts::EventToFrameConverter<dv::EventStore> AccumulatorType = dv::EdgeMapAccumulator>
class EventCombinedLKTracker : public dv::features::ImageFeatureLKTracker
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/event_combined_lk_tracker.hpp>

Implements an event combined Lucas-Kanade tracker. The algorithm detects and tracks features on a regular frame image, but to improve tracking quality, it accumulates intermediate frames from events, performs tracking on those frames and uses the output to predict the track locations on the regular frame.

Template Parameters:

AccumulatorType – Accumulator class to be used for frame generation.

Public Types

using SharedPtr = std::shared_ptr<EventCombinedLKTracker>
using UniquePtr = std::unique_ptr<EventCombinedLKTracker>

Public Functions

inline void accept(const dv::EventStore &store)

Add an event batch. The added batch should contain at least some events with timestamps later than that of the next image.

Parameters:

store – Batch of events.

inline const std::vector<std::vector<cv::Point2f>> &getEventTrackPoints() const

Get the intermediate tracking points on the event frames.

Returns:

A vector of tracked points on the intermediate frames.

inline const std::vector<dv::features::ImagePyramid> &getAccumulatedFrames() const

Get a vector containing the intermediate accumulated frames.

Returns:

A vector containing the intermediate accumulated frames.

inline dv::Duration getStoreTimeLimit() const

Get the event storage time limit.

Returns:

Duration of the event storage in microseconds.

inline void setStoreTimeLimit(const dv::Duration storeTimeLimit)

Set the event buffer storage duration limit.

Parameters:

storeTimeLimit – Storage duration limit in microseconds.

inline size_t getNumberOfEvents() const

Get the number of latest events that are going to be accumulated for each frame.

Returns:

Number of accumulated events.

inline void setNumberOfEvents(const size_t numberOfEvents)

Set the number of latest events that are going to be accumulated for each frame.

Parameters:

numberOfEvents – Number of accumulated events.

inline int getNumIntermediateFrames() const

Get the number of intermediate frames that are going to be generated.

Returns:

Number of intermediate frames between the frames.

inline void setNumIntermediateFrames(const int numIntermediateFrames)

Set the number of intermediate frames that are going to be generated.

Parameters:

numIntermediateFrames – Number of intermediate frames between the frames.

inline void setAccumulator(std::unique_ptr<AccumulatorType> accumulator)

Set an accumulator instance to be used for frame generation. If a nullptr is passed, the function will instantiate an accumulator with no parameters (defaults).

Parameters:

accumulator – An accumulator instance, can be nullptr to instantiate a default accumulator.

inline virtual void accept(const dv::measurements::Depth &timedDepth) override

Add scene depth, a median depth value of tracked landmarks usually works well enough.

Parameters:

timedDepth – Depth measurement value (pair of timestamp and measured depth)

inline virtual void accept(const kinematics::Transformationf &transform) override

Add a camera transformation, usually in the world coordinate frame (T_WC). The class only extracts the motion difference, so any other reference frame also works as long as reference frames are not mixed up.

Parameters:

transform – Camera pose represented by a transformation.

inline virtual void accept(const dv::Frame &image) override

Add an input image for the tracker. Image pyramid will be built from the given image.

Parameters:

image – Acquired image.

inline double getMinRateForIntermediateTracking() const

Get the minimum event rate that is required to perform intermediate tracking.

Returns:

Minimum event rate per second value.

inline void setMinRateForIntermediateTracking(const double minRateForIntermediateTracking)

Set a minimum event rate per second value that is required to perform intermediate tracking. If the event rate between the last and current frame is lower than this, the tracker assumes very little motion and does not perform intermediate tracking.

Parameters:

minRateForIntermediateTracking – Event rate (number of incoming events per second) required to perform intermediate tracking on accumulated frames.

inline virtual void setConstantDepth(const float depth) override

Set constant depth value that is assumed if no depth measurement is passed using accept(dv::measurements::Depth). By default the constant depth is assumed to be 3.0 meters, which is just a reasonable guess.

This value is propagated into the accumulator if it supports constant depth setting.

Parameters:

depth – Distance to the scene (depth).

Throws:

InvalidArgument – Exception is thrown if a negative depth value is passed.

Public Static Functions

static inline EventCombinedLKTracker::UniquePtr RegularTracker(const cv::Size &resolution, const Config &config = Config(), std::unique_ptr<AccumulatorType> accumulator = nullptr, ImagePyrFeatureDetector::UniquePtr detector = nullptr, RedetectionStrategy::UniquePtr redetection = nullptr)

Create a tracker instance that performs tracking of features on both event-accumulated and regular images. Tracking is performed by detecting and tracking features on a regular image. It also uses events to generate intermediate accumulated frames between the regular frames, tracks the features on them and uses the intermediate tracking results as feature position priors for the image frame.

Parameters:
  • resolution – Sensor resolution

  • config – Lucas-Kanade tracker configuration

  • accumulator – The accumulator instance to be used for intermediate frame accumulation. Uses dv::EdgeMapAccumulator with default parameters if nullptr is passed.

  • detector – Feature (corner) detector to be used. Uses cv::Fast with a threshold of 10 by default.

  • redetection – Feature redetection strategy. By default, features are redetected when the feature count falls below 0.5 of the maximum value.

Returns:

The tracker instance

static inline EventCombinedLKTracker::UniquePtr MotionAwareTracker(const camera::CameraGeometry::SharedPtr &camera, const Config &config = Config(), std::unique_ptr<AccumulatorType> accumulator = nullptr, kinematics::PixelMotionPredictor::UniquePtr motionPredictor = nullptr, ImagePyrFeatureDetector::UniquePtr detector = nullptr, RedetectionStrategy::UniquePtr redetection = nullptr)

Create a tracker instance that performs tracking of features on both event-accumulated and regular images. Tracking is performed by detecting and tracking features on a regular image. It also uses events to generate intermediate accumulated frames between the regular frames, tracks the features on them and uses the intermediate tracking results as feature position priors for the image frame. The implementation also uses camera motion and scene depth to motion-compensate events, so the intermediate accumulated frames are sharp and the Lucas-Kanade tracker works more accurately. This requires the camera sensor to be calibrated.

Parameters:
  • camera – Camera geometry class instance, containing the intrinsic calibration of the camera sensor.

  • config – Lucas-Kanade tracker configuration

  • accumulator – The accumulator instance to be used for intermediate frame accumulation. Uses dv::EdgeMapAccumulator with default parameters if nullptr is passed.

  • motionPredictor – Motion predictor class, by default it uses pixel reprojection dv::kinematics::PixelMotionPredictor without distortion model.

  • detector – Feature (corner) detector to be used. Uses cv::Fast with a threshold of 10 by default.

  • redetection – Feature redetection strategy. By default, features are redetected when the feature count falls below 0.5 of the maximum value.

Returns:

The tracker instance
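
A usage sketch for the combined tracker, assuming frame is a dv::Frame from the camera and events is a dv::EventStore covering the interval up to the next frame; the namespace, the runTracking() loop and the result fields follow the conventions shown elsewhere in this documentation, and the 640x480 resolution is arbitrary:

auto tracker = dv::features::EventCombinedLKTracker<>::RegularTracker(cv::Size(640, 480));

tracker->accept(frame);  // regular camera image (dv::Frame)
tracker->accept(events); // events covering the interval up to the next frame

while (auto result = tracker->runTracking()) {
    // result->keypoints holds the currently tracked feature locations
}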

Protected Functions

inline std::vector<cv::Point2f> trackIntermediateEvents()

Run the intermediate tracking on accumulated events. The lastFrameResults are modified if any of the intermediate tracks are lost. The predicted coordinates are returned which must match the indices of the keypoints in lastFrameResults keypoint list.

Returns:

Predicted feature track locations that correspond to modified lastFrameResults->keypoints vector.

inline virtual Result::SharedPtr track() override

Perform the tracking.

Returns:

Tracking result.

inline EventCombinedLKTracker(const cv::Size &resolution, const ImageFeatureLKTracker::Config &config)

Initialize the event combined Lucas-Kanade tracker and custom tracker parameters. It is going to use EdgeMapAccumulator with 15000 events and 0.25 event contribution. It will accumulate 3 intermediate frames from events to predict the track positions on regular frame.

Parameters:
  • resolution – Image resolution.

  • config – Image tracker configuration.

inline EventCombinedLKTracker(const camera::CameraGeometry::SharedPtr &camera, const ImageFeatureLKTracker::Config &config)

Initialize the event combined Lucas-Kanade tracker and custom tracker parameters. It is going to use EdgeMapAccumulator with 15000 events and 0.25 event contribution. It will accumulate 3 intermediate frames from events to predict the track positions on regular frame.

Parameters:
  • camera – Camera geometry.

  • config – Image tracker configuration.

Protected Attributes

std::unique_ptr<AccumulatorType> mAccumulator = nullptr
dv::Duration mStoreTimeLimit = dv::Duration(5000000)
size_t mNumberOfEvents = 20000
double mMinRateForIntermediateTracking = 0
int mNumIntermediateFrames = 3
dv::EventStore mEventBuffer
std::vector<dv::features::ImagePyramid> mAccumulatedFrames
std::vector<std::vector<cv::Point2f>> mEventTrackPoints
template<concepts::EventToFrameConverter<dv::EventStore> AccumulatorType = dv::EdgeMapAccumulator>
class EventFeatureLKTracker : public dv::features::ImageFeatureLKTracker
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/event_feature_lk_tracker.hpp>

Event-based Lucas-Kanade tracker, the tracking is achieved by accumulating frames and running the classic LK frame based tracker on them.

Since the batch of events might contain information for more than a single tracking iteration (configurable by the framerate parameter), the tracking function should be executed in a loop until it returns a null-pointer, signifying the end of available data:

tracker.accept(eventStore);
while (auto result = tracker.runTracking()) {
    // process the tracking result
}

Template Parameters:

AccumulatorType – Accumulator class to be used for frame generation.

Public Types

using SharedPtr = std::shared_ptr<EventFeatureLKTracker>
using UniquePtr = std::unique_ptr<EventFeatureLKTracker>

Public Functions

inline const cv::Mat &getAccumulatedFrame() const

Get the latest accumulated frame.

Returns:

An accumulated frame.

inline int getFramerate() const

Get configured framerate.

Returns:

Current accumulation and tracking framerate.

inline void setFramerate(int framerate)

Set the accumulation and tracking framerate.

Parameters:

framerate – New accumulation and tracking framerate.

inline void accept(const dv::EventStore &store)

Add the input events. Since the batch of events might contain information for more than a single tracking iteration (configurable by the framerate parameter), the tracking function should be executed in a loop until it returns a null-pointer, signifying the end of available data:

tracker.accept(eventStore);
while (auto result = tracker.runTracking()) {
    // process the tracking result
}

Parameters:

store – Event batch.

inline dv::Duration getStoreTimeLimit() const

Get the event storage time limit.

Returns:

Duration of the event storage in microseconds.

inline void setStoreTimeLimit(const dv::Duration storeTimeLimit)

Set the event buffer storage duration limit.

Parameters:

storeTimeLimit – Storage duration limit in microseconds.

inline size_t getNumberOfEvents() const

Get the number of latest events that are going to be accumulated for each frame. The default number of events is a third of the total pixels in the sensor.

Returns:

Number of events to be accumulated.

inline void setNumberOfEvents(size_t numberOfEvents)

Set the number of latest events that are going to be accumulated for each frame. The default number of events is a third of the total pixels in the sensor.

Parameters:

numberOfEvents – Number of accumulated events.

inline void setAccumulator(std::unique_ptr<AccumulatorType> accumulator)

Set an accumulator instance to be used for frame generation. If a nullptr is passed, the function will instantiate an accumulator with no parameters (defaults).

Parameters:

accumulator – An accumulator instance, can be nullptr to instantiate a default accumulator.

inline virtual void accept(const dv::measurements::Depth &timedDepth) override

Add scene depth, a median depth value of tracked landmarks usually works well enough.

Parameters:

timedDepth – Depth measurement value (pair of timestamp and measured depth)

inline virtual void accept(const kinematics::Transformationf &transform) override

Add a camera transformation, usually in the world coordinate frame (T_WC). The class only extracts the motion difference, so any other reference frame also works as long as reference frames are not mixed up.

Parameters:

transform – Camera pose represented by a transformation.

inline virtual void setConstantDepth(const float depth) override

Set constant depth value that is assumed if no depth measurement is passed using accept(dv::measurements::Depth). By default the constant depth is assumed to be 3.0 meters, which is just a reasonable guess.

This value is used for predicting feature track positions when no depth measurements are passed in and also is propagated into the accumulator if it supports constant depth setting.

Parameters:

depth – Distance to the scene (depth).

Throws:

InvalidArgument – Exception is thrown if a negative depth value is passed.

Public Static Functions

static inline EventFeatureLKTracker::UniquePtr RegularTracker(const cv::Size &resolution, const Config &config = Config(), std::unique_ptr<AccumulatorType> accumulator = nullptr, ImagePyrFeatureDetector::UniquePtr detector = nullptr, RedetectionStrategy::UniquePtr redetection = nullptr)

Create a tracker instance that performs tracking of features on event accumulated frames. Features are detected and tracked on event accumulated frames.

Parameters:
  • resolution – Sensor resolution

  • config – Lucas-Kanade tracker configuration

  • accumulator – The accumulator instance to be used for intermediate frame accumulation. Uses dv::EdgeMapAccumulator with default parameters if nullptr is passed.

  • detector – Feature (corner) detector to be used. Uses cv::Fast with a threshold of 10 by default.

  • redetection – Feature redetection strategy. By default, features are redetected when the feature count falls below 0.5 of the maximum value.

Returns:

The tracker instance

static inline EventFeatureLKTracker::UniquePtr MotionAwareTracker(const camera::CameraGeometry::SharedPtr &camera, const Config &config = Config(), std::unique_ptr<AccumulatorType> accumulator = nullptr, kinematics::PixelMotionPredictor::UniquePtr motionPredictor = nullptr, ImagePyrFeatureDetector::UniquePtr detector = nullptr, RedetectionStrategy::UniquePtr redetection = nullptr)

Create a tracker instance that performs tracking of features on event-accumulated frames. Features are detected and tracked on event-accumulated frames. Additionally, camera motion and scene depth are used to generate motion-compensated frames, which are considerably sharper than plain accumulated frames. This requires the camera sensor to be calibrated.

Parameters:
  • camera – Camera geometry class instance, containing the intrinsic calibration of the camera sensor.

  • config – Lucas-Kanade tracker configuration

  • accumulator – The accumulator instance to be used for intermediate frame accumulation. Uses dv::EdgeMapAccumulator with default parameters if nullptr is passed.

  • motionPredictor – Motion predictor class, by default it uses pixel reprojection dv::kinematics::PixelMotionPredictor without distortion model.

  • detector – Feature (corner) detector to be used. Uses cv::Fast with a threshold of 10 by default.

  • redetection – Feature redetection strategy. By default, features are redetected when the feature count falls below 0.5 of the maximum value.

Returns:

The tracker instance
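
Construction via the factory function, as a sketch (namespace assumed; feeding events and running the tracking loop follows the snippet in the class description above):

auto tracker = dv::features::EventFeatureLKTracker<>::RegularTracker(cv::Size(640, 480));

tracker->accept(events);
while (auto result = tracker->runTracking()) {
    // process the tracking result
}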

Protected Functions

inline virtual Result::SharedPtr track() override

Perform the tracking.

Returns:

Tracking result.

inline explicit EventFeatureLKTracker(const cv::Size &dimensions, const Config &config)

Initialize the event-frame tracker with the default configuration: all the defaults of ImageFeatureLKTracker and an EdgeMapAccumulator executing at 50 FPS with an event count equal to a third of the camera resolution and an event contribution of 0.25.

Parameters:
  • dimensions – Image resolution.

  • config – Lucas-Kanade tracker configuration.

inline explicit EventFeatureLKTracker(const dv::camera::CameraGeometry::SharedPtr &camera, const Config &config)

Initialize the event-frame tracker with the default configuration: all the defaults of ImageFeatureLKTracker and an EdgeMapAccumulator executing at 50 FPS with an event count equal to a third of the camera resolution and an event contribution of 0.25.

Parameters:
  • camera – Camera geometry.

  • config – Lucas-Kanade tracker configuration.

Protected Attributes

std::unique_ptr<AccumulatorType> mAccumulator = nullptr
int mFramerate = 50
int64_t mPeriod = 1000000 / mFramerate
int64_t mLastRunTimestamp = 0
dv::Duration mStoreTimeLimit = dv::Duration(5000000)
size_t mNumberOfEvents

The default number of events is a third of the total pixels in the sensor.

dv::EventStore mEventBuffer
cv::Mat mAccumulatedFrame

Private Functions

inline virtual void accept(const dv::Frame &image)

Add an input image for the tracker. Image pyramid will be built from the given image.

Parameters:

image – Acquired image.

inline virtual void accept(const dv::measurements::Depth &timedDepth)

Add scene depth, a median depth value of tracked landmarks usually works well enough.

Parameters:

timedDepth – Depth measurement value (pair of timestamp and measured depth)

inline virtual void accept(const dv::kinematics::Transformationf &transform)

Add a camera transformation, usually in the world coordinate frame (T_WC). The class only extracts the motion difference, so any other reference frame also works as long as reference frames are not mixed up.

Parameters:

transform – Camera pose represented by a transformation

template<dv::concepts::EventStorage EventStoreClass = dv::EventStore>
class EventFilterBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/filters.hpp>

A base class for noise filter implementations. It handles data input and output; derived classes only have to implement a retain function that tests whether an event should be retained or discarded.

Subclassed by dv::EventFilterChain< EventStoreClass >, dv::EventMaskFilter< EventStoreClass >, dv::EventPolarityFilter< EventStoreClass >, dv::EventRegionFilter< EventStoreClass >, dv::noise::BackgroundActivityNoiseFilter< EventStoreClass >, dv::noise::BaseFrequencyFilter< EventStoreClass >, dv::noise::FastDecayNoiseFilter< EventStoreClass >, dv::noise::KNoiseFilter< EventStoreClass >

Public Functions

virtual ~EventFilterBase() = default
inline void accept(const EventStoreClass &store)

Accepts incoming events.

Parameters:

store – Event packet.

virtual bool retain(const typename EventStoreClass::value_type &event) noexcept = 0

A function to be implemented by derived class which tests whether given event should be retained or discarded.

Parameters:

event – An event to be checked.

Returns:

Return true if the event is to be retained or false to discard the event.

inline EventStoreClass generateEvents()

Apply the filter algorithm and return only the filtered events from the ones that were accepted as input.

Returns:

Filtered events retained by this filter.

inline size_t getNumberOfIncomingEvents() const

Get number of total events that were accepted by the noise filter.

Returns:

Total number of incoming events to this filter instance.

inline size_t getNumberOfOutgoingEvents() const

Total number of outgoing events from this filter instance.

Returns:

Total number of outgoing events from this filter instance.

inline float getReductionFactor() const

Get the reduction factor of this filter. It is the fraction of incoming events that were discarded by this filter.

Returns:

Reduction factor value.

inline EventStoreClass &operator>>(EventStoreClass &out)

Retrieve filtered events using output stream operator.

Parameters:

out – Filtered events.

Returns:

Reference to the output event store.

Protected Attributes

EventStoreClass mBuffer = {}
int64_t mHighestProcessedTime = {-1}
size_t mNumberIncomingEvents = {0}
size_t mNumberOutgoingEvents = {0}
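
A sketch of a custom filter built on this base class; it assumes the stored event type exposes an x() coordinate accessor, and the 320-pixel threshold is purely illustrative:

// Retain only events on the right half of a 640-pixel-wide sensor.
class RightHalfFilter : public dv::EventFilterBase<dv::EventStore> {
public:
    bool retain(const dv::EventStore::value_type &event) noexcept override {
        return event.x() >= 320;
    }
};

// Usage: feed events in, read the surviving events out.
RightHalfFilter filter;
filter.accept(events);
const dv::EventStore filtered = filter.generateEvents();
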
template<dv::concepts::EventStorage EventStoreClass = dv::EventStore>
class EventFilterChain : public dv::EventFilterBase<dv::EventStore>
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/filters.hpp>

Event filter based on multiple event filter applied sequentially. Internally stores any added filters and runs them one after another.

Template Parameters:

EventStoreClass – Type of event store

Public Functions

inline void addFilter(std::shared_ptr<dv::EventFilterBase<EventStoreClass>> filter)

Add a filter to the chain of filtering.

Parameters:

filter – Filter instance to be added to the chain.

inline virtual bool retain(const typename EventStoreClass::value_type &event) noexcept override

Test whether the event is retained by every filter in the chain.

Parameters:

event – Event to be checked.

Returns:

True if the event is retained by every filter in the chain, false otherwise.

inline EventFilterChain &operator<<(const EventStoreClass &events)

Accept events using the input stream operator.

Parameters:

events – Input events.

Returns:

A reference to this EventFilterChain.

Protected Attributes

std::vector<std::shared_ptr<dv::EventFilterBase<EventStoreClass>>> filters
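
A chaining sketch, assuming events is a dv::EventStore from a 640x480 sensor:

// Keep only positive-polarity events inside the upper-left quadrant.
dv::EventFilterChain<> chain;
chain.addFilter(std::make_shared<dv::EventPolarityFilter<>>(true));
chain.addFilter(std::make_shared<dv::EventRegionFilter<>>(cv::Rect(0, 0, 320, 240)));

dv::EventStore filtered;
chain << events;   // accept the input events
chain >> filtered; // retrieve events that pass every filter in the chain
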
template<dv::concepts::EventStorage EventStoreClass = dv::EventStore>
class EventMaskFilter : public dv::EventFilterBase<dv::EventStore>

Public Functions

inline explicit EventMaskFilter(const cv::Mat &mask)

Create an event masking filter. Discards any events that happen on coordinates where mask has a zero value and retains all events with coordinates where mask has a non-zero value.

Parameters:

mask – The mask to be applied (requires CV_8UC1 type).

Throws:

InvalidArgument – Exception thrown if the mask is of incorrect type.

inline virtual bool retain(const typename EventStoreClass::value_type &event) noexcept override

A function to be implemented by derived class which tests whether given event should be retained or discarded.

Parameters:

event – An event to be checked.

Returns:

Return true if the event is to be retained or false to discard the event.

inline const cv::Mat &getMask() const

Get the mask that is currently applied.

Returns:

The currently applied mask.

inline void setMask(const cv::Mat &mask)

Set a new mask to this filter.

Parameters:

mask – The mask to be applied (requires CV_8UC1 type).

inline EventMaskFilter &operator<<(const EventStoreClass &events)

Accept events using the input stream operator.

Parameters:

events – Input events.

Returns:

A reference to this EventMaskFilter.

Protected Attributes

cv::Mat mMask
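
A masking sketch; the mask below is a hypothetical 640x480 CV_8UC1 image that keeps only events inside a central rectangle:

// Build a mask: zero everywhere, non-zero inside the region to keep.
cv::Mat mask = cv::Mat::zeros(480, 640, CV_8UC1);
cv::rectangle(mask, cv::Rect(220, 140, 200, 200), cv::Scalar(255), cv::FILLED);

dv::EventMaskFilter<> filter(mask);
filter << events;

dv::EventStore insideMask;
filter >> insideMask;
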
struct EventPacket : public flatbuffers::NativeTable

Public Types

typedef EventPacketFlatbuffer TableType

Public Functions

inline EventPacket()
inline EventPacket(const std::vector<Event> &_elements)

Public Members

std::vector<Event> elements

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()

Friends

inline friend std::ostream &operator<<(std::ostream &os, const EventPacket &packet)
struct EventPacketBuilder

Public Functions

inline void add_elements(flatbuffers::Offset<flatbuffers::Vector<const Event*>> elements)
inline explicit EventPacketBuilder(flatbuffers::FlatBufferBuilder &_fbb)
EventPacketBuilder &operator=(const EventPacketBuilder&)
inline flatbuffers::Offset<EventPacketFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct EventPacketFlatbuffer : private flatbuffers::Table

Public Types

typedef EventPacket NativeTableType

Public Functions

inline const flatbuffers::Vector<const Event*> *elements() const
inline bool Verify(flatbuffers::Verifier &verifier) const
inline EventPacket *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(EventPacket *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(EventPacket *_o, const EventPacketFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<EventPacketFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const EventPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)

Public Static Attributes

static constexpr const char *identifier = "EVTS"
template<dv::concepts::EventStorage EventStoreClass = dv::EventStore>
class EventPolarityFilter : public dv::EventFilterBase<dv::EventStore>
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/filters.hpp>

Event filter based on polarity.

Template Parameters:

EventStoreClass – Type of event store

Public Functions

inline explicit EventPolarityFilter(const bool polarity)

Construct an event filter which filters out only events of given polarity.

Parameters:

polarity – Extract events only of matching polarity.

inline bool getPolarity() const

Get the currently extracted polarity setting.

Returns:

The polarity value that is retained after filtering.

inline void setPolarity(const bool polarity)

Set a new polarity to extract (keep after filtering).

Parameters:

polarity – Extract events only of matching polarity.

inline virtual bool retain(const typename EventStoreClass::value_type &event) noexcept override

Test whether event is of configured polarity.

Parameters:

event – Event to be checked.

Returns:

True if event has the expected polarity, false otherwise.

inline EventPolarityFilter &operator<<(const EventStoreClass &events)

Accept events using the input stream operator.

Parameters:

events – Input events.

Returns:

A reference to this EventPolarityFilter.

Protected Attributes

bool mPolarity
template<dv::concepts::EventStorage EventStoreClass = dv::EventStore>
class EventRegionFilter : public dv::EventFilterBase<dv::EventStore>
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/filters.hpp>

Event filter that filters events based on a given ROI.

Template Parameters:

EventStoreClass – Type of event store

Public Functions

inline explicit EventRegionFilter(const cv::Rect region)

Filter event based on an ROI.

Parameters:

region – Region of interest, events outside of this region will be discarded.

inline cv::Rect getRegion() const

Get the currently applied region of interest.

Returns:

The currently applied region of interest.

inline void setRegion(const cv::Rect region)

Set a new region of interest for events.

Parameters:

region – Region of interest, events outside of this region will be discarded.

inline virtual bool retain(const typename EventStoreClass::value_type &event) noexcept override

Test whether event belongs to an ROI.

Parameters:

event – Event to be checked.

Returns:

True if event belongs to ROI, false otherwise.

inline EventRegionFilter &operator<<(const EventStoreClass &events)

Accept events using the input stream operator.

Parameters:

events – Input events.

Returns:

A reference to this EventRegionFilter.

Protected Attributes

cv::Rect mRegion
template<dv::concepts::AddressableEvent EventType>
class EventTimeComparator
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/core.hpp>

INTERNAL USE ONLY. Compares an event's timestamp against a given timestamp.

Public Functions

inline bool operator()(const EventType &evt, const int64_t time) const
inline bool operator()(const int64_t time, const EventType &evt) const
class EventVisualizer
#include </builds/inivation/dv/dv-processing/include/dv-processing/visualization/event_visualizer.hpp>

EventVisualizer implements a simple color-coded representation of events. It applies the configured colors at pixel locations where positive or negative polarity events are registered.

Public Functions

inline explicit EventVisualizer(const cv::Size &resolution, const cv::Scalar &backgroundColor = colors::white, const cv::Scalar &positiveColor = colors::iniBlue, const cv::Scalar &negativeColor = colors::darkGray)

Initialize event visualizer.

Parameters:
  • resolution – Resolution of incoming events.

  • backgroundColor – Background color.

  • positiveColor – Color applied to positive polarity events.

  • negativeColor – Color applied to negative polarity events.

inline cv::Mat generateImage(const dv::EventStore &events) const

Generate a preview image from an event store.

Parameters:

events – Input events.

Returns:

Colored preview image of given events.

inline void generateImage(const dv::EventStore &events, cv::Mat &background) const

Generate a preview image from an event store.

Parameters:
  • events – Input events.

  • background – Image to draw the events on. The pixels type has to be 3-channel 8-bit unsigned integer (BGR).

inline cv::Scalar getBackgroundColor() const

Get currently configured background color.

Returns:

Background color.

inline void setBackgroundColor(const cv::Scalar &backgroundColor_)

Set new background color.

Parameters:

backgroundColor_ – New background color.

inline cv::Scalar getPositiveColor() const

Get currently configured positive polarity color.

Returns:

Positive polarity color.

inline void setPositiveColor(const cv::Scalar &positiveColor_)

Set new positive polarity color.

Parameters:

positiveColor_ – New positive polarity color.

inline cv::Scalar getNegativeColor() const

Get negative polarity color.

Returns:

Negative polarity color.

inline void setNegativeColor(const cv::Scalar &negativeColor_)

Set new negative polarity color.

Parameters:

negativeColor_ – New negative polarity color.

Private Members

const cv::Size resolution
cv::Vec3b backgroundColor
cv::Vec3b positiveColor
cv::Vec3b negativeColor
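
A short sketch showing how the visualizer can render an event store into a BGR preview image; the resolution and the OpenCV display calls are illustrative only.

    #include <dv-processing/visualization/event_visualizer.hpp>

    #include <opencv2/highgui.hpp>

    // Render events into a color-coded preview image and display it (sketch).
    void showPreview(const dv::EventStore &events) {
        const dv::visualization::EventVisualizer visualizer(cv::Size(640, 480));
        const cv::Mat preview = visualizer.generateImage(events);
        cv::imshow("Events", preview);
        cv::waitKey(1);
    }
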
class Exception : public std::exception

Public Functions

inline explicit Exception(const std::source_location &location = std::source_location::current(), const boost::stacktrace::stacktrace &stacktrace = boost::stacktrace::stacktrace(), const std::string_view type = boost::core::demangle(typeid(Exception).name()))
inline explicit Exception(const std::string_view whatInfo, const std::source_location &location = std::source_location::current(), const boost::stacktrace::stacktrace &stacktrace = boost::stacktrace::stacktrace(), const std::string_view type = boost::core::demangle(typeid(Exception).name()))
~Exception() override = default
Exception(const Exception &other) = default
Exception(Exception &&other) = default
inline Exception operator<<(const std::string_view info)
inline const char *what() const noexcept override
inline const char *shortWhat() const noexcept

Protected Attributes

std::string mInfo
std::string mShortInfo

Private Functions

inline void createInfo(const std::string_view whatInfo, const std::string_view file, const std::string_view function, const uint32_t line, const std::string_view stacktrace, const std::string_view type)
template<typename EXCEPTION_TYPE, typename BASE_TYPE = Exception>
class Exception_ : public dv::exceptions::Exception

Public Types

using Info = typename EXCEPTION_TYPE::Info

Public Functions

template<internal::HasExtraExceptionInfo T = EXCEPTION_TYPE>
inline Exception_(const std::string_view whatInfo, const typename T::Info &errorInfo, const std::source_location &location = std::source_location::current(), const boost::stacktrace::stacktrace &stacktrace = boost::stacktrace::stacktrace(), const std::string_view type = boost::core::demangle(typeid(EXCEPTION_TYPE).name()))
template<internal::HasExtraExceptionInfo T = EXCEPTION_TYPE>
inline Exception_(const typename T::Info &errorInfo, const std::source_location &location = std::source_location::current(), const boost::stacktrace::stacktrace &stacktrace = boost::stacktrace::stacktrace(), const std::string_view type = boost::core::demangle(typeid(EXCEPTION_TYPE).name()))
inline Exception_(const std::string_view whatInfo, const std::source_location &location = std::source_location::current(), const boost::stacktrace::stacktrace &stacktrace = boost::stacktrace::stacktrace(), const std::string_view type = boost::core::demangle(typeid(EXCEPTION_TYPE).name()))
inline Exception_(const std::source_location &location = std::source_location::current(), const boost::stacktrace::stacktrace &stacktrace = boost::stacktrace::stacktrace(), const std::string_view type = boost::core::demangle(typeid(EXCEPTION_TYPE).name()))
~Exception_() override = default
Exception_(const Exception_ &other) = default
Exception_(Exception_ &&other) = default
template<internal::HasExtraExceptionInfo T = EXCEPTION_TYPE>
inline Exception_ operator<<(const typename T::Info &errorInfo)
inline Exception_ operator<<(const std::string_view whatInfo)
template<dv::concepts::EventStorage EventStoreClass = dv::EventStore>
class FastDecayNoiseFilter : public dv::EventFilterBase<dv::EventStore>

Public Functions

inline explicit FastDecayNoiseFilter(const cv::Size &resolution, const dv::Duration halfLife = dv::Duration(10'000), const uint32_t subdivisionFactor = 4, const float noiseThreshold = 6.f)

Create a fast decay noise filter. The filter performs a fast decay on a low resolution representation of the image and checks whether the neighbourhood corresponding to an event has recent activity.

Parameters:
  • resolution – Sensor resolution.

  • halfLife – Half-life is the amount of time it takes for the internal event counter to halve. Decreasing this will increase the strength of the noise filter (cause it to reject more events).

  • subdivisionFactor – Subdivision factor, used to calculate the dimensions of the low resolution image used for the fast decay operations.

  • noiseThreshold – Noise threshold value; the number of filtered events can be increased by decreasing this value.

inline virtual bool retain(const typename EventStoreClass::value_type &event) noexcept override

Test whether to retain this event.

Parameters:

event – Event to be checked.

Returns:

True to retain an event, false to discard it.

inline FastDecayNoiseFilter &operator<<(const EventStoreClass &events)

Accept events using the input stream operator.

Parameters:

events – Input events.

Returns:

Reference to this filter object.

inline float getNoiseThreshold() const

Get the currently configured noise threshold.

Returns:

Noise threshold value.

inline void setNoiseThreshold(const float noiseThreshold)

Set a new noise threshold value.

Parameters:

noiseThreshold – Noise threshold value.

inline dv::Duration getHalfLife() const

Get the current configured half-life value.

Half-life is the amount of time it takes for the internal event counter to halve. Decreasing this will increase the strength of the noise filter (cause it to reject more events).

Returns:

Currently configured event counter half life value.

inline void setHalfLife(const dv::Duration halfLife)

Set a new counter half-life value.

Half-life is the amount of time it takes for the internal event counter to halve. Decreasing this will increase the strength of the noise filter (cause it to reject more events).

Parameters:

halfLife – New event counter half life value.

Protected Attributes

uint32_t mSubdivisionFactor = 4
cv::Mat mDecayLUT
dv::TimeSurface mTimeSurface
float mNoiseThreshold = 6.f
float mHalfLifeMicros = 10'000.f
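
A hedged sketch of background-activity filtering with this class. The dv::noise namespace, the header path, and the generateEvents() call from the filter base are assumptions; the resolution and half-life values are arbitrary examples.

    #include <dv-processing/noise/fast_decay_noise_filter.hpp> // header path assumed

    // Remove isolated noise events using the fast decay filter (sketch).
    dv::EventStore denoise(const dv::EventStore &events) {
        dv::noise::FastDecayNoiseFilter<dv::EventStore> filter(
            cv::Size(640, 480), dv::Duration(10'000));
        filter.accept(events);
        return filter.generateEvents();
    }
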
class FeatureCountRedetection : public dv::features::RedetectionStrategy
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/redetection_strategy.hpp>

Redetection strategy based on number of features.

Public Functions

inline explicit FeatureCountRedetection(float minimumProportionOfTracks)

Redetection strategy based on number of features.

Parameters:

minimumProportionOfTracks – Feature count coefficient; redetection is performed when the feature count drops below the given proportion of the maximum number of tracks.

inline virtual bool decideRedetection(const TrackerBase &tracker) override

Check whether to perform redetection.

Parameters:

tracker – Current state of the tracker.

Returns:

True to perform redetection of features, false to continue.

Protected Attributes

float mMinimumProportionOfTracks = 0.5f
template<class InputType, dv::concepts::FeatureDetectorAlgorithm<InputType> Algorithm>
class FeatureDetector
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/feature_detector.hpp>

A base class to implement feature detectors on different input types, specifically either images, time surfaces, or event stores. The implementing class should override the detect function and output a vector of unordered features with a quality score. The API will handle margin calculations and post processing of the features.

Template Parameters:
  • InputType – The type of input that is needed for the detector.

  • Algorithm – The underlying detection algorithm; it can be a cv::Feature2D algorithm or a custom implementation, as long as it satisfies the dv::concepts::FeatureDetectorAlgorithm concept.

Public Types

enum class FeaturePostProcessing

Feature post processing step performed after the features were detected. Currently available types of post processing:

Values:

enumerator NONE
enumerator TOP_N
enumerator ADAPTIVE_NMS
using ThisType = FeatureDetector<InputType, Algorithm>
using SharedPtr = std::shared_ptr<ThisType>
using UniquePtr = std::unique_ptr<ThisType>
using AlgorithmPtr = typename std::conditional_t<std::is_base_of_v<cv::Feature2D, Algorithm>, cv::Ptr<Algorithm>, std::shared_ptr<Algorithm>>

Public Functions

inline FeatureDetector(const cv::Size &_imageDimensions, const AlgorithmPtr &_detector, const FeaturePostProcessing _postProcessing, const float _margin = 0.f)

Create a feature detector.

See also

FeatureDetectorBase::FeaturePostProcessing

Parameters:
  • _imageDimensions – Image dimensions.

  • _detector – The underlying detection algorithm instance.

  • _postProcessing – Post processing step applied to the detected features.

  • _margin – Margin coefficient, it will be multiplied by the width and height of the image to calculate an adaptive border alongside the edges of the image, where features should not be detected.

inline explicit FeatureDetector(const cv::Size &_imageDimensions, const AlgorithmPtr &_detector)

Create a feature detector. This constructor defaults post-processing step to AdaptiveNMS and margin coefficient value of 0.02.

Parameters:

_imageDimensions – Image dimensions.

virtual ~FeatureDetector() = default

Destructor

inline std::vector<dv::TimedKeyPoint> runDetection(const InputType &input, const size_t numPoints = FIND_ALL, const cv::Mat &mask = cv::Mat())

Public detection call. Calls the overloaded detect function, applies margin and post processing.

Parameters:
  • input – The input to the detector

  • numPoints – Number of keypoints to be detected

  • mask – Detection mask, detection will be performed where mask value is non-zero.

Returns:

A list of keypoints with timestamp.

inline void runRedetection(std::vector<dv::TimedKeyPoint> &prior, const InputType &input, const size_t numPoints = FIND_ALL, const cv::Mat &mask = cv::Mat())

Redetect new features and add them to already detected features. This function performs detection within masked region (if mask is non-empty), runs postprocessing and appends the additional features to the prior keypoint list.

Parameters:
  • prior – A list of existing features.

  • input – The input to the detector (events, images, etc.).

  • numPoints – Number of total features after detection.

  • mask – Detection mask.

inline FeaturePostProcessing getPostProcessing() const

Get the type of post-processing.

See also

FeatureDetectorBase::FeaturePostProcessing

Returns:

Type of post-processing.

inline void setPostProcessing(FeaturePostProcessing _postProcessing)

Set the type of post-processing.

See also

FeatureDetectorBase::FeaturePostProcessing

Parameters:

_postProcessing – Type of post-processing.

inline float getMargin() const

Get currently applied margin coefficient. Margin coefficient is multiplied by the width and height of the image to calculate an adaptive border alongside the edges of image, where features should not be detected.

Returns:

The margin coefficient.

inline void setMargin(float _margin)

Set the margin coefficient. Margin coefficient is multiplied by the width and height of the image to calculate an adaptive border alongside the edges of image, where features should not be detected.

Parameters:

_margin – The margin coefficient

inline bool isWithinROI(const cv::Point2f &point) const

Check whether a point belongs to the ROI without the margins.

Parameters:

point – Point to be checked

Returns:

True if point belongs to the valid ROI, false otherwise.

inline const cv::Size &getImageDimensions() const

Get configured image dimensions.

Returns:

Image dimensions.

Public Static Attributes

static constexpr size_t FIND_ALL = {std::numeric_limits<size_t>::max()}

Private Functions

inline std::vector<dv::TimedKeyPoint> detect(const InputType &input, const cv::Rect &roi, const cv::Mat &mask)

The detection function to be implemented for feature detection. It should return a list of keypoints with a quality score, but it should not be ordered in any way. The sorting will be performed by the runDetection function as a postprocessing step.

Parameters:
  • input – Input for the detector.

  • roi – Region of interest where detection should be performed, the region is estimated using the margin configuration value.

  • mask – Detection mask, can be empty. If non empty, the detection should be performed where mask value is non-zero.

Returns:

A list of keypoint features with timestamp.

inline cv::Rect getMarginROI() const

Calculate the region of interest with the margin coefficient. Margin is a coefficient of width / height, which should be used to ignore pixels near borders of the image.

Returns:

Region of interest for detection of features.

Private Members

FeaturePostProcessing postProcessing
float margin
cv::Size imageDimensions
cv::Rect roiBuffered
AlgorithmPtr detector

Container of the feature detector

int classIdCounter = 0

Class id counter; each new feature will be assigned an incremented class id.

KeyPointResampler resampler

Friends

inline friend std::ostream &operator<<(std::ostream &os, const FeaturePostProcessing &var)
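
A sketch of wiring an OpenCV Feature2D algorithm into this template. Whether cv::Mat qualifies as the image input type and whether cv::GFTTDetector satisfies the detector concept are assumptions; the corner count is arbitrary.

    #include <dv-processing/features/feature_detector.hpp>

    #include <opencv2/features2d.hpp>

    // Detect up to 200 corners on a grayscale image with an OpenCV detector (sketch).
    std::vector<dv::TimedKeyPoint> detectCorners(const cv::Mat &image) {
        auto gftt = cv::GFTTDetector::create(200);
        dv::features::FeatureDetector<cv::Mat, cv::GFTTDetector> detector(image.size(), gftt);
        return detector.runDetection(image, 200);
    }
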
class FeatureTracks
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/feature_tracks.hpp>

A class to store a time limited amount of feature tracks. Sorts and stores the data in separate queues for each track id. Provides visualize function to generate visualization images of the tracks.

Public Functions

inline void accept(const dv::TimedKeyPoint &keypoint)

Add a keypoint measurement into the feature track.

Parameters:

keypoint – Single keypoint measurement.

inline void accept(const dv::TimedKeyPointPacket &keypoints)

Add a set of keypoint measurements into the feature track.

Parameters:

keypoints – Vector of keypoint measurements.

inline void accept(const cv::KeyPoint &keypoint)

Add an OpenCV keypoint. Since it has no timestamp, the current system clock time is used as the timestamp.

Parameters:

keypoint – KeyPoint measurement.

inline void accept(const TrackerBase::Result::ConstPtr &trackingResult)

Add keypoint tracking result from a tracker.

Parameters:

trackingResult – Tracking results.

inline Duration getHistoryDuration() const

Retrieve the history duration.

Returns:

Currently applied track history time limit.

inline void setHistoryDuration(const dv::Duration historyDuration)

Set new history duration limit to buffer. If the new limit is shorter than the previously set, the tracks will be reduced to the new limit right away.

Parameters:

historyDuration – New time limit for the track history buffer.

inline std::optional<std::shared_ptr<const std::deque<dv::TimedKeyPoint>>> getTrack(const int32_t trackId) const

Retrieve a track of given track id.

Parameters:

trackId – Track id to retrieve.

Returns:

A pointer to feature track history, std::nullopt if unavailable.

inline std::vector<int32_t> getTrackIds() const

Return all track ids that are available in the buffer.

Returns:

A vector containing the track ids stored in the history buffer.

inline dv::TimedKeyPointPacket getLatestTrackKeypoints()

Return last keypoint from all tracks in the history.

Returns:

A keypoint packet containing the most recent keypoint of each track in the buffer.

inline void eachTrack(const std::function<void(const int32_t, const std::shared_ptr<const std::deque<dv::TimedKeyPoint>>&)> &callback) const

Run a callback function on each of the stored tracks.

Parameters:

callback – Callback function that is going to be called for each of the tracks, tracks are passed into the callback function as arguments.

inline cv::Mat visualize(const cv::Mat &background) const

Draws tracks on the input image; by default it uses the neon color palette from the dv::visualization::colors namespace for each of the tracks.

Parameters:

background – Background image to be used for tracks.

Throws:

InvalidArgument – An InvalidArgument exception is thrown if an empty image is passed as background.

Returns:

Input image with drawn colored feature tracks.

inline bool isEmpty() const

Checks whether the feature track history buffer is empty.

Returns:

True if there are no feature keypoints in the buffer.

inline void clear()

Deletes any data stored in feature track buffer and resets visualization image.

inline const std::optional<dv::Duration> &getTrackTimeout() const

Get the track timeout value.

See also

setTrackTimeout

Returns:

Current track timeout value.

inline void setTrackTimeout(const std::optional<dv::Duration> &trackTimeout)

Set the track timeout value; pass std::nullopt to disable this feature entirely. The latest timestamp of each track is compared against the highest timestamp received in the accept method, and if the difference exceeds this value the track is removed. This is useful to remove lost tracks without waiting for the history time limit to expire; consider setting it to 2x the tracking rate, so a track is removed if it is not updated for two consecutive frames.

By default the feature is disabled, so lost tracks are kept until they are removed by the history time limit.

Parameters:

trackTimeout – Track timeout value or std::nullopt to disable the feature.

inline int64_t getHighestTime()

Return latest time from all existing tracks.

Private Functions

inline void addKeypoint(const dv::TimedKeyPoint &keypoint)

Add a keypoint measurement

Parameters:

keypoint – Keypoint measurement

inline void maintainBufferDuration()

Check the whole buffer for out-of-limit data and remove any tracks that no longer contain any measurements.

Private Members

std::map<int32_t, std::shared_ptr<std::deque<dv::TimedKeyPoint>>> mHistory
dv::Duration mHistoryDuration = dv::Duration(500'000)
std::optional<dv::Duration> mTrackTimeout = std::nullopt
int64_t mHighestTime = -1
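
A sketch of feeding tracker results into the track buffer and drawing them; the history duration and the background image are placeholders, and the tracker result is assumed to come from one of the dv::features trackers.

    #include <dv-processing/features/feature_tracks.hpp>

    // Keep one second of track history and draw it over a background image (sketch).
    cv::Mat drawTracks(dv::features::FeatureTracks &tracks,
        const dv::features::TrackerBase::Result::ConstPtr &result, const cv::Mat &background) {
        tracks.setHistoryDuration(dv::Duration(1'000'000));
        tracks.accept(result);
        return tracks.visualize(background);
    }
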
struct FileDataDefinition : public flatbuffers::NativeTable

Public Types

typedef FileDataDefinitionFlatbuffer TableType

Public Functions

inline FileDataDefinition()
inline FileDataDefinition(int64_t _ByteOffset, const PacketHeader &_PacketInfo, int64_t _NumElements, int64_t _TimestampStart, int64_t _TimestampEnd)

Public Members

int64_t ByteOffset
PacketHeader PacketInfo
int64_t NumElements
int64_t TimestampStart
int64_t TimestampEnd

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()
struct FileDataDefinitionBuilder

Public Functions

inline void add_ByteOffset(int64_t ByteOffset)
inline void add_PacketInfo(const PacketHeader *PacketInfo)
inline void add_NumElements(int64_t NumElements)
inline void add_TimestampStart(int64_t TimestampStart)
inline void add_TimestampEnd(int64_t TimestampEnd)
inline explicit FileDataDefinitionBuilder(flatbuffers::FlatBufferBuilder &_fbb)
FileDataDefinitionBuilder &operator=(const FileDataDefinitionBuilder&)
inline flatbuffers::Offset<FileDataDefinitionFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct FileDataDefinitionFlatbuffer : private flatbuffers::Table

Public Types

typedef FileDataDefinition NativeTableType

Public Functions

inline int64_t ByteOffset() const
inline const PacketHeader *PacketInfo() const
inline int64_t NumElements() const
inline int64_t TimestampStart() const
inline int64_t TimestampEnd() const
inline bool Verify(flatbuffers::Verifier &verifier) const
inline FileDataDefinition *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(FileDataDefinition *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(FileDataDefinition *_o, const FileDataDefinitionFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<FileDataDefinitionFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const FileDataDefinition *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
struct FileDataTable : public flatbuffers::NativeTable

Public Types

typedef FileDataTableFlatbuffer TableType

Public Functions

inline FileDataTable()
inline FileDataTable(const std::vector<FileDataDefinition> &_Table)

Public Members

std::vector<FileDataDefinition> Table

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()
struct FileDataTableBuilder

Public Functions

inline void add_Table(flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<FileDataDefinitionFlatbuffer>>> Table)
inline explicit FileDataTableBuilder(flatbuffers::FlatBufferBuilder &_fbb)
FileDataTableBuilder &operator=(const FileDataTableBuilder&)
inline flatbuffers::Offset<FileDataTableFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct FileDataTableFlatbuffer : private flatbuffers::Table

Public Types

typedef FileDataTable NativeTableType

Public Functions

inline const flatbuffers::Vector<flatbuffers::Offset<FileDataDefinitionFlatbuffer>> *Table() const
inline bool Verify(flatbuffers::Verifier &verifier) const
inline FileDataTable *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(FileDataTable *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(FileDataTable *_o, const FileDataTableFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<FileDataTableFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const FileDataTable *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)

Public Static Attributes

static constexpr const char *identifier = "FTAB"
struct FileError

Public Types

using Info = std::filesystem::path

Public Static Functions

static inline std::string format(const Info &info)
struct FileInfo

Public Members

uint64_t mFileSize
dv::CompressionType mCompression
int64_t mDataTablePosition
int64_t mDataTableSize
dv::FileDataTable mDataTable
int64_t mTimeLowest
int64_t mTimeHighest
int64_t mTimeDifference
int64_t mTimeShift
std::vector<dv::io::Stream> mStreams
std::unordered_map<int32_t, dv::FileDataTable> mPerStreamDataTables
struct FileNotFound

Public Types

using Info = std::filesystem::path

Public Static Functions

static inline std::string format(const Info &info)
struct FileOpenError

Public Types

using Info = std::filesystem::path

Public Static Functions

static inline std::string format(const Info &info)
struct FileReadError

Public Types

using Info = std::filesystem::path

Public Static Functions

static inline std::string format(const Info &info)
struct FileWriteError

Public Types

using Info = std::filesystem::path

Public Static Functions

static inline std::string format(const Info &info)
template<>
struct formatter<cv::Mat> : public fmt::formatter<std::string_view>

Public Functions

inline auto format(const cv::Mat &var, fmt::format_context &ctx) const -> fmt::format_context::iterator
template<>
struct formatter<cv::Point> : public fmt::ostream_formatter
template<>
struct formatter<cv::Rect> : public fmt::ostream_formatter
template<>
struct formatter<cv::Size> : public fmt::ostream_formatter
template<>
struct formatter<dv::Accumulator::Decay> : public fmt::ostream_formatter
template<>
struct formatter<dv::BoundingBoxPacket> : public fmt::ostream_formatter
template<>
struct formatter<dv::camera::CameraGeometry::FunctionImplementation> : public fmt::ostream_formatter
template<>
struct formatter<dv::camera::DistortionModel> : public fmt::ostream_formatter
template<>
struct formatter<dv::camera::StereoGeometry::CameraPosition> : public fmt::ostream_formatter
template<>
struct formatter<dv::camera::StereoGeometry::FunctionImplementation> : public fmt::ostream_formatter
template<>
struct formatter<dv::DepthEventPacket> : public fmt::ostream_formatter
template<>
struct formatter<dv::DepthFrame> : public fmt::ostream_formatter
template<>
struct formatter<dv::EdgeMapAccumulator::DecayMode> : public fmt::ostream_formatter
template<>
struct formatter<dv::Event> : public fmt::ostream_formatter
template<>
struct formatter<dv::EventColor> : public fmt::ostream_formatter
template<>
struct formatter<dv::EventPacket> : public fmt::ostream_formatter
template<>
struct formatter<dv::EventStore> : public fmt::ostream_formatter
template<>
struct formatter<dv::Frame> : public fmt::ostream_formatter
template<>
struct formatter<dv::IMUPacket> : public fmt::ostream_formatter
template<>
struct formatter<dv::io::camera::CameraInputBase::Flatten> : public fmt::ostream_formatter
template<>
struct formatter<dv::io::camera::CameraModel> : public fmt::ostream_formatter
template<>
struct formatter<dv::io::camera::DVXplorer::ReadoutFPS> : public fmt::ostream_formatter
template<>
struct formatter<dv::io::camera::DVXplorer::SubSample> : public fmt::ostream_formatter
template<>
struct formatter<dv::io::camera::imu::ImuModel> : public fmt::ostream_formatter
template<>
struct formatter<dv::io::camera::parser::DAVIS::ColorMode> : public fmt::ostream_formatter
template<>
struct formatter<dv::io::camera::parser::DAVIS::SensorModel> : public fmt::ostream_formatter
template<>
struct formatter<dv::io::camera::USBDevice::LogLevel> : public fmt::ostream_formatter
template<>
struct formatter<dv::io::camera::USBDeviceType> : public fmt::ostream_formatter
template<>
class formatter<dv::io::support::VariantValueOwning>

Public Functions

inline constexpr auto parse(const format_parse_context &ctx)
template<typename FormatContext>
inline auto format(const dv::io::support::VariantValueOwning &obj, FormatContext &ctx) const

Private Members

std::array<char, FORMATTER_MAX_LEN> mFmtForward = {}

Private Static Attributes

static constexpr size_t FORMATTER_MAX_LEN = {32}
template<>
struct formatter<dv::LandmarksPacket> : public fmt::ostream_formatter
template<>
struct formatter<dv::noise::FrequencyFilterType> : public fmt::ostream_formatter
template<>
struct formatter<dv::PixelArrangement> : public fmt::ostream_formatter
template<>
struct formatter<dv::Pose> : public fmt::ostream_formatter
template<>
struct formatter<dv::TimedKeyPointPacket> : public fmt::ostream_formatter
template<>
struct formatter<dv::TimeSlicingApproach> : public fmt::ostream_formatter
template<>
struct formatter<dv::TriggerPacket> : public fmt::ostream_formatter
template<>
struct formatter<dv::visualization::PoseVisualizer::GridPlane> : public fmt::ostream_formatter
template<>
struct formatter<dv::visualization::PoseVisualizer::ViewMode> : public fmt::ostream_formatter
struct Frame : public flatbuffers::NativeTable

Public Types

typedef FrameFlatbuffer TableType

Public Functions

inline Frame()
inline Frame(int64_t _timestamp, int64_t _timestampStartOfFrame, int64_t _timestampEndOfFrame, int64_t _timestampStartOfExposure, int64_t _timestampEndOfExposure, FrameFormat _format, int16_t _sizeX, int16_t _sizeY, int16_t _positionX, int16_t _positionY, const std::vector<uint8_t> &_pixels)
inline Frame(int64_t _timestamp, int64_t _exposure, int16_t _positionX, int16_t _positionY, const cv::Mat &_image, dv::FrameSource _source)
inline Frame(int64_t _timestamp, const cv::Mat &_image)

Public Members

int64_t timestamp
int16_t positionX
int16_t positionY
cv::Mat image
dv::Duration exposure
FrameSource source

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()

Friends

inline friend std::ostream &operator<<(std::ostream &os, const Frame &frame)
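
A brief construction sketch using the cv::Mat convenience constructor listed above; dv::now() as a microsecond timestamp source and the header path are assumptions.

    #include <dv-processing/data/frame_base.hpp> // header path assumed

    // Wrap an OpenCV image into a dv::Frame stamped with the current time (sketch).
    dv::Frame makeFrame(const cv::Mat &image) {
        return dv::Frame(dv::now(), image);
    }
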
struct FrameBuilder

Public Functions

inline void add_timestamp(int64_t timestamp)
inline void add_timestampStartOfFrame(int64_t timestampStartOfFrame)
inline void add_timestampEndOfFrame(int64_t timestampEndOfFrame)
inline void add_timestampStartOfExposure(int64_t timestampStartOfExposure)
inline void add_timestampEndOfExposure(int64_t timestampEndOfExposure)
inline void add_format(FrameFormat format)
inline void add_sizeX(int16_t sizeX)
inline void add_sizeY(int16_t sizeY)
inline void add_positionX(int16_t positionX)
inline void add_positionY(int16_t positionY)
inline void add_pixels(flatbuffers::Offset<flatbuffers::Vector<uint8_t>> pixels)
inline void add_exposure(int64_t exposure)
inline void add_source(FrameSource source)
inline explicit FrameBuilder(flatbuffers::FlatBufferBuilder &_fbb)
FrameBuilder &operator=(const FrameBuilder&)
inline flatbuffers::Offset<FrameFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct FrameFlatbuffer : private flatbuffers::Table

Public Types

typedef Frame NativeTableType

Public Functions

inline int64_t timestamp() const

Central timestamp (µs), corresponds to exposure midpoint.

inline int64_t timestampStartOfFrame() const

Start of Frame (SOF) timestamp.

inline int64_t timestampEndOfFrame() const

End of Frame (EOF) timestamp.

inline int64_t timestampStartOfExposure() const

Start of Exposure (SOE) timestamp.

inline int64_t timestampEndOfExposure() const

End of Exposure (EOE) timestamp.

inline FrameFormat format() const

Pixel format (grayscale, RGB, …).

inline int16_t sizeX() const

X axis length in pixels.

inline int16_t sizeY() const

Y axis length in pixels.

inline int16_t positionX() const

X axis position (upper left offset) in pixels.

inline int16_t positionY() const

Y axis position (upper left offset) in pixels.

inline const flatbuffers::Vector<uint8_t> *pixels() const

Pixel values, 8bit depth.

inline int64_t exposure() const

Exposure duration.

inline FrameSource source() const

Source of the image data, whether it’s from sensor or from some form of event accumulation.

inline bool Verify(flatbuffers::Verifier &verifier) const
inline Frame *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(Frame *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(Frame *_o, const FrameFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<FrameFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const Frame *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)

Public Static Attributes

static constexpr const char *identifier = "FRME"
struct Gaussian

Public Static Functions

static inline float getSearchRadius(const float bandwidth)
static inline float apply(const float squaredDistance, const float bandwidth)
template<dv::concepts::EventStorage EventStoreClass = dv::EventStore>
class HighPassFilter : public dv::noise::BaseFrequencyFilter<dv::EventStore>
#include </builds/inivation/dv/dv-processing/include/dv-processing/noise/frequency_filters.hpp>

A high-pass event frequency filter. Discards events at a pixel location with a frequency below a given cutoff frequency.

Template Parameters:

EventStoreClass – Type of event store.

Public Functions

inline explicit HighPassFilter(const cv::Size &resolution, const float cutOffFrequency)

A high-pass event frequency filter. Discards events at a pixel location with a frequency below a given cutoff frequency.

Parameters:
  • resolution – Sensor resolution.

  • cutOffFrequency – Filter cutoff frequency. All events with a frequency below this given cutoff are discarded.

inline HighPassFilter &operator<<(const EventStoreClass &events)

Accept events using the input stream operator.

Parameters:

events – Input events.

Returns:

Reference to this filter object.

inline float getCutOffFrequency() const

Get the cutoff frequency for the frequency filter.

Returns:

Currently configured cutoff frequency.

inline void setCutOffFrequency(const float frequency)

Set a new cutoff frequency for the frequency filter.

Parameters:

frequency – New cutoff frequency value.
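
A short sketch of frequency-based filtering, assuming the class lives in the dv::noise namespace and inherits a generateEvents() call from the filter base; the 50 Hz cutoff and resolution are arbitrary examples.

    #include <dv-processing/noise/frequency_filters.hpp>

    // Keep only events occurring at pixel locations active above 50 Hz (sketch).
    dv::EventStore keepHighFrequency(const dv::EventStore &events) {
        dv::noise::HighPassFilter<dv::EventStore> filter(cv::Size(640, 480), 50.f);
        filter << events;
        return filter.generateEvents();
    }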

class ImageFeatureLKTracker : public dv::features::TrackerBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/image_feature_lk_tracker.hpp>

A feature based sparse Lucas-Kanade feature tracker based on image pyramids.

Subclassed by dv::features::EventCombinedLKTracker< AccumulatorType >, dv::features::EventFeatureLKTracker< AccumulatorType >

Public Types

using Config = LucasKanadeConfig
using SharedPtr = std::shared_ptr<ImageFeatureLKTracker>
using UniquePtr = std::unique_ptr<ImageFeatureLKTracker>

Public Functions

inline virtual void accept(const dv::Frame &image)

Add an input image for the tracker. Image pyramid will be built from the given image.

Parameters:

image – Acquired image.

inline void setRedetectionStrategy(RedetectionStrategy::UniquePtr redetectionStrategy)

Set a new redetection strategy.

Parameters:

redetectionStrategy – Redetection strategy instance.

inline void setDetector(ImagePyrFeatureDetector::UniquePtr detector)

Set a new feature (corner) detector. If a nullptr is passed, the function will instantiate a feature detector with no parameters (defaults).

Parameters:

detector – Feature detector instance.

inline void setMotionPredictor(kinematics::PixelMotionPredictor::UniquePtr predictor)

Set new pixel motion predictor instance. If a nullptr is passed, the function will instantiate a pixel motion predictor with no parameters (defaults).

Warning: motion prediction requires camera calibration to be set, otherwise the function will not instantiate the motion predictor.

Parameters:

predictor – Pixel motion predictor instance.

inline virtual void accept(const dv::measurements::Depth &timedDepth)

Add scene depth; a median depth value of tracked landmarks usually works well enough.

Parameters:

timedDepth – Depth measurement value (pair of timestamp and measured depth)

inline virtual void accept(const dv::kinematics::Transformationf &transform)

Add camera transformation, usually in the world coordinate frame (T_WC). The class only extracts the motion difference, so any other reference frame also works, as long as reference frames are not mixed up.

Parameters:

transform – Camera pose represented by a transformation

inline bool isLookbackRejectionEnabled() const

Check whether lookback is enabled.

Returns:

True if lookback rejection is enabled.

inline void setLookbackRejection(const bool lookbackRejection)

Enable or disable lookback rejection based on Forward-Backward error. Lookback rejection applies Lucas-Kanade tracking backwards after running the usual tracking and rejects any track that fails to track back to approximately the same location, measured by Euclidean distance. The Euclidean distance threshold for rejection can be set using the setRejectionDistanceThreshold method.

This is a real-time implementation of the method proposed by Kalal et al. 2010, which only performs the forward-backward error measurement within a single pair of latest and previous frames: http://kahlan.eps.surrey.ac.uk/featurespace/tld/Publications/2010_icpr.pdf

Parameters:

lookbackRejection – Pass true to enable lookback rejection based on Forward-Backward error.

inline float getRejectionDistanceThreshold() const

Get the current rejection distance threshold for the lookback rejection feature.

Returns:

Rejection distance value which represents the Euclidean distance in pixel space between backward tracked feature pose and initial feature position before performing forward tracking.

inline void setRejectionDistanceThreshold(const float rejectionDistanceThreshold)

Set the threshold for lookback rejection feature. This value is a maximum Euclidean distance value that is considered successful when performing backwards tracking check after forward tracking. If the backward tracked feature location is further away from initial position than this given value, the tracker will reject the track as a failed track. See method setLookbackRejection documentation for further explanation of the approach.

Parameters:

rejectionDistanceThreshold – Rejection distance threshold value.

inline float getConstantDepth() const

Get currently assumed constant depth value. It is used if no depth measurements are provided.

See also

setConstantDepth

Returns:

Currently used distance to the scene (depth).

inline virtual void setConstantDepth(const float depth)

Set constant depth value that is assumed if no depth measurement is passed using accept(dv::measurements::Depth). By default the constant depth is assumed to be 3.0 meters, which is just a reasonable guess.

This value is used for predicting feature track positions when no depth measurements are passed in.

Parameters:

depth – Distance to the scene (depth).

Throws:

InvalidArgument – Exception is thrown if a negative depth value is passed.

Public Static Functions

static inline ImageFeatureLKTracker::UniquePtr RegularTracker(const cv::Size &resolution, const Config &_config = Config(), ImagePyrFeatureDetector::UniquePtr detector = nullptr, RedetectionStrategy::UniquePtr redetection = nullptr)
static inline ImageFeatureLKTracker::UniquePtr MotionAwareTracker(const camera::CameraGeometry::SharedPtr &camera, const Config &config = Config(), kinematics::PixelMotionPredictor::UniquePtr motionPredictor = nullptr, ImagePyrFeatureDetector::UniquePtr detector = nullptr, RedetectionStrategy::UniquePtr redetection = nullptr)

Protected Functions

inline std::vector<cv::Point2f> predictNextPoints(const int64_t previousTime, const std::vector<cv::Point2f> &previousPoints, const int64_t nextTime)
inline virtual Result::SharedPtr track() override

Perform the LK tracking.

Returns:

Result of the tracking.

inline ImageFeatureLKTracker(const cv::Size &resolution, const Config &config)

Construct a tracker with default detector parameters, but configurable tracker parameters.

Parameters:
  • resolution – Image resolution.

  • config – Lucas-Kanade tracker parameters.

inline ImageFeatureLKTracker(const camera::CameraGeometry::SharedPtr &cameraGeometry, const Config &config)

Construct a tracker with default detector parameters, but configurable tracker parameters.

Parameters:
  • cameraGeometry – Camera geometry instance, which provides the image resolution and calibration.

  • config – Lucas-Kanade tracker parameters.

Protected Attributes

Config mConfig = {}
RedetectionStrategy::UniquePtr mRedetectionStrategy = nullptr
ImagePyrFeatureDetector::UniquePtr mDetector = nullptr
cv::Ptr<cv::SparsePyrLKOpticalFlow> mTracker
ImagePyramid::UniquePtr mPreviousFrame = nullptr
ImagePyramid::UniquePtr mCurrentFrame = nullptr
kinematics::PixelMotionPredictor::UniquePtr mPredictor = nullptr
std::unique_ptr<kinematics::LinearTransformerf> mTransformer = nullptr
std::map<int64_t, float> mDepthHistory
camera::CameraGeometry::SharedPtr mCamera = nullptr
cv::Size mResolution
bool mLookbackRejection = false
float mRejectionDistanceThreshold = 10.f
const int64_t depthHistoryDuration = 5000000
float constantDepth = 3.f
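
A sketch of a frame-based tracking loop built on the RegularTracker factory above. The runTracking() call and the keypoints field of the result come from the TrackerBase interface and are assumptions here; the frame container is illustrative.

    #include <dv-processing/features/image_feature_lk_tracker.hpp>

    #include <vector>

    // Track features across an incoming sequence of frames (sketch).
    void trackFrames(const std::vector<dv::Frame> &frames, const cv::Size &resolution) {
        auto tracker = dv::features::ImageFeatureLKTracker::RegularTracker(resolution);
        for (const auto &frame : frames) {
            tracker->accept(frame);
            if (const auto result = tracker->runTracking(); result != nullptr) {
                // result->keypoints holds the tracked dv::TimedKeyPoint measurements
            }
        }
    }
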
class ImagePyramid
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/image_pyramid.hpp>

Class that holds image pyramid layers with an according timestamp.

Public Types

typedef std::shared_ptr<ImagePyramid> SharedPtr
typedef std::unique_ptr<ImagePyramid> UniquePtr

Public Functions

inline ImagePyramid(int64_t timestamp_, const cv::Mat &image, const cv::Size &winSize, int maxPyrLevel)

Construct the image pyramid.

Parameters:
  • timestamp_ – Image timestamp.

  • image – Image values.

  • winSize – Window size for the search.

  • maxPyrLevel – Maximum pyramid layer id (zero-based).

inline ImagePyramid(const dv::Frame &frame, const cv::Size &winSize, int maxPyrLevel)

Construct the image pyramid.

Parameters:
  • frame – dv::Frame containing an image and timestamp.

  • winSize – Window size for the search.

  • maxPyrLevel – Maximum pyramid layer id (zero-based).

inline ImagePyramid(int64_t timestamp_, const cv::Mat &image)

Create a single layer image representation (no pyramid is going to be built).

Parameters:
  • timestamp_ – Image timestamp.

  • image – Image values.

Public Members

int64_t timestamp

Timestamp of the image pyramid.

std::vector<cv::Mat> pyramid

Pyramid layers of the image.

struct IMU : public flatbuffers::NativeTable

Public Types

typedef IMUFlatbuffer TableType

Public Functions

inline IMU()
inline IMU(int64_t _timestamp, float _temperature, float _accelerometerX, float _accelerometerY, float _accelerometerZ, float _gyroscopeX, float _gyroscopeY, float _gyroscopeZ, float _magnetometerX, float _magnetometerY, float _magnetometerZ)
inline Eigen::Vector3f getAccelerations() const

Get measured acceleration in m/s^2.

Returns:

Measured acceleration.

inline Eigen::Vector3f getAngularVelocities() const

Get measured angular velocities in rad/s.

Returns:

Measured angular velocities.

Public Members

int64_t timestamp
float temperature
float accelerometerX
float accelerometerY
float accelerometerZ
float gyroscopeX
float gyroscopeY
float gyroscopeZ
float magnetometerX
float magnetometerY
float magnetometerZ

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()
struct IMUBuilder

Public Functions

inline void add_timestamp(int64_t timestamp)
inline void add_temperature(float temperature)
inline void add_accelerometerX(float accelerometerX)
inline void add_accelerometerY(float accelerometerY)
inline void add_accelerometerZ(float accelerometerZ)
inline void add_gyroscopeX(float gyroscopeX)
inline void add_gyroscopeY(float gyroscopeY)
inline void add_gyroscopeZ(float gyroscopeZ)
inline void add_magnetometerX(float magnetometerX)
inline void add_magnetometerY(float magnetometerY)
inline void add_magnetometerZ(float magnetometerZ)
inline explicit IMUBuilder(flatbuffers::FlatBufferBuilder &_fbb)
IMUBuilder &operator=(const IMUBuilder&)
inline flatbuffers::Offset<IMUFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct IMUCalibration

Public Functions

IMUCalibration() = default
inline explicit IMUCalibration(const std::string_view name_, const float omegaMax_, const float accMax_, const cv::Point3f &omegaOffsetAvg_, const cv::Point3f &accOffsetAvg_, const float omegaOffsetVar_, const float accOffsetVar_, const float omegaNoiseDensity_, const float accNoiseDensity_, const float omegaNoiseRandomWalk_, const float accNoiseRandomWalk_, const int64_t timeOffsetMicros_, const dv::kinematics::Transformationf &transformationToC0_, const std::optional<Metadata> &metadata_)
inline explicit IMUCalibration(const boost::property_tree::ptree &tree)
inline boost::property_tree::ptree toPropertyTree() const
inline bool operator==(const IMUCalibration &rhs) const

Public Members

std::string name

Sensor name (e.g. “IMU_DVXplorer_DXA02137”)

float omegaMax = -1.f

Maximum (saturation) angular velocity of the gyroscope [rad/s].

float accMax = -1.f

Maximum (saturation) acceleration of the accelerometer [m/s^2].

cv::Point3f omegaOffsetAvg

Average offset (bias) of the angular velocity [rad/s].

cv::Point3f accOffsetAvg

Average offset (bias) of the acceleration [m/s^2].

float omegaOffsetVar = -1.f

Variance of the offset of the angular velocity [rad/s].

float accOffsetVar = -1.f

Variance of the offset of the acceleration [m/s^2].

float omegaNoiseDensity = -1.f

Noise density of the gyroscope [rad/s/sqrt(Hz)].

float accNoiseDensity = -1.f

Noise density of the accelerometer [m/s^2/sqrt(Hz)].

float omegaNoiseRandomWalk = -1.f

Noise random walk of the gyroscope [rad/s^2/sqrt(Hz)].

float accNoiseRandomWalk = -1.f

Noise random walk of the accelerometer [m/s^2/sqrt(Hz)].

int64_t timeOffsetMicros = -1

Offset between the camera and IMU timestamps in microseconds (t_correct = t_imu - offset)

dv::kinematics::Transformationf transformationToC0

Transformation converting points in IMU frame to C0 frame p_C0= T * p_IMU.

std::optional<Metadata> metadata

Metadata.

Friends

inline friend std::ostream &operator<<(std::ostream &os, const IMUCalibration &calibration)
struct IMUFlatbuffer : private flatbuffers::Table

Public Types

typedef IMU NativeTableType

Public Functions

inline int64_t timestamp() const

Timestamp (µs).

inline float temperature() const

Temperature, measured in °C.

inline float accelerometerX() const

Acceleration in the X axis, measured in g (9.81m/s²).

inline float accelerometerY() const

Acceleration in the Y axis, measured in g (9.81m/s²).

inline float accelerometerZ() const

Acceleration in the Z axis, measured in g (9.81m/s²).

inline float gyroscopeX() const

Rotation in the X axis, measured in °/s.

inline float gyroscopeY() const

Rotation in the Y axis, measured in °/s.

inline float gyroscopeZ() const

Rotation in the Z axis, measured in °/s.

inline float magnetometerX() const

Magnetometer X axis, measured in µT (magnetic flux density).

inline float magnetometerY() const

Magnetometer Y axis, measured in µT (magnetic flux density).

inline float magnetometerZ() const

Magnetometer Z axis, measured in µT (magnetic flux density).

inline bool Verify(flatbuffers::Verifier &verifier) const
inline IMU *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(IMU *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(IMU *_o, const IMUFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<IMUFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const IMU *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
struct IMUPacket : public flatbuffers::NativeTable

Public Types

typedef IMUPacketFlatbuffer TableType

Public Functions

inline IMUPacket()
inline IMUPacket(const std::vector<IMU> &_elements)

Public Members

std::vector<IMU> elements

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()

Friends

inline friend std::ostream &operator<<(std::ostream &os, const IMUPacket &packet)
struct IMUPacketBuilder

Public Functions

inline void add_elements(flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<IMUFlatbuffer>>> elements)
inline explicit IMUPacketBuilder(flatbuffers::FlatBufferBuilder &_fbb)
IMUPacketBuilder &operator=(const IMUPacketBuilder&)
inline flatbuffers::Offset<IMUPacketFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct IMUPacketFlatbuffer : private flatbuffers::Table

Public Types

typedef IMUPacket NativeTableType

Public Functions

inline const flatbuffers::Vector<flatbuffers::Offset<IMUFlatbuffer>> *elements() const
inline bool Verify(flatbuffers::Verifier &verifier) const
inline IMUPacket *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(IMUPacket *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(IMUPacket *_o, const IMUPacketFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<IMUPacketFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const IMUPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)

Public Static Attributes

static constexpr const char *identifier = "IMUS"
struct Info

Public Members

bool imageCompensated = false
bool depthAvailable = false
bool transformsAvailable = false
int64_t depthTime = -1LL
int64_t generationTime = -1LL
size_t inputEventCount = 0ULL
size_t accumulatedEventCount = 0ULL
class InputBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/input_base.hpp>

Camera input base class to abstract live camera and recorded files with a common interface.

Subclassed by dv::io::MonoCameraRecording, dv::io::NetworkReader, dv::io::camera::CameraInputBase

Public Functions

virtual ~InputBase() = default
virtual std::optional<dv::EventStore> getNextEventBatch() = 0

Parse and retrieve next event batch.

Returns:

Event batch or std::nullopt if no events were received since last read.

virtual std::optional<dv::Frame> getNextFrame() = 0

Parse and retrieve next frame.

Returns:

Frame or std::nullopt if no frames were received since last read.

virtual std::optional<std::vector<dv::IMU>> getNextImuBatch() = 0

Parse and retrieve next IMU data batch.

Returns:

IMU data batch or std::nullopt if no IMU data was received since last read.

virtual std::optional<std::vector<dv::Trigger>> getNextTriggerBatch() = 0

Parse and retrieve next trigger data batch.

Returns:

Trigger data batch or std::nullopt if no triggers were received since last read.

virtual std::optional<cv::Size> getEventResolution() const = 0

Get event stream resolution.

Returns:

Event stream resolution, std::nullopt if event stream is unavailable.

virtual std::optional<cv::Size> getFrameResolution() const = 0

Retrieve frame stream resolution.

Returns:

Frame stream resolution or std::nullopt if the frame stream is not available.

virtual bool isEventStreamAvailable() const = 0

Check whether event stream is available.

Returns:

True if event stream is available, false otherwise.

virtual bool isFrameStreamAvailable() const = 0

Check whether frame stream is available.

Returns:

True if frame stream is available, false otherwise.

virtual bool isImuStreamAvailable() const = 0

Check whether IMU data is available.

Returns:

True if IMU data stream is available, false otherwise.

virtual bool isTriggerStreamAvailable() const = 0

Check whether trigger data is available.

Returns:

True if trigger data stream is available, false otherwise.

virtual bool isStreamAvailable(std::string_view streamName) const = 0

Check whether a stream with given name is available.

Returns:

True if data stream is available, false otherwise.

virtual std::string getCameraName() const = 0

Get camera name, which is a combination of the camera model and the serial number.

Returns:

String containing the camera model and serial number separated by an underscore character.

virtual bool isRunning() const = 0

Check whether any input data streams have terminated. For a live camera this should check if the device is still connected and functioning, while for a recording file this should check if any of the data streams have reached end-of-file (EOF). For a network input, this indicates the network stream is still connected.

Returns:

True if data read on all streams is still possible, false otherwise.

virtual bool isRunning(std::string_view streamName) const = 0

Check whether the input data stream with the specified name is still active.

Returns:

True if data read on this stream is possible, false otherwise.

virtual bool isRunningAny() const = 0

Check whether any input data streams are still available. For a live camera this should check if the device is still connected and functioning and at least one data stream is active (different than isRunning()), while for a recording file this should check if any of the data streams have not yet reached end-of-file (EOF) and are still readable. For a network input, this indicates the network stream is still connected.

Returns:

True if data read on at least one stream is still possible, false otherwise.
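
A sketch of a polymorphic read loop over this interface. It assumes dv::io::MonoCameraRecording (listed above as a subclass) as the concrete input; the file path is purely illustrative.

    #include <dv-processing/io/mono_camera_recording.hpp>

    // Drain all available event batches from any InputBase implementation (sketch).
    void drainEvents(dv::io::InputBase &input) {
        while (input.isRunning()) {
            if (const auto events = input.getNextEventBatch(); events.has_value()) {
                // process events->size() events here
            }
        }
    }

    // Example usage with a recorded file (path is a placeholder):
    // dv::io::MonoCameraRecording recording("/path/to/recording.aedat4");
    // drainEvents(recording);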

struct InputError

Public Types

using Info = ErrorInfo

Public Static Functions

static inline std::string format(const Info &info)
template<class TYPE>
struct InvalidArgument

Public Types

using Info = TYPE

Public Static Functions

static inline std::string format(const Info &info)
class IODataBuffer

Public Functions

IODataBuffer() = default
inline dv::PacketHeader *getHeader()
inline const dv::PacketHeader *getHeader() const
inline flatbuffers::FlatBufferBuilder *getBuilder()
inline std::vector<std::byte> *getBuffer()
inline const std::byte *getData() const
inline size_t getDataSize() const
inline void switchToBuffer()

Private Members

dv::PacketHeader mHeader
std::vector<std::byte> mBuffer
flatbuffers::FlatBufferBuilder mBuilder = {INITIAL_SIZE}
bool mIsFlatBuffer = {true}

Private Static Attributes

static constexpr size_t INITIAL_SIZE = {64 * 1024}
struct IOError : public dv::exceptions::info::EmptyException
struct IOHeader : public flatbuffers::NativeTable

Public Types

typedef IOHeaderFlatbuffer TableType

Public Functions

inline IOHeader()
inline IOHeader(CompressionType _compression, int64_t _dataTablePosition, const std::string &_infoNode)

Public Members

CompressionType compression
int64_t dataTablePosition
std::string infoNode

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()
struct IOHeaderBuilder

Public Functions

inline void add_compression(CompressionType compression)
inline void add_dataTablePosition(int64_t dataTablePosition)
inline void add_infoNode(flatbuffers::Offset<flatbuffers::String> infoNode)
inline explicit IOHeaderBuilder(flatbuffers::FlatBufferBuilder &_fbb)
IOHeaderBuilder &operator=(const IOHeaderBuilder&)
inline flatbuffers::Offset<IOHeaderFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct IOHeaderFlatbuffer : private flatbuffers::Table

Public Types

typedef IOHeader NativeTableType

Public Functions

inline CompressionType compression() const
inline int64_t dataTablePosition() const
inline const flatbuffers::String *infoNode() const
inline bool Verify(flatbuffers::Verifier &verifier) const
inline IOHeader *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(IOHeader *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(IOHeader *_o, const IOHeaderFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<IOHeaderFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const IOHeader *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)

Public Static Attributes

static constexpr const char *identifier = "IOHE"
class IOStatistics

Public Functions

IOStatistics() = default
virtual ~IOStatistics() = default
IOStatistics(const IOStatistics &other) = delete
IOStatistics &operator=(const IOStatistics &other) = delete
IOStatistics(IOStatistics &&other) noexcept = default
IOStatistics &operator=(IOStatistics &&other) = default
virtual void publish() = 0
inline void addBytes(const uint64_t bytes)
inline void update(const uint64_t addedDataSize, const uint64_t addedPacketsNumber, const uint64_t addedPacketsElements, const uint64_t addedPacketsSize)

Protected Attributes

uint64_t mPacketsNumber = {0}
uint64_t mPacketsElements = {0}
uint64_t mPacketsSize = {0}
uint64_t mDataSize = {0}
template<typename T>
struct is_eigen_impl : public std::false_type
template<typename T, int... Is>
struct is_eigen_impl<Eigen::Matrix<T, Is...>> : public std::true_type
class KDTreeEventStoreAdaptor
#include </builds/inivation/dv/dv-processing/include/dv-processing/containers/kd_tree/event_store_adaptor.hpp>

Wrapper class around nanoflann::KDTree for dv::EventStore data, which provides efficient approximate nearest neighbour search as well as radius search.
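
A minimal usage sketch (not part of the generated reference; the core header path and the query coordinates are illustrative assumptions): build the adaptor over an existing EventStore and query neighbours around a pixel coordinate, using the knnSearch and radiusSearch overloads documented below.

   #include <dv-processing/containers/kd_tree/event_store_adaptor.hpp>
   #include <dv-processing/core/core.hpp>

   void neighbourQuery(const dv::EventStore &events) {
       // The adaptor only references `events`; the store must outlive the adaptor.
       const dv::containers::kd_tree::KDTreeEventStoreAdaptor tree(events);

       // Ten nearest events around pixel (50, 60); each result is a pair of
       // event pointer and distance, per the overloads documented below.
       const auto nearest = tree.knnSearch(50, 60, 10);

       // All events within a radius of 5 pixels around the same coordinate.
       const auto close = tree.radiusSearch(50, 60, 5);
   }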

Public Functions

inline KDTreeEventStoreAdaptor(const dv::EventStore &data, const uint32_t maxLeaves = 32768)

Constructor

Parameters:
  • data – The EventStore containing the event data. The data is neither copied nor otherwise managed; ownership remains with the user of this class.

  • maxLeaves – the maximum number of leaves for the KDTree. A smaller number typically increases the time used for construction of the tree, but may decrease the time used for searching it. A higher number typically does the opposite.

KDTreeEventStoreAdaptor() = delete
KDTreeEventStoreAdaptor(const KDTreeEventStoreAdaptor &other) = delete
KDTreeEventStoreAdaptor(KDTreeEventStoreAdaptor &&other) = delete
KDTreeEventStoreAdaptor &operator=(const KDTreeEventStoreAdaptor &other) = delete
KDTreeEventStoreAdaptor &operator=(KDTreeEventStoreAdaptor &&other) = delete
~KDTreeEventStoreAdaptor() = default
template<class T> requires … or dv::concepts::KeyPoint<T> inline auto knnSearch(const T &centrePoint, const size_t numClosest) const

Searches for the k nearest neighbours surrounding centrePoint.

Parameters:
  • centrePoint – The point for which the nearest neighbours are to be searched

  • numClosest – The number of neighbours to be searched (i.e. the parameter “k”)

Returns:

The found neighbours, as a vector of pairs, each containing a pointer to the neighbouring event and its distance to the centre point

inline std::vector<std::pair<const dv::Event*, int32_t>> knnSearch(const int32_t x, const int32_t y, const size_t numClosest) const

Searches for the k nearest neighbours surrounding centrePoint.

Parameters:
  • x – The x-coordinate of the centre point for which the nearest neighbours are to be searched

  • y – The y-coordinate of the centre point for which the nearest neighbours are to be searched

  • numClosest – The number of neighbours to be searched (i.e. the parameter “k”)

Returns:

The found neighbours, as a vector of pairs, each containing a pointer to the neighbouring event and its distance to the centre point

template<class T> requires … or dv::concepts::KeyPoint<T> inline auto radiusSearch(const T &centrePoint, const int16_t &radius, const float eps = 0.0f, const bool sorted = false) const

Searches for all neighbours surrounding centrePoint that are within a certain radius.

Parameters:
  • centrePoint – The point for which the nearest neighbours are to be searched

  • radius – The radius

  • eps – The search accuracy

  • sorted – True if the neighbours should be sorted with respect to their distance to centrePoint (comes with a significant performance impact)

Returns:

The found neighbours, as a vector of pairs, each containing a pointer to the neighbouring event and its distance to the centre point

inline std::vector<std::pair<const dv::Event*, int32_t>> radiusSearch(const int32_t x, const int32_t y, const int16_t &radius, const float eps = 0.0f, const bool sorted = false) const

Searches for all neighbours surrounding centrePoint that are within a certain radius.

Parameters:
  • x – The x-coordinate of the centre point for which the nearest neighbours are to be searched

  • y – The y-coordinate of the centre point for which the nearest neighbours are to be searched

  • radius – The radius

  • eps – The search accuracy

  • sorted – True if the neighbours should be sorted with respect to their distance to centrePoint (comes with a significant performance impact)

Returns:

The found neighbours, as a vector of pairs, each containing a pointer to the neighbouring event and its distance to the centre point

inline dv::EventStore::iterator begin() const noexcept

Returns an iterator to the beginning of the underlying EventStore

Returns:

an iterator to the beginning of the underlying EventStore

inline dv::EventStore::iterator end() const noexcept

Returns an iterator to the end of the EventStore

Returns:

an iterator to the end of the EventStore

inline const KDTreeEventStoreAdaptor &derived() const

Returns a reference to this object. Required by the nanoflann adaptors

Returns:

the reference to “this”

inline KDTreeEventStoreAdaptor &derived()

Returns a reference to this object. Required by the nanoflann adaptors

Returns:

the reference to “this”

inline uint32_t kdtree_get_point_count() const

Returns the point count of the event store. Required by the nanoflann adaptors

Returns:

the number of events in the underlying EventStore

inline int16_t kdtree_get_pt(const dv::Event *event, const size_t dim) const

Returns the dim’th dimension of an event. Required by the nanoflann adaptors

Returns:

the coordinate of the event along the dim’th dimension

template<class BBOX>
inline bool kdtree_get_bbox(BBOX&) const

Bounding box computation required by the nanoflann adaptors. As the nanoflann documentation allows this method to be left unimplemented and it is not needed here, it is left empty.

Returns:

false

Private Types

using Index = nanoflann::KDTreeSingleIndexNonContiguousIteratorAdaptor<nanoflann::metric_L2_Simple::traits<int32_t, KDTreeEventStoreAdaptor, const dv::Event*>::distance_t, KDTreeEventStoreAdaptor, 2, const dv::Event*>

Private Members

const dv::EventStore &mData
std::unique_ptr<Index> mIndex
template<typename TYPE, int32_t ROWS = Eigen::Dynamic, int32_t COLUMNS = Eigen::Dynamic, int32_t SAMPLE_ORDER = Eigen::ColMajor>
class KDTreeMatrixAdaptor
#include </builds/inivation/dv/dv-processing/include/dv-processing/containers/kd_tree/eigen_matrix_adaptor.hpp>

Wrapper class around nanoflann::KDTree for data contained in Eigen matrices, which provides efficient approximate nearest neighbour search as well as radius search.
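
A minimal usage sketch (the namespace is assumed to match the event-store adaptor, and knnSearch is assumed to return an index/distance pair of containers as described below; the matrix contents are illustrative): with the default SAMPLE_ORDER of Eigen::ColMajor, each column of the matrix is one sample.

   #include <dv-processing/containers/kd_tree/eigen_matrix_adaptor.hpp>

   #include <Eigen/Core>

   void matrixQuery() {
       // 2-dimensional samples, 100 of them, one sample per column.
       const Eigen::MatrixXf data = Eigen::MatrixXf::Random(2, 100);

       // The adaptor only references `data`; the matrix must outlive the adaptor.
       const dv::containers::kd_tree::KDTreeMatrixAdaptor<float> tree(data);

       Eigen::VectorXf query(2);
       query << 0.f, 0.f;

       // Indices and distances of the five samples closest to `query`.
       const auto [indices, distances] = tree.knnSearch(query, 5);

       // All samples within radius 0.5 of `query`, sorted by distance.
       const auto neighbours = tree.radiusSearch(query, 0.5f, 0.0f, true);
   }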

See also

Eigen::Dynamic

See also

Eigen::Dynamic

See also

Eigen::StorageOptions

Template Parameters:
  • TYPE – the underlying data type

  • ROWS – the number of rows in the data matrix. May be Eigen::Dynamic or >= 0.

  • COLUMNS – the number of columns in the data matrix. May be Eigen::Dynamic or >= 0.

  • SAMPLE_ORDER – the order in which samples are entered in the matrix.

Public Types

using Matrix = Eigen::Matrix<TYPE, ROWS, COLUMNS, STORAGE_ORDER>
using Vector = Eigen::Matrix<TYPE, SAMPLE_ORDER == Eigen::ColMajor ? ROWS : 1, SAMPLE_ORDER == Eigen::ColMajor ? 1 : COLUMNS, STORAGE_ORDER>

Public Functions

template<typename T, std::enable_if_t<std::is_same_v<T, Matrix>, bool> = false>
inline explicit KDTreeMatrixAdaptor(const T &data, const uint32_t maxLeaves = 32768)

Constructor

See also

MeanShift::Matrix

Template Parameters:

T – The matrix type. Must be of exact same type as MeanShift::Matrix, to avoid copy construction of a temporary variable and thereby creating dangling references.

Parameters:
  • data – The Matrix containing the data. The data is neither copied nor otherwise managed, ownership remains with the user of this class.

  • maxLeaves – the maximum number of leaves for the KDTree. A smaller number typically increases the time used for construction of the tree, but may decrease the time used for searching it. A higher number typically does the opposite.

KDTreeMatrixAdaptor() = delete
KDTreeMatrixAdaptor(const ThisType &other) = delete
KDTreeMatrixAdaptor(ThisType &&other) = delete
KDTreeMatrixAdaptor &operator=(const ThisType &other) = delete
KDTreeMatrixAdaptor &operator=(ThisType &&other) = delete
~KDTreeMatrixAdaptor() = default
inline auto knnSearch(const Vector &centrePoint, const size_t numClosest) const

Searches for the k nearest neighbours surrounding centrePoint.

Parameters:
  • centrePoint – The point for which the nearest neighbours are to be searched

  • numClosest – The number of neighbours to be searched (i.e. the parameter “k”)

Returns:

A pair containing the indices of the neighbours in the underlying matrix as well as the distances to centrePoint

inline auto radiusSearch(const Vector &centrePoint, const TYPE &radius, const float eps = 0.0f, const bool sorted = false) const

Searches for all neighbours surrounding centrePoint that are within a certain radius.

Parameters:
  • centrePoint – The point for which the nearest neighbours are to be searched

  • radius – The radius

  • eps – The search accuracy

  • sorted – True if the neighbours should be sorted with respect to their distance to centrePoint (comes with a significant performance impact)

Returns:

A vector of pairs containing the indices of the neighbours in the underlying matrix as well as the distances to centrePoint

inline auto getSample(const uint32_t index) const

Returns a sample at a given index

Parameters:

index – the index of the sample in mData

Returns:

the sample

Private Types

using ThisType = KDTreeMatrixAdaptor<TYPE, ROWS, COLUMNS, SAMPLE_ORDER>
using Tree = nanoflann::KDTreeEigenMatrixAdaptor<Matrix, SAMPLE_ORDER == Eigen::ColMajor ? ROWS : COLUMNS, nanoflann::metric_L2_Simple, SAMPLE_ORDER == Eigen::RowMajor>

Private Members

const uint32_t mNumSamples
const uint32_t mNumDimensions
std::unique_ptr<Tree> mTree

Private Static Attributes

static constexpr int32_t DIMS = SAMPLE_ORDER == Eigen::ColMajor ? ROWS : COLUMNS
static constexpr int32_t NOT_SAMPLE_ORDER = (SAMPLE_ORDER == Eigen::ColMajor ? Eigen::RowMajor : Eigen::ColMajor)
static constexpr int32_t STORAGE_ORDER = DIMS == 1 ? NOT_SAMPLE_ORDER : SAMPLE_ORDER
class KeyPointResampler
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/keypoint_resampler.hpp>

Create a feature resampler, which resamples given keypoints with a homogeneous distribution in pixel space.

Implementation was inspired by: https://github.com/BAILOOL/ANMS-Codes
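
A minimal usage sketch (the dv::features namespace and the dv::cvector<dv::TimedKeyPoint> container are assumptions for illustration): thin a dense set of keypoints down to roughly 100 points spread evenly over the image plane.

   #include <dv-processing/features/keypoint_resampler.hpp>

   dv::cvector<dv::TimedKeyPoint> thinOut(const dv::cvector<dv::TimedKeyPoint> &dense) {
       dv::features::KeyPointResampler resampler(cv::Size(640, 480));

       // Allow the output count to deviate by up to 20% from the requested 100.
       resampler.setTolerance(0.2f);

       return resampler.resample(dense, 100);
   }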

Public Functions

inline explicit KeyPointResampler(const cv::Size &resolution)

Initialize resampler with given resolution.

Parameters:

resolution – Image resolution

template<class KeyPointVectorType> requires … or dv::concepts::Coordinate2DMutableIterable<KeyPointVectorType> inline KeyPointVectorType resample(const KeyPointVectorType &keyPoints, const size_t numRetPoints)

Perform resampling on given keypoints.

See also

setTolerance

Parameters:
  • keyPoints – Prior keypoints.

  • numRetPoints – Number of expected keypoints; the exact number of output keypoints can vary within the configured tolerance value (see setTolerance).

Returns:

Resampled keypoints

inline float getTolerance() const

Get currently set tolerance for output keypoint count.

Returns:

Tolerance value

inline void setTolerance(const float tolerance)

Set a new output size tolerance value.

The algorithm searches for an optimal distance between keypoints so that the resulting vector contains the expected number of keypoints. This search is performed with a given tolerance, 0.1 by default, so the final resampled number of keypoints will be within +/-10% of the requested amount.

Parameters:

tolerance – Output keypoint amount tolerance value.

Protected Types

typedef std::pair<dv::Point2f, size_t> RangeValue

Protected Attributes

float mPreviousSolution = -1.f
float mRows
float mCols
float mTolerance = 0.1f
struct KMemCell

Memory cell used per row/column to store the latest event information. Note that, unlike in the original implementation, the number of bits per memory cell is not exactly 64 (an unnecessary bookkeeping bit was removed).

Public Members

int64_t mTimestamp
int16_t mOtherAddress
bool mPolarity
template<dv::concepts::EventStorage EventStoreClass = dv::EventStore>
class KNoiseFilter : public dv::EventFilterBase<dv::EventStore>
#include </builds/inivation/dv/dv-processing/include/dv-processing/noise/k_noise_filter.hpp>

Memory efficient, spatiotemporal event filter as proposed by Khodamoradi’s algorithm; “O(N)-Space Spatiotemporal Filter for Reducing Noise in Neuromorphic Vision Sensors”.

Template Parameters:

EventStoreClass – Type of event store to filter.

Public Functions

inline explicit KNoiseFilter(const cv::Size &resolution, const dv::Duration timeDelta = dv::Duration(2'000))

Construct a spatiotemporal event filter that discards an event if the number of spatiotemporal neighbors is less than #numSupportingPixels. The spatial window size is defined by #windowHalfSize, while the temporal window size is defined by #timeDelta.

Note that this is similar in functionality to the BackgroundActivityNoiseFilter, where #timeDelta corresponds to the backgroundActivityTime, and #windowHalfSize and #numSupportingPixels are by default set to 1; but more memory efficient due to only storing values per row/column.

Parameters:
  • resolution – Sensor resolution.

  • timeDelta – Size of the temporal window used to check the number of neighboring events.
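
A minimal sketch (the dv::noise namespace and the generateEvents() call, inherited from the event filter base class, are assumptions): filter an incoming event batch.

   #include <dv-processing/noise/k_noise_filter.hpp>

   dv::EventStore denoise(const cv::Size &resolution, const dv::EventStore &events) {
       // 2 ms temporal window, matching the constructor default.
       dv::noise::KNoiseFilter<> filter(resolution, dv::Duration(2'000));

       filter << events;                // accept the events into the filter
       return filter.generateEvents();  // only events with spatiotemporal support remain
   }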

inline KNoiseFilter &operator<<(const EventStoreClass &events)

Accept events using the input stream operator.

Parameters:

events – Input events.

Returns:

Reference to this filter instance.

inline virtual bool retain(const typename EventStoreClass::value_type &event) noexcept override

Test if the given event should be filtered out by checking if the number of neighboring events within the defined spatiotemporal window is greater than #mNumSupportingPixels.

Parameters:

event – Event to be checked.

Returns:

True to retain event, false to discard.

inline dv::Duration getTemporalWindowDuration() const

Get the duration of the temporal window used to check for neighboring events.

Returns:

Duration of the temporal window.

inline void setTemporalWindowDuration(const dv::Duration timeDelta)

Set the duration of the temporal window used to check for neighboring events.

Parameters:

timeDelta – New duration of the temporal window.

Private Functions

inline bool checkColumnSupport(const dv::Event event, const int8_t columnShift)

Check whether the last event received in the column given by event.x() + columnShift lies within a spatiotemporal window of the current event.

Parameters:
  • event – Event to be checked.

  • columnShift – Shift in the column position from the current event column.

Returns:

True if the last received event at the given column position is within a spatiotemporal window from the current event.

inline bool checkRowSupport(const dv::Event event, const int8_t rowShift)

Check whether the last event received in the row given by event.y() + rowShift lies within a spatiotemporal window of the current event.

Parameters:
  • event – Event to be checked.

  • rowShift – Shift in the row position from the current event row.

Returns:

True if the last received event at the given row position is within a spatiotemporal window from the current event.

inline bool doKNoiseLookup_unsafe(const dv::Event event)
inline bool doKNoiseLookup(const dv::Event event)

Private Members

cv::Size mResolutionLimits

Sensor resolution.

int64_t mTimeDelta

Temporal size of the spatiotemporal window (in us)

std::vector<KMemCell> mColumnCells

Vector containing the memory cells per row/column of the sensor resolution.

std::vector<KMemCell> mRowCells
struct Landmark : public flatbuffers::NativeTable

Public Types

typedef LandmarkFlatbuffer TableType

Public Functions

inline Landmark()
inline Landmark(const Point3f &_pt, int64_t _id, int64_t _timestamp, const std::vector<int8_t> &_descriptor, const std::string &_descriptorType, const std::vector<float> &_covariance, const std::vector<Observation> &_observations)

Public Members

Point3f pt
int64_t id
int64_t timestamp
std::vector<int8_t> descriptor
std::string descriptorType
std::vector<float> covariance
std::vector<Observation> observations

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()
struct LandmarkBuilder

Public Functions

inline void add_pt(const Point3f *pt)
inline void add_id(int64_t id)
inline void add_timestamp(int64_t timestamp)
inline void add_descriptor(flatbuffers::Offset<flatbuffers::Vector<int8_t>> descriptor)
inline void add_descriptorType(flatbuffers::Offset<flatbuffers::String> descriptorType)
inline void add_covariance(flatbuffers::Offset<flatbuffers::Vector<float>> covariance)
inline void add_observations(flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<ObservationFlatbuffer>>> observations)
inline explicit LandmarkBuilder(flatbuffers::FlatBufferBuilder &_fbb)
LandmarkBuilder &operator=(const LandmarkBuilder&)
inline flatbuffers::Offset<LandmarkFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct LandmarkFlatbuffer : private flatbuffers::Table

Public Types

typedef Landmark NativeTableType

Public Functions

inline const Point3f *pt() const

3D coordinate of the landmark.

inline int64_t id() const

Landmark id (if the keypoints need to be clustered by an object they belong to).

inline int64_t timestamp() const

Timestamp (µs).

inline const flatbuffers::Vector<int8_t> *descriptor() const

Visual descriptor of the landmark.

inline const flatbuffers::String *descriptorType() const

Type of the visual descriptor.

inline const flatbuffers::Vector<float> *covariance() const

Covariance matrix, must contain 9 numbers. It is represented as a 3x3 square matrix.

inline const flatbuffers::Vector<flatbuffers::Offset<ObservationFlatbuffer>> *observations() const

Observation info, can be from multiple cameras if they are matched using descriptor.

inline bool Verify(flatbuffers::Verifier &verifier) const
inline Landmark *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(Landmark *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(Landmark *_o, const LandmarkFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<LandmarkFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const Landmark *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
struct LandmarksPacket : public flatbuffers::NativeTable

Public Types

typedef LandmarksPacketFlatbuffer TableType

Public Functions

inline LandmarksPacket()
inline LandmarksPacket(const std::vector<Landmark> &_elements, const std::string &_referenceFrame)

Public Members

std::vector<Landmark> elements
std::string referenceFrame

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()

Friends

inline friend std::ostream &operator<<(std::ostream &os, const LandmarksPacket &packet)
struct LandmarksPacketBuilder

Public Functions

inline void add_elements(flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<LandmarkFlatbuffer>>> elements)
inline void add_referenceFrame(flatbuffers::Offset<flatbuffers::String> referenceFrame)
inline explicit LandmarksPacketBuilder(flatbuffers::FlatBufferBuilder &_fbb)
LandmarksPacketBuilder &operator=(const LandmarksPacketBuilder&)
inline flatbuffers::Offset<LandmarksPacketFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct LandmarksPacketFlatbuffer : private flatbuffers::Table

Public Types

typedef LandmarksPacket NativeTableType

Public Functions

inline const flatbuffers::Vector<flatbuffers::Offset<LandmarkFlatbuffer>> *elements() const
inline const flatbuffers::String *referenceFrame() const

Coordinate reference frame of the landmarks, “world” coordinate frame by default.

inline bool Verify(flatbuffers::Verifier &verifier) const
inline LandmarksPacket *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(LandmarksPacket *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(LandmarksPacket *_o, const LandmarksPacketFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<LandmarksPacketFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const LandmarksPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)

Public Static Attributes

static constexpr const char *identifier = "LMRS"
struct LengthError : public dv::exceptions::info::EmptyException
template<std::floating_point Scalar>
class LinearTransformer
#include </builds/inivation/dv/dv-processing/include/dv-processing/kinematics/linear_transformer.hpp>

A buffer containing 3D transformations with monotonically increasing timestamps, capable of timewise linear interpolation between the available transforms. Can be used with different underlying floating point types supported by Eigen.

Template Parameters:

Scalar – Underlying floating point number type - float or double.

Public Types

using iterator = typename TransformationBuffer::iterator
using const_iterator = typename TransformationBuffer::const_iterator

Public Functions

inline explicit LinearTransformer(size_t capacity)
inline void pushTransformation(const TransformationType &transformation)

Push a transformation into the transformation buffer.

Throws:

logic_error – exception when transformation is added out of order.

Parameters:

transformation – Transformation to be pushed; it must have an increasing timestamp compared to the latest transformation in the buffer, otherwise an exception will be thrown.
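
A minimal sketch (the dv::kinematics namespace and the Transformation constructor taking a timestamp, translation and rotation are assumptions for illustration): buffer two poses and query an interpolated pose between their timestamps.

   #include <dv-processing/kinematics/linear_transformer.hpp>

   void interpolatePose() {
       dv::kinematics::LinearTransformer<float> transformer(100);

       const Eigen::Quaternion<float> identity = Eigen::Quaternion<float>::Identity();

       // Timestamps must be strictly increasing, otherwise pushTransformation throws.
       transformer.pushTransformation(
           dv::kinematics::Transformation<float>(1'000'000, Eigen::Vector3f(0.f, 0.f, 0.f), identity));
       transformer.pushTransformation(
           dv::kinematics::Transformation<float>(2'000'000, Eigen::Vector3f(1.f, 0.f, 0.f), identity));

       // Linearly interpolated pose halfway between the two pushed transformations;
       // std::nullopt if the timestamp lies outside the buffered range.
       const auto pose = transformer.getTransformAt(1'500'000);
   }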

inline iterator begin()

Generate forward iterator pointing to first transformation in the transformer buffer.

Returns:

Buffer start iterator.

inline iterator end()

Generate an iterator representing end of the buffer.

Returns:

Buffer end iterator.

inline const_iterator cbegin() const

Generate a const forward iterator pointing to first transformation in the transformer buffer.

Returns:

Buffer start const-iterator.

inline const_iterator cend() const

Generate a const iterator representing end of the buffer.

Returns:

Buffer end const-iterator.

inline void clear()

Delete all transformations from the buffer.

inline bool empty() const

Check whether the buffer is empty.

Returns:

true if empty, false otherwise

inline std::optional<TransformationType> getTransformAt(int64_t timestamp) const

Get a transform at the given timestamp.

If no transform with the exact timestamp was pushed, estimates a transform assuming linear motion.

Parameters:

timestamp – Unix timestamp in microsecond format.

Returns:

Transformation if successful, std::nullopt otherwise.

inline bool isWithinTimeRange(int64_t timestamp) const

Checks whether the timestamp is within the range of transformations available in the buffer.

Parameters:

timestamp – Unix microsecond timestamp to be checked.

Returns:

true if the timestamp is within the range of transformations in the buffer.

inline size_t size() const

Return the size of the buffer.

Returns:

Number of transformations available in the buffer.

inline const TransformationType &latestTransformation() const

Return transformation with highest timestamp.

Returns:

Latest transformation in the buffer.

inline const TransformationType &earliestTransformation() const

Return transformation with lowest timestamp.

Returns:

Earliest transformation in time available in the buffer.

inline void setCapacity(size_t newCapacity)

Set new capacity, if the size of the buffer is larger than the newCapacity, oldest transformations from the start will be removed.

Parameters:

newCapacity – New transformation buffer capacity.

inline LinearTransformer<Scalar> getTransformsBetween(int64_t start, int64_t end) const

Extract transformations between two given timestamps. If the timestamps do not coincide exactly with available transformations, additional interpolated transformations are added so that the resulting transformer completely overlaps the given period (if that is possible).

Parameters:
  • start – Start Unix timestamp in microseconds.

  • end – End Unix timestamp in microseconds.

Returns:

LinearTransformer containing transformations covering the given period.

inline LinearTransformer<Scalar> resampleTransforms(const int64_t samplingInterval) const

Resample the contained transforms into a new transformer containing interpolated transforms at the given interval. The last transformation is included as well, although the interval might not be maintained for it.

Parameters:

samplingInterval – Interval in microseconds at which to resample the transformations.

Returns:

Generated transformer with exact capacity of output transformation count.

Private Types

using TransformationType = Transformation<Scalar>
using TransformationBuffer = boost::circular_buffer<TransformationType, Eigen::aligned_allocator<TransformationType>>

Private Functions

inline TransformationBuffer::const_iterator bufferLowerBound(int64_t t) const

Finds the lower bound iterator in the buffer.

See also

std::lower_bound

Parameters:

t – Unix timestamp in microseconds to search for.

Returns:

Iterator to the first transformation in the buffer whose timestamp is not less than (i.e. greater than or equal to) the given timestamp.

inline TransformationBuffer::const_iterator bufferUpperBound(int64_t t) const

Finds the upper bound iterator in the buffer.

See also

std::upper_bound

Parameters:

t – Unix timestamp in microseconds to search for.

Returns:

Iterator to the first transformation in the buffer whose timestamp is greater than the given timestamp, or the end iterator if none is available.

Private Members

TransformationBuffer mTransforms

Private Static Functions

static inline TransformationType interpolateComponentwise(const TransformationType &T_a, const TransformationType &T_b, const int64_t timestamp, Scalar lambda)

Perform linear interpolation between two transformations.

Parameters:
  • T_a – First transformation.

  • T_b – Second transformation.

  • timestamp – Interpolated transformation timestamp.

  • lambda – Relative position between the two transformations at which to interpolate.

Returns:

Interpolated transformation.

template<dv::concepts::EventStorage EventStoreClass = dv::EventStore>
class LowPassFilter : public dv::noise::BaseFrequencyFilter<dv::EventStore>
#include </builds/inivation/dv/dv-processing/include/dv-processing/noise/frequency_filters.hpp>

A low-pass event frequency filter. Discards events at a pixel location with a frequency above a given cutoff frequency. This is also commonly referred to as refractory period filtering, where the refractory period is the inverse of the cutoff frequency.

Template Parameters:

EventStoreClass – Type of event store.

Public Functions

inline explicit LowPassFilter(const cv::Size &resolution, const float cutOffFrequency)

A low-pass event frequency filter. Discards events at a pixel location with a frequency above a given cutoff frequency. This is also commonly referred to as refractory period filtering, where the refractory period is the inverse of #cutOffFrequency.

Parameters:
  • resolution – Sensor resolution.

  • cutOffFrequency – Filter cutoff frequency. All events with a frequency above this given cutoff are discarded.
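
A minimal sketch (same assumptions as for the other noise filters regarding namespace and generateEvents()): keep only events whose per-pixel rate stays below 100 Hz.

   #include <dv-processing/noise/frequency_filters.hpp>

   dv::EventStore refractoryFilter(const cv::Size &resolution, const dv::EventStore &events) {
       // Events firing faster than 100 Hz at a pixel are discarded,
       // i.e. a refractory period of 10 ms per pixel.
       dv::noise::LowPassFilter<> filter(resolution, 100.f);

       filter << events;
       return filter.generateEvents();
   }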

inline LowPassFilter &operator<<(const EventStoreClass &events)

Accept events using the input stream operator.

Parameters:

events – Input events.

Returns:

Reference to this filter instance.

inline float getCutOffFrequency() const

Get the cutoff frequency for the frequency filter.

Returns:

Currently configured cutoff frequency.

inline void setCutOffFrequency(const float frequency)

Set a new cutoff frequency for the frequency filter.

Parameters:

frequency – New cutoff frequency value.

struct LucasKanadeConfig
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/image_feature_lk_tracker.hpp>

Lucas-Kanade tracker configuration parameters.

Public Members

bool maskedFeatureDetect = true

Generate a mask which disables image regions where features are already successfully tracked.

double terminationEpsilon = 0.1

Tracking termination criteria for the LK tracker.

int numPyrLayers = 2

Total number of pyramid layers used by the LK tracker.

cv::Size searchWindowSize = cv::Size(24, 24)

Size of the search window around the tracked feature.
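
A minimal sketch (the dv::features namespace is assumed; the values are illustrative): adjust the configuration fields listed above before passing the config to an LK-based tracker.

   #include <dv-processing/features/image_feature_lk_tracker.hpp>

   dv::features::LucasKanadeConfig makeConfig() {
       dv::features::LucasKanadeConfig config;
       config.maskedFeatureDetect = true;              // mask regions with tracked features
       config.numPyrLayers        = 3;                 // more layers to handle larger motion
       config.searchWindowSize    = cv::Size(32, 32);  // wider search window
       config.terminationEpsilon  = 0.05;              // stricter termination criterion
       return config;
   }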

class Lz4CompressionSupport : public dv::io::compression::CompressionSupport

Public Functions

inline explicit Lz4CompressionSupport(const CompressionType type)
inline explicit Lz4CompressionSupport(const LZ4F_preferences_t &preferences)

LZ4 compression support with custom compression settings. Internally sets compression type to CompressionType::LZ4.

Parameters:

preferences – LZ4 compression settings.

inline virtual void compress(dv::io::support::IODataBuffer &packet) override

Private Members

std::shared_ptr<LZ4F_cctx_s> mContext
const LZ4F_preferences_t mPrefs
size_t mChunkSize
size_t mEndSize

Private Static Attributes

static constexpr size_t LZ4_COMPRESSION_CHUNK_SIZE = {64 * 1024}
static constexpr LZ4F_preferences_t lz4CompressionPreferences = {{LZ4F_max64KB, LZ4F_blockLinked, LZ4F_noContentChecksum, LZ4F_frame}, 0, 0,}
static constexpr LZ4F_preferences_t lz4HighCompressionPreferences = {{LZ4F_max64KB, LZ4F_blockLinked, LZ4F_noContentChecksum, LZ4F_frame}, 9, 0,}
class Lz4DecompressionSupport : public dv::io::compression::DecompressionSupport

Public Functions

inline explicit Lz4DecompressionSupport(const CompressionType type)
inline virtual void decompress(std::vector<std::byte> &src, std::vector<std::byte> &target) override

Private Functions

inline void initDecompressionContext()

Private Members

std::shared_ptr<LZ4F_dctx_s> mContext

Private Static Attributes

static constexpr size_t LZ4_DECOMPRESSION_CHUNK_SIZE = {64 * 1024}
class MapOfVariants : public std::unordered_map<std::string, InputType>
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/multi_stream_slicer.hpp>

Class that is passed to the slicer callback. It is an unordered map where the key is the configured stream name and the value is a variant holding that stream’s data. The class provides convenience methods to access and cast the underlying types.

Public Functions

template<class Type>
inline Type &get(const std::string &streamName)

Get a reference to the data packet of a given stream name.

Template Parameters:

Type – Type of data for the stream.

Parameters:

streamName – Stream name.

Returns:

Data packet cast to the given type.

template<class Type>
inline const Type &get(const std::string &streamName) const

Get a reference to the data packet of a given stream name.

Template Parameters:

Type – Type of data for the stream.

Parameters:

streamName – Stream name.

Returns:

Data packet cast to the given type.
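
A minimal sketch of how this map is typically consumed inside a dv::MultiStreamSlicer callback (the slicer construction, its method names and the stream names are assumptions for illustration, not part of this class):

   #include <dv-processing/core/multi_stream_slicer.hpp>

   void sliceStreams() {
       // Main stream carries events; an additional IMU stream is registered by name.
       dv::MultiStreamSlicer<dv::EventStore> slicer("events");
       slicer.addStream<dv::cvector<dv::IMU>>("imu");

       slicer.doEveryTimeInterval(dv::Duration(33'000), [](const auto &data) {
           // `data` is the MapOfVariants; get() casts the variant to the requested type.
           const dv::EventStore &events = data.template get<dv::EventStore>("events");
           const auto &imu              = data.template get<dv::cvector<dv::IMU>>("imu");
       });

       // Data is then fed into the slicer per stream name before slicing occurs.
   }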

template<typename TYPE, int32_t ROWS = Eigen::Dynamic, int32_t COLUMNS = Eigen::Dynamic, int32_t SAMPLE_ORDER = Eigen::ColMajor>
class MeanShiftEigenMatrixAdaptor
#include </builds/inivation/dv/dv-processing/include/dv-processing/cluster/mean_shift/eigen_matrix_adaptor.hpp>

This class implements the Mean Shift clustering algorithm.

As the Mean Shift algorithm performs a gradient ascent on an estimated probability density function, when applying it to integer data, which has a non-smooth probability density, the quality of the detected clusters depends significantly on the selected bandwidth hyperparameter, as well as on the underlying data and the selected kernel. Generally the Gaussian kernel yields better results for this kind of data; however, it comes with a bigger performance impact.

The Mean Shift algorithm is a nonparametric estimate of the modes of the underlying probability distribution of the data. It implements an iterative search, starting from points provided by the user, or randomly selected from the data points provided. For each iteration, the current estimate of the mode is replaced by an estimate of the mean value of the surrounding data samples. If the Epanechnikov kernel is used for the underlying density estimate, its so-called “shadow kernel”, the flat kernel, must be used for the estimate of the mean. This means that we can simply compute the average value of the data points that lie within a given radius around the current estimate of the mode, and use this as the next estimate. To provide an efficient search for the neighbours of the current mode estimate, a KD tree is used.

For the underlying theory, see “The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition” by K. Fukunaga and L. Hostetler, as well as “Mean shift, mode seeking, and clustering” by Yizong Cheng.

See also

Eigen::Dynamic

See also

Eigen::Dynamic

See also

Eigen::StorageOptions

Template Parameters:
  • TYPE – the underlying data type

  • ROWS – the number of rows in the data matrix. May be Eigen::Dynamic or >= 0.

  • COLUMNS – the number of columns in the data matrix. May be Eigen::Dynamic or >= 0.

  • SAMPLE_ORDER – the order in which samples are entered in the matrix.

Public Types

using Matrix = Eigen::Matrix<TYPE, ROWS, COLUMNS, STORAGE_ORDER>
using Vector = Eigen::Matrix<TYPE, SAMPLE_ORDER == Eigen::ColMajor ? ROWS : 1, SAMPLE_ORDER == Eigen::ColMajor ? 1 : COLUMNS, STORAGE_ORDER>
using VectorOfVectors = std::vector<Vector, Eigen::aligned_allocator<Vector>>

Public Functions

template<typename T, std::enable_if_t<std::is_same_v<T, Matrix>, bool> = false>
inline MeanShiftEigenMatrixAdaptor(const T &data, const TYPE bw, TYPE conv, const uint32_t maxIter, const VectorOfVectors &startingPoints, const uint32_t numLeaves = 32768)

Constructor

See also

MeanShift::Matrix

See also

dv::containers::KDTree

Template Parameters:

T – The matrix type. Must be of exact same type as MeanShift::Matrix, to avoid copy construction of a temporary variable and thereby creating dangling references.

Parameters:
  • data – The Matrix containing the data. The data is neither copied nor otherwise managed, ownership remains with the user of this class.

  • bw – The bandwidth used for the shift. This is a hyperparameter for the kernel. For the Epanechnikov kernel this means that all values within a radius of bw are averaged.

  • conv – For each starting point, the algorithm is stopped as soon as the absolute value of the shift is <= conv.

  • maxIter – The maximum number of iterations. Detected modes for which the number of iterations exceeds this value are not added to the detected clusters.

  • startingPoints – Points from which to start the search.

  • numLeaves – the maximum number of leaves for the KDTree.

template<typename T, std::enable_if_t<std::is_same_v<T, Matrix>, bool> = false>
inline MeanShiftEigenMatrixAdaptor(const T &data, const TYPE bw, TYPE conv, const uint32_t maxIter, VectorOfVectors &&startingPoints, const uint32_t numLeaves = 32768)

Constructor

See also

MeanShift::Matrix

See also

dv::containers::KDTree

Template Parameters:

T – The matrix type. Must be of exact same type as MeanShift::Matrix, to avoid copy construction of a temporary variable and thereby creating dangling references.

Parameters:
  • data – The Matrix containing the data. The data is neither copied nor otherwise managed, ownership remains with the user of this class.

  • bw – The bandwidth used for the shift. This is a hyperparameter for the kernel. For the Epanechnikov kernel this means that all values within a radius of bw are averaged.

  • conv – For each starting point, the algorithm is stopped as soon as the absolute value of the shift is <= conv.

  • maxIter – The maximum number of iterations. Detected modes for which the number of iterations exceeds this value are not added to the detected clusters.

  • startingPoints – Points from which to start the search.

  • numLeaves – the maximum number of leaves for the KDTree.

template<typename T, std::enable_if_t<std::is_same_v<T, Matrix>, bool> = false>
inline MeanShiftEigenMatrixAdaptor(const T &data, const TYPE bw, TYPE conv, const uint32_t maxIter, const uint32_t numStartingPoints, const uint32_t numLeaves = 32768)

Constructor

See also

MeanShift::Matrix

See also

dv::containers::KDTree

Template Parameters:

T – The matrix type. Must be of exact same type as MeanShift::Matrix, to avoid copy construction of a temporary variable and thereby creating dangling references.

Parameters:
  • data – The Matrix containing the data. The data is neither copied nor otherwise managed, ownership remains with the user of this class.

  • bw – The bandwidth used for the shift. This is a hyperparameter for the kernel. For the Epanechnikov kernel this means that all values within a radius of bw are averaged.

  • conv – For each starting point, the algorithm is stopped as soon as the absolute value of the shift is <= conv.

  • maxIter – The maximum number of iterations. Detected modes for which the number of iterations exceeds this value are not added to the detected clusters.

  • numStartingPoints – The number of points which are randomly selected from the data points, to be used as starting points.

  • numLeaves – the maximum number of leaves for the KDTree.
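
A minimal sketch (the dv::cluster::mean_shift namespace is assumed from the header path; fit() is assumed to return its four documented values as a tuple): cluster 2-D samples stored one per column.

   #include <dv-processing/cluster/mean_shift/eigen_matrix_adaptor.hpp>

   #include <Eigen/Core>

   void clusterMatrix() {
       // 2 dimensions, 500 samples, one sample per column (Eigen::ColMajor default).
       const Eigen::MatrixXf data = Eigen::MatrixXf::Random(2, 500);

       // Bandwidth 0.1, convergence threshold 0.01, at most 100 iterations,
       // 10 starting points drawn randomly from the data.
       dv::cluster::mean_shift::MeanShiftEigenMatrixAdaptor<float> meanShift(data, 0.1f, 0.01f, 100, 10u);

       // Epanechnikov kernel is the default.
       const auto [centres, labels, counts, variances] = meanShift.fit();
   }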

MeanShiftEigenMatrixAdaptor() = delete
MeanShiftEigenMatrixAdaptor(const ThisType &other) = delete
MeanShiftEigenMatrixAdaptor(ThisType &&other) = delete
MeanShiftEigenMatrixAdaptor &operator=(const ThisType &other) = delete
MeanShiftEigenMatrixAdaptor &operator=(ThisType &&other) = delete
~MeanShiftEigenMatrixAdaptor() = default
template<kernel::MeanShiftKernel kernel = kernel::Epanechnikov>
inline auto fit()

Executes the algorithm.

See also

MeanShiftKernel

Template Parameters:

kernel – the kernel to be used.

Returns:

The centres of each detected cluster

Returns:

The labels for each data point. The labels correspond to the index of the centre to which the sample is assigned.

Returns:

The number of samples in each cluster

Returns:

The in-cluster variance for each cluster

Public Static Functions

static inline VectorOfVectors generateStartingPointsFromData(const uint32_t numStartingPoints, const Matrix &data)

Generates a vector of vectors containing the starting points by randomly selecting from provided data.

Parameters:
  • numStartingPoints – The number of points to be generated.

  • data – the matrix to select the starting points from.

Returns:

The vector of vectors containing the starting points.

static inline VectorOfVectors generateStartingPointsFromRange(const uint32_t numStartingPoints, const std::vector<std::pair<TYPE, TYPE>> &ranges)

Generates a vector of vectors containing the starting points by generating random points within a given range for each dimension

Parameters:
  • numStartingPoints – The number of points to be generated

  • ranges – a vector containing one range per dimension. Each dimension is represented by a pair containing the beginning and the end of the range

Returns:

The vector of vectors containing the starting points.

Private Functions

template<kernel::MeanShiftKernel kernel>
inline auto findClusterCentres()

Performs the search for the cluster centres for each given starting point. A detected centre is added to the set of centres if it isn’t closer than the bandwidth to any previously detected centre.

See also

MeanShiftKernel

Template Parameters:

kernel – the kernel to be used.

Returns:

The centres of each detected cluster

inline auto assignClusters(const VectorOfVectors &clusterCentres)

Assigns the data samples to a cluster by means of a nearest neighbour search, and computes the number of samples as well as the in-cluster variance in the process.

Parameters:

clusterCentres – The centres of each detected cluster

Returns:

The labels for each data point. The labels correspond to the index of the centre to which the sample is assigned.

Returns:

The number of samples in each cluster

Returns:

The in-cluster variance for each cluster

template<kernel::MeanShiftKernel kernel>
inline std::optional<Vector> performShift(Vector currentMode)

Performs a search for a mode in the underlying density starting off with a provided initial point.

See also

MeanShiftKernel

Template Parameters:

kernel – the kernel to be used.

Parameters:

currentMode – The starting point that is to be shifted until convergence.

Returns:

An std::optional containing a vector if the search has converged, or std::nullopt otherwise

template<kernel::MeanShiftKernel kernel>
inline float applyKernel(const float squaredDistance) const

Applies the selected kernel to the squared distance

See also

MeanShiftKernel

Template Parameters:

kernel – the kernel to be used.

Parameters:

squaredDistance – the squared distance between the current mode estimate and a given sample point

Returns:

the kernel value

template<kernel::MeanShiftKernel kernel>
inline auto getNeighbours(const Vector &currentMode)

Returns the neighbours surrounding a centre

See also

MeanShiftKernel

Template Parameters:

kernel – the kernel to be used.

Parameters:

centre – the centre surrounding which the neighbours are to be found

Returns:

the neighbours, as a vector of pairs, one pair per neighbour, containing the index of the point in the data matrix and the distance to the centre

inline auto getSample(const uint32_t index) const

Returns a sample at a given index

Parameters:

index – the index of the sample in mData

Returns:

the sample

inline Vector getZeroVector() const
Returns:

a zero vector of length mNumDimensions

Private Members

const size_t mNumSamples
const size_t mNumDimensions
KDTree mData
const TYPE mBandwidth
const uint32_t mMaxIter
const TYPE mConvergence
VectorOfVectors mStartingPoints

Private Static Functions

template<typename T>
static inline auto randomArrayBetween(const uint32_t length, const T begin, const T end)

Generate an array of random values within a given range and a given length

Template Parameters:

T – The data type

Parameters:
  • length – The length of the array

  • begin – The minimum value contained in the array

  • end – The maximum value contained in the array

Returns:

The array

static inline auto extractSample(const Matrix &data, const uint32_t index)

Returns a sample at a given index

Parameters:
  • data – the data to extract the sample from

  • index – the index of the sample in mData

Returns:

the sample

static inline Vector getZeroVector(uint32_t numDimensions)
Returns:

a zero vector of length mNumDimensions

Private Static Attributes

static constexpr int32_t DIMS = SAMPLE_ORDER == Eigen::ColMajor ? ROWS : COLUMNS
static constexpr int32_t NOT_SAMPLE_ORDER = (SAMPLE_ORDER == Eigen::ColMajor ? Eigen::RowMajor : Eigen::ColMajor)
static constexpr int32_t STORAGE_ORDER = DIMS == 1 ? NOT_SAMPLE_ORDER : SAMPLE_ORDER
class MeanShiftEventStoreAdaptor
#include </builds/inivation/dv/dv-processing/include/dv-processing/cluster/mean_shift/event_store_adaptor.hpp>

This class implements the Mean Shift clustering algorithm with an Epanechnikov Kernel for event store data.

As event data has a non-smooth probability density in x and y space, and the Mean Shift algorithm performs a gradient ascent, the quality of the detected clusters depends significantly on the selected bandwidth hyperparameter, as well as on the underlying data and the selected kernel. Generally the Gaussian kernel yields better results for this kind of data; however, it comes with a bigger performance impact.

The Mean Shift algorithm is a nonparametric estimate of the modes of the underlying probability distribution of the data. It implements an iterative search, starting from points provided by the user, or randomly selected from the data points provided. For each iteration, the current estimate of the mode is replaced by an estimate of the mean value of the surrounding data samples. If the Epanechnikov kernel is used for the underlying density estimate, its so-called “shadow kernel”, the flat kernel, must be used for the estimate of the mean. This means that we can simply compute the average value of the data points that lie within a given radius around the current estimate of the mode, and use this as the next estimate. To provide an efficient search for the neighbours of the current mode estimate, a KD tree is used.

For the underlying theory, see “The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition” by K. Fukunaga and L. Hostetler, as well as “Mean shift, mode seeking, and clustering” by Yizong Cheng.

Public Types

using Vector = dv::TimedKeyPoint
using VectorOfVectors = std::vector<Vector, Eigen::aligned_allocator<Vector>>

Public Functions

inline MeanShiftEventStoreAdaptor(const dv::EventStore &data, const int16_t bw, float conv, const uint32_t maxIter, const VectorOfVectors &startingPoints, const uint32_t numLeaves = 32768)

Constructor

See also

dv::containers::KDTree

Parameters:
  • data – The EventStore containing the event data. The data is neither copied nor otherwise managed; ownership remains with the user of this class.

  • bw – The bandwidth used for the shift. This is a hyperparameter for the kernel. For the Epanechnikov kernel this means that all values within a radius of bw are averaged.

  • conv – For each starting point, the algorithm is stopped as soon as the absolute value of the shift is <= conv.

  • maxIter – The maximum number of iterations. Detected modes for which the number of iterations exceeds this value are not added to the detected clusters.

  • startingPoints – Points from which to start the search.

  • numLeaves – the maximum number of leaves for the KDTree.

inline MeanShiftEventStoreAdaptor(const dv::EventStore &data, const int16_t bw, float conv, const uint32_t maxIter, VectorOfVectors &&startingPoints, const uint32_t numLeaves = 32768)

Constructor

See also

dv::containers::KDTree

Parameters:
  • data – The EventStore containing the event data. The data is neither copied nor otherwise managed; ownership remains with the user of this class.

  • bw – The bandwidth used for the shift. This is a hyperparameter for the kernel. For the Epanechnikov kernel this means that all values within a radius of bw are averaged.

  • conv – For each starting point, the algorithm is stopped as soon as the absolute value of the shift is <= conv.

  • maxIter – The maximum number of iterations. Detected modes for which the number of iterations exceeds this value are not added to the detected clusters.

  • startingPoints – Points from which to start the search.

  • numLeaves – the maximum number of leaves for the KDTree.

inline MeanShiftEventStoreAdaptor(const dv::EventStore &data, const int16_t bw, float conv, const uint32_t maxIter, const uint32_t numStartingPoints, const uint32_t numLeaves = 32768)

Constructor

See also

dv::containers::KDTree

Parameters:
  • data – The EventStore containing the event data. The data is neither copied nor otherwise managed; ownership remains with the user of this class.

  • bw – The bandwidth used for the shift. This is a hyperparameter for the kernel. For the Epanechnikov kernel this means that all values within a radius of bw are averaged.

  • conv – For each starting point, the algorithm is stopped as soon as the absolute value of the shift is <= conv.

  • maxIter – The maximum number of iterations. Detected modes for which the number of iterations exceeds this value are not added to the detected clusters.

  • numStartingPoints – The number of points which are randomly selected from the data points, to be used as starting points.

  • numLeaves – the maximum number of leaves for the KDTree.

MeanShiftEventStoreAdaptor() = delete
MeanShiftEventStoreAdaptor(const MeanShiftEventStoreAdaptor &other) = delete
MeanShiftEventStoreAdaptor(MeanShiftEventStoreAdaptor &&other) = delete
MeanShiftEventStoreAdaptor &operator=(const MeanShiftEventStoreAdaptor &other) = delete
MeanShiftEventStoreAdaptor &operator=(MeanShiftEventStoreAdaptor &&other) = delete
~MeanShiftEventStoreAdaptor() = default
template<kernel::MeanShiftKernel kernel = kernel::Epanechnikov>
inline auto fit()

Executes the algorithm.

See also

MeanShiftKernel

Template Parameters:

kernel – the kernel to be used.

Returns:

The centres of each detected cluster

Returns:

The labels for each data point. The labels correspond to the index of the centre to which the sample is assigned.

Returns:

The number of samples in each cluster

Returns:

The in-cluster variance for each cluster
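
A minimal sketch (the namespace is assumed from the header path; fit() is assumed to return its four documented values as a tuple): detect event activity clusters.

   #include <dv-processing/cluster/mean_shift/event_store_adaptor.hpp>

   void clusterEvents(const dv::EventStore &events) {
       // Bandwidth of 10 px, 1 px convergence threshold, at most 100 iterations,
       // 20 starting points drawn randomly from the events.
       dv::cluster::mean_shift::MeanShiftEventStoreAdaptor meanShift(events, 10, 1.f, 100, 20u);

       // Epanechnikov kernel is the default; each centre is a dv::TimedKeyPoint.
       const auto [centres, labels, counts, variances] = meanShift.fit();
   }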

template<kernel::MeanShiftKernel kernel>
inline VectorOfVectors findClusterCentres()

Performs the search for the cluster centres for each given starting point. A detected centre is added to the set of centres if it isn’t closer than the bandwidth to any previously detected centre.

See also

MeanShiftKernel

Template Parameters:

kernel – the kernel to be used.

Returns:

The centres of each detected cluster

inline std::tuple<std::vector<uint32_t>, std::vector<uint32_t>, std::vector<float>> assignClusters(const VectorOfVectors &clusterCentres)

Assigns the data samples to a cluster by means of a nearest neighbour search, and computes the number of samples as well as the in-cluster variance in the process.

Parameters:

clusterCentres – The centres of each detected cluster

Returns:

The labels for each data point. The labels correspond to the index of the centre to which the sample is assigned.

Returns:

The number of samples in each cluster

Returns:

The in-cluster variance for each cluster

Public Static Functions

static inline VectorOfVectors generateStartingPointsFromData(const uint32_t numStartingPoints, const dv::EventStore &data)

Generates a vector of vectors containing the starting points by randomly selecting from provided data. Data cannot be empty and, for best results, should contain more elements than the desired number of starting points given as a parameter.

Parameters:
  • numStartingPoints – The number of points to be generated.

  • data – the event data to select the starting points from, cannot be empty.

Returns:

The vector of vectors containing the starting points.

static inline VectorOfVectors generateStartingPointsFromRange(const uint32_t numStartingPoints, const std::array<std::pair<int16_t, int16_t>, 2> &ranges)

Generates a vector of vectors containing the starting points by generating random points within a given range for each dimension

Parameters:
  • numStartingPoints – The number of points to be generated

  • ranges – a vector containing one range per dimension. Each dimension is represented by a pair containing the beginning and the end of the range

Returns:

The vector of vectors containing the starting points.

Private Types

using KDTree = dv::containers::kd_tree::KDTreeEventStoreAdaptor

Private Functions

template<kernel::MeanShiftKernel kernel>
inline std::optional<Vector> performShift(Vector currentMode)

Performs a search for a mode in the underlying density starting off with a provided initial point.

See also

MeanShiftKernel

Template Parameters:

kernel – the kernel to be used.

Parameters:

currentMode – The starting point that is to be shifted until convergence.

Returns:

An std::optional containing a vector if the search has converged, or std::nullopt otherwise

template<kernel::MeanShiftKernel kernel>
inline float applyKernel(const float squaredDistance) const

Applies the selected kernel to the squared distance

See also

MeanShiftKernel

Template Parameters:

kernel – the kernel to be used.

Parameters:

squaredDistance – the squared distance between the current mode estimate and a given sample point

Returns:

the kernel value

template<kernel::MeanShiftKernel kernel>
inline auto getNeighbours(const Vector &centre)

Returns the neighbours surrounding a centre

See also

MeanShiftKernel

Template Parameters:

kernel – the kernel to be used.

Parameters:

centre – the centre surrounding which the neighbours are to be found

Returns:

the neighbours, as a vector of pairs, one pair per neighbour containing a pointer to the event and a distance to the centre

inline float squaredDistance(const dv::TimedKeyPoint &k, const dv::Event &e) const
inline float squaredDistance(const dv::TimedKeyPoint &k1, const dv::TimedKeyPoint &k2) const
inline float squaredDistance(const dv::Event &e1, const dv::Event &e2) const
template<typename T>
inline T pow2(const T val) const

Private Members

const size_t mNumSamples
KDTree mData
const int16_t mBandwidth
const uint32_t mMaxIter
const float mConvergence
const VectorOfVectors mStartingPoints

Private Static Functions

static inline Vector getZeroVector()
class MeanShiftTracker : public dv::features::TrackerBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/mean_shift_tracker.hpp>

Track event blobs using mean shift algorithm on time surface event data.

Public Functions

inline MeanShiftTracker(const cv::Size &resolution, const int bandwidth, const dv::Duration timeWindow, RedetectionStrategy::UniquePtr redetectionStrategy = nullptr, std::unique_ptr<EventFeatureBlobDetector> detector = nullptr, const float stepSize = 0.5f, const float weightMultiplier = 1.f, float convergenceNorm = 0.01f, int maxIters = 2000)

Constructor for the mean shift tracker, which uses an Epanechnikov kernel as weights for the time surface of events used to update the track location. The kernel weights have their highest value at the previous track location. This is based on the assumption that the new track location is “close” to the last track location. Consecutive track updates are performed until the maximum number of iterations is reached or the shift between consecutive updates is below a threshold.

Parameters:
  • resolution – full image plane resolution

  • bandwidth – search window dimension size. The search area is a square with side 2 * bandwidth, centered at the current track location

  • timeWindow – look back time from latest event: used to generate normalized time surface. All events older than (latestEventTime-timeWindow) will be discarded

  • redetectionStrategy – strategy used to decide if and when to re-detect interesting points to track

  • detector – detector used to re-detect tracks if a redetection strategy is defined and redetection should happen

  • stepSize – weight applied to shift to compute new track location. This value is in range (0, 1). A value of 0 means that no shift is performed. A value of 1 means that the new candidate center is directly assigned as new center

  • weightMultiplier – scaling factor for Epanechnikov weights used in the computation of the mean shift cost update

  • convergenceNorm – shift value below which search will not continue (this value is named “mode” in the docs)

  • maxIters – maximum number of search iterations for one track update

inline void accept(const dv::EventStore &store)

Add events to time surface and update last batch of events fed to the tracker.

Parameters:

store – new incoming events for the tracker.

inline virtual Result::SharedPtr track() override

Compute new centers based on area with highest event density. The density is weighted by the event timestamp: newer timestamps have higher weight.

Returns:

structure containing new track locations as a vector of dv::TimedKeyPoint
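
A minimal sketch (the dv::features namespace is assumed; the resolution, bandwidth and time window values are illustrative): feed events and compute updated track locations.

   #include <dv-processing/features/mean_shift_tracker.hpp>

   void trackBlobs(const dv::EventStore &events) {
       // 20 px bandwidth, 10 ms time surface window; default redetection strategy and detector.
       dv::features::MeanShiftTracker tracker(cv::Size(640, 480), 20, dv::Duration(10'000));

       tracker.accept(events);

       // New track locations as dv::TimedKeyPoints (see the tracker Result structure).
       const auto result = tracker.track();
   }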

inline void setRedetectionStrategy(RedetectionStrategy::UniquePtr redetectionStrategy)

Define redetection strategy used to re-detect interesting points to track.

Parameters:

redetectionStrategy – type of redetection to use (check redetection_strategy.hpp for available types of re-detections)

inline void setDetector(std::unique_ptr<EventFeatureBlobDetector> detector)

Define detector used to detect interesting points to track (if redetection should happen)

Parameters:

detector – detector for new interesting points to track

inline int getBandwidth() const

Getter for bandwidth value that defines the search area for a new track. For detailed information on how the area is computed please check related parameter in constructor.

Returns:

search window dimension size.

inline void setBandwidth(const int bandwidth)

Setter for bandwidth value.

Parameters:

bandwidth – search window dimension size.

inline dv::Duration getTimeWindow() const

Get time window duration used to normalize time surface.

Returns:

value of time window use to generate normalized time surface

inline void setTimeWindow(const dv::Duration timeWindow)

Setter for time window duration for time surface normalization.

Parameters:

timeWindow – size of window

inline float getStepSize() const

Get multiplier value used for track location update. Given a computed shift to be applied to a track, the actual shift performed is given by mStepSize * shift.

Returns:

scaling value applied to the spatial interval computed between current and new track position at consecutive updates

inline void setStepSize(const float stepSize)

Setter for learning rate for motion towards new center during one mean shift iteration. Please check the same parameter in the constructor description for detailed information.

Parameters:

stepSize – weight applied to shift to compute new track location.

inline float getWeightMultiplier() const

Getter for the weight multiplier used to adjust the weight of each time surface value in the mean shift update. If the multiplier is smaller than 1, the cost values for each location are shrunk, whereas if the multiplier is larger than 1, the difference between time surface intensities is amplified.

Returns:

weight multiplier value

inline void setWeightMultiplier(const float multiplier)

Setter for scaling factor used in the computation of the mean shift cost update.

Parameters:

multiplier – scaling factor value

inline float getConvergenceNorm() const

Get norm of distance between consecutive tracks updates. If the distance is smaller than this norm, the track update is considered to be converged.

Returns:

value of distance norm between consecutive updates

inline void setConvergenceNorm(const float norm)

Setter for threshold norm (i.e. mode) between consecutive track updates below which iterations are stopped.

Parameters:

norm – threshold value

inline int getMaxIterations() const

Get maximum number of times track update can be run.

Returns:

value of maximum number of operations for track update

inline void setMaxIterations(const int maxIters)

Setter for maximum number of track updates.

Parameters:

maxIters – value of maximum number of operations for track update

Private Functions

inline Result::SharedPtr updateTracks(const cv::Mat &normalizedTimeSurface)

Compute new locations for all tracks. If a new position falls inside the area of a new position computed for a previous track, the track will not be updated; the previous track with its timestamp will be kept.

Parameters:

normalizedTimeSurface – image representation of event timestamps based on time surface

Returns:

updated track positions

inline std::optional<dv::Point2f> computeShift(const dv::Point2f &center, const cv::Mat &timeSurface, const float trackSize)

Compute new track location. Note: kernel weights are updated only if the search window changes size or if it intersects the boundaries of the image plane. This decision has been made for performance reasons and should not affect the final result as long as the new track position is “close enough” to the starting position.

Parameters:
  • center – previous track location

  • timeSurface – Matrix containing normalized time surface values

  • trackSize – dimension of track determining kernel size

Returns:

new final track location if the value is valid; std::nullopt if the search area has no event data inside it.

inline std::optional<dv::Point2f> updateCenterLocation(const cv::Mat &spatialWindow, const cv::Mat &kernelWeights) const

Compute mode (i.e. track location).

Parameters:
  • spatialWindow – image plane sub-matrix in which the center will be updated

  • kernelWeights – weights of Epanechnikov kernel applied to each time surface location inside the given spatial window

Returns:

new track location

inline cv::Mat kernelEpanechnikovWeights(const dv::Point2f &center, const cv::Rect &window, const float cutOffValue) const

Compute Epanechnikov kernel with highest peak at center location.

Parameters:
Returns:

matrix with weights of Epanechnikov kernel

inline std::pair<cv::Mat, cv::Rect> findSpatialWindow(const dv::Point2f &center, const cv::Mat &image) const

Compute the area in which the new track position will be searched. This area depends on the bandwidth value: the search area is defined as the square around the center with a side length of 2*bandwidth. The selected area is returned as the first element of the pair, together with the ROI in the full image plane as the second element, so that coordinates of the selected area can be mapped back to the original image space.

Parameters:
  • center – previous track center around which we define the search area

  • image – full image plane data

Returns:

pair containing as first output the matrix block containing the data inside the image defined by the rectangle returned as second output

inline void runRedetection(Result::SharedPtr &result)

Re-detect interesting points.

Parameters:

result – current set of tracks to which new detections will be added

Private Members

int mBandwidth

parameter defining search window size for each track update

dv::TimeSurface mSurface

event time surface

dv::Duration mTimeWindow

time window of events to generate the normalized time surface from

float mStepSize
cv::Size mResolution
dv::EventStore mEvents = dv::EventStore()

latest batch of events fed to the tracker

std::unique_ptr<EventFeatureBlobDetector> mDetector

detector used if no track has been detected or redetection is expected to happen

int32_t mLastFreeClassId = 0

value used to keep track of first free ID for a new track

RedetectionStrategy::UniquePtr mRedetectionStrategy = nullptr

type of redetection strategy used to detect new interesting points to track

float mWeightMultiplier

Weight multiplier used to adjust the weight of each point in the mean shift update. If the multiplier is smaller than 1, the cost values for each location are shrunk, whereas if the multiplier is larger than 1, the difference between points with lower intensity in the time surface and those with larger intensity values is increased.

float mConvergenceNorm

shift value below which search will not continue

int mMaxIters

maximum number of search iterations for one track update

struct Metadata

Public Functions

Metadata() = default
inline explicit Metadata(const cv::Size &patternShape_, const cv::Size &internalPatternShape_, const std::string_view patternType_, const float patternSize_, const float patternSpacing_, const std::optional<float> &calibrationError_, const std::string_view calibrationTime_, const std::string_view quality_, const std::string_view comment_, const std::optional<float> &pixelPitch_)
inline explicit Metadata(const boost::property_tree::ptree &tree)

Create an instance of metadata from a property tree structure.

Parameters:

tree – Property tree to be parsed.

Returns:

Constructed Metadata instance.

inline boost::property_tree::ptree toPropertyTree() const

Serialize the metadata structure into a property tree.

Returns:

Serialized property tree.

inline bool operator==(const Metadata &rhs) const

Equality operator.

Parameters:

rhs – Metadata instance to compare against.

Returns:

True if both metadata instances are equal, false otherwise.

Public Members

cv::Size patternShape

Shape of the calibration pattern.

cv::Size internalPatternShape

Shape of the calibration pattern in terms of internal intersections.

std::string patternType

Type of the calibration pattern used (e.g. apriltag)

float patternSize = -1.f

Size of the calibration pattern in [m].

float patternSpacing = -1.f

Ratio between tags to patternSize (apriltag only)

std::optional<float> calibrationError

Calibration reprojection error.

std::string calibrationTime

Timestamp when the calibration was conducted.

std::string quality

Description of the calibration quality (excellent/good/bad etc)

std::string comment

Any additional information.

std::optional<float> pixelPitch

Pixel pitch in meters.

struct Metadata

Public Functions

Metadata() = default
inline explicit Metadata(const std::string_view calibrationTime_, const std::string_view comment_)
inline explicit Metadata(const boost::property_tree::ptree &tree)
inline boost::property_tree::ptree toPropertyTree() const
inline bool operator==(const Metadata &rhs) const

Public Members

std::string calibrationTime

Timestamp when the calibration was conducted.

std::string comment

Any additional information.

struct Metadata
#include </builds/inivation/dv/dv-processing/include/dv-processing/camera/calibrations/stereo_calibration.hpp>

Metadata for the stereo calibration.

Public Functions

Metadata() = default
inline explicit Metadata(const std::optional<float> &epipolarError_, const std::string_view comment_)
inline explicit Metadata(const boost::property_tree::ptree &tree)
inline boost::property_tree::ptree toPropertyTree() const

Serialize into a property tree.

Returns:

Serialized property tree.

inline bool operator==(const Metadata &rhs) const

Public Members

std::optional<float> epipolarError

Average epipolar error.

std::string comment

Any additional information.

class MonoCameraRecording : public dv::io::InputBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/mono_camera_recording.hpp>

A convenience class for reading recordings containing data captured from a single camera. Looks for event, frame, imu, and trigger streams within the supplied aedat4 file.

Public Functions

inline explicit MonoCameraRecording(const std::shared_ptr<ReadOnlyFile> &fileReader, const std::string &cameraName = "")

Create a reader that reads single camera data recording from a pre-constructed file reader.

Parameters:
  • fileReader – A pointer for pre-constructed file reader.

  • cameraName – Name of the camera in the recording. If an empty string is passed (the default value), the reader will try to detect the name of the camera. In case the recording contains more than one camera, it will choose the first encountered name and ignore streams that were recorded by a different camera.

inline explicit MonoCameraRecording(const std::filesystem::path &aedat4Path, const std::string &cameraName = "")

Create a reader that reads single camera data recording from an aedat4 file.

Parameters:
  • aedat4Path – Path to the aedat4 file.

  • cameraName – Name of the camera in the recording. If an empty string is passed (the default value), the reader will try to detect the name of the camera. In case the recording contains more than one camera, it will choose the first encountered name and ignore streams that were recorded by a different camera.

inline virtual std::optional<dv::Frame> getNextFrame() override

Sequential read of a frame, tries reading from stream named “frames”. This function increments an internal seek counter which will return the next frame at each call.

Returns:

A dv::Frame or std::nullopt if the frame stream is not available or the end-of-stream was reached.
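
A usage sketch (the file path is a placeholder): sequential frame reading can be driven by the returned optional.

dv::io::MonoCameraRecording recording("/path/to/recording.aedat4");

// Read frames until std::nullopt signals end-of-stream.
while (const auto frame = recording.getNextFrame()) {
        // frame->image holds the image data
}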

inline std::optional<dv::Frame> getNextFrame(const std::string &streamName)

Sequential read of a frame. This function increments an internal seek counter which will return the next frame at each call.

Parameters:

streamName – Name of the stream, if an empty name is passed, it will select any one stream with frame data type.

Returns:

A dv::Frame, std::nullopt if the frame stream is not available or the end-of-stream was reached.

inline virtual bool isStreamAvailable(const std::string_view streamName) const override

Check whether a given stream name is available.

Parameters:

streamName – Name of the stream.

Returns:

True if this stream is available, false otherwise.

inline std::vector<std::string> getStreamNames() const

Return a vector containing all available stream names.

Returns:

A list of custom data type stream names.

template<class DataType>
inline std::optional<DataType> getNextStreamPacket(const std::string &streamName)

Read a custom data type packet sequentially.

Custom data types are any flatbuffer generated types that are not the following: dv::EventPacket, dv::TriggerPacket, dv::IMUPacket, dv::Frame.

Template Parameters:

DataType – Custom data packet class.

Parameters:

streamName – Name of the stream.

Throws:
  • InvalidArgument – An exception is thrown if a stream with given name is not found in the file.

  • InvalidArgument – An exception is thrown if given type does not match the type identifier of the given stream.

Returns:

Next packet within given stream or std::nullopt in case of end-of-stream.

inline virtual std::optional<dv::EventStore> getNextEventBatch() override

Sequential read of events, tries reading from stream named “events”. This function increments an internal seek counter which will return the next event batch at each call.

Returns:

A dv::EventStore or std::nullopt if the event stream is not available or the end-of-stream was reached.

inline std::optional<dv::EventStore> getNextEventBatch(const std::string &streamName)

Sequentially read a batch of recorded events. This function increments an internal seek counter which will return the next batch at each call.

Parameters:

streamName – Name of the stream, if an empty name is passed, it will select any one stream with event data type.

Returns:

A vector containing events, std::nullopt if the event stream is not available or the end-of-stream was reached.
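
A sketch of sequential event reading, combining this call with isRunning() documented below; `recording` is a MonoCameraRecording instance.

while (recording.isRunning()) {
        if (const auto events = recording.getNextEventBatch(); events.has_value()) {
                // process the batch, e.g. inspect events->size()
        }
}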

inline virtual std::optional<std::vector<dv::IMU>> getNextImuBatch() override

Sequential read of imu data, tries reading from stream named “imu”. This function increments an internal seek counter which will return the next imu data batch at each call.

Returns:

A vector of IMU measurements or std::nullopt if the imu data stream is not available or the end-of-stream was reached.

inline std::optional<std::vector<dv::IMU>> getNextImuBatch(const std::string &streamName)

Sequentially read a batch of recorded imu data. This function increments an internal seek counter which will return the next batch at each call.

Parameters:

streamName – Name of the stream, if an empty name is passed, it will select any one stream with imu data type.

Returns:

A vector containing imu data, std::nullopt if the imu data stream is not available or the end-of-stream was reached.

inline virtual std::optional<std::vector<dv::Trigger>> getNextTriggerBatch() override

Sequential read of trigger data, tries reading from stream named “triggers”. This function increments an internal seek counter which will return the next trigger data batch at each call.

Returns:

A vector of trigger data or std::nullopt if the trigger stream is not available or the end-of-stream was reached.

inline std::optional<std::vector<dv::Trigger>> getNextTriggerBatch(const std::string &streamName)

Sequentially read a batch of recorded triggers. This function increments an internal seek counter which will return the next batch at each call.

Parameters:

streamName – Name of the stream, if an empty name is passed, it will select any one stream with trigger data type.

Returns:

A vector containing triggers, std::nullopt if the trigger stream is not available or the end-of-stream was reached.

inline void resetSequentialRead()

Reset the sequential read function to start from the beginning of the file.

inline virtual bool isRunning() const override

Check whether any input data streams have terminated. For a live camera this should check if the device is still connected and functioning, while for a recording file this should check if any of the data streams have reached end-of-file (EOF). For a network input, this indicates the network stream is still connected.

Returns:

True if data read on all streams is still possible, false otherwise.

inline virtual bool isRunning(const std::string_view streamName) const override

Check whether the input data stream with the specified name is still active.

Returns:

True if data read on this stream is possible, false otherwise.

inline virtual bool isRunningAny() const override

Check whether any input data streams are still available. For a live camera this should check if the device is still connected and functioning and at least one data stream is active (different than isRunning()), while for a recording file this should check if any of the data streams have not yet reached end-of-file (EOF) and are still readable. For a network input, this indicates the network stream is still connected.

Returns:

True if data read on at least one stream is still possible, false otherwise.

inline std::optional<dv::EventStore> getEventsTimeRange(const int64_t startTime, const int64_t endTime, const std::string &streamName = "events")

Get events within given time range [startTime; endTime).

Parameters:
  • startTime – Start timestamp of the time range.

  • endTime – End timestamp of the time range.

  • streamName – Name of the stream, if an empty name is passed, it will select any one stream with event data type.

Returns:

dv::EventStore with events in the time range if the event stream is available, std::nullopt otherwise.

inline std::optional<std::vector<dv::Frame>> getFramesTimeRange(const int64_t startTime, const int64_t endTime, const std::string &streamName = "frames")

Get frames within given time range [startTime; endTime).

Parameters:
  • startTime – Start timestamp of the time range.

  • endTime – End timestamp of the time range.

  • streamName – Name of the stream, if an empty name is passed, it will select any one stream with frame data type.

Throws:

InvalidArgument – If the frame stream doesn’t exist or a stream with the given name doesn’t exist.

Returns:

Vector containing frames and timestamps.

template<class DataType>
inline std::optional<std::vector<DataType>> getStreamTimeRange(const int64_t startTime, const int64_t endTime, const std::string &streamName)

Get packets from a stream within a given period of time. Returns a vector of packets. If a packet contains elements that are outside of the given time range, the internal elements will be cut to match exactly the [startTime; endTime) range. If the stream does not contain any packets within the requested time range, the function returns an empty vector.

Template Parameters:

DataType – Packet type

Parameters:
  • startTime – Period start timestamp.

  • endTime – Period end timestamp.

  • streamName – Name of the stream, empty string will pick a first stream with matching type.

Throws:
  • InvalidArgument – An exception is thrown if a stream with given name is not found in the file.

  • InvalidArgument – An exception is thrown if given type does not match the type identifier of the given stream.

Returns:

A vector of packets containing the data only within [startTime; endTime) period.

inline std::optional<std::vector<dv::IMU>> getImuTimeRange(const int64_t startTime, const int64_t endTime, const std::string &streamName = "imu")

Get IMU data within given time range [startTime; endTime).

Parameters:
  • startTime – Start timestamp of the time range.

  • endTime – End timestamp of the time range.

  • streamName – Name of the stream, if an empty name is passed, it will select any one stream with imu data type.

Returns:

Vector containing IMU data if the IMU stream is available, std::nullopt otherwise.

inline std::optional<std::vector<dv::Trigger>> getTriggersTimeRange(const int64_t startTime, const int64_t endTime, const std::string &streamName = "triggers")

Get trigger data within given time range [startTime; endTime).

Parameters:
  • startTime – Start timestamp of the time range.

  • endTime – End timestamp of the time range.

  • streamName – Name of the stream, if an empty name is passed, it will select any one stream with trigger data type.

Returns:

Vector containing triggers if the trigger stream is available, std::nullopt otherwise.

inline virtual bool isFrameStreamAvailable() const override

Check whether frame stream is available. Specifically checks whether a stream named “frames” is available since it’s the default stream name for frames.

Returns:

True if the frame stream is available.

inline bool isFrameStreamAvailable(const std::string &streamName) const

Checks whether a frame data stream is present in the file.

Parameters:

streamName – Name of the stream, if an empty name is passed, it will select any one stream with frame data type.

Returns:

True if the frames are available, false otherwise.

inline virtual bool isEventStreamAvailable() const override

Check whether event stream is available. Specifically checks whether a stream named “events” is available since it’s the default stream name for events.

Returns:

True if the event stream is available, false otherwise.

inline bool isEventStreamAvailable(const std::string &streamName) const

Checks whether an event data stream is present in the file.

Parameters:

streamName – Name of the stream, if an empty name is passed, it will select any one stream with event data type.

Returns:

True if the events are available, false otherwise.

inline virtual bool isImuStreamAvailable() const override

Check whether imu data stream is available. Specifically checks whether a stream named “imu” is available since it’s the default stream name for imu data.

Returns:

True if the imu stream is available, false otherwise.

inline bool isImuStreamAvailable(const std::string &streamName) const

Checks whether an imu data stream is present in the file.

Parameters:

streamName – Name of the stream, if an empty name is passed, it will select any one stream with IMU data type.

Returns:

True if the imu data is available, false otherwise.

inline virtual bool isTriggerStreamAvailable() const override

Check whether trigger stream is available. Specifically checks whether a stream named “triggers” is available since it’s the default stream name for trigger data.

Returns:

True if the trigger stream is available, false otherwise.

inline bool isTriggerStreamAvailable(const std::string &streamName) const

Checks whether a trigger data stream is present in the file.

Parameters:

streamName – Name of the stream, if an empty name is passed, it will select any one stream with trigger data type.

Returns:

True if the triggers are available, false otherwise.

inline std::pair<int64_t, int64_t> getTimeRange() const

Return a pair containing start (first) and end (second) time of the recording file.

Returns:

A pair containing start and end timestamps for the recording.
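
A sketch combining getTimeRange with the time-range getters above to read the first 100 ms of events from the recording, assuming the default "events" stream is present.

const auto [startTime, endTime] = recording.getTimeRange();

// Events within [startTime; startTime + 100 ms); timestamps are in microseconds.
if (const auto events = recording.getEventsTimeRange(startTime, startTime + 100'000)) {
        // `events` holds a dv::EventStore restricted to the requested range
}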

inline dv::Duration getDuration() const

Return the duration of the recording.

Returns:

Duration value holding the total playback time of the recording.

inline virtual std::string getCameraName() const override

Return the camera name that is detected in the recording.

Returns:

String containing camera name.

inline DataReadVariant readNext()

Read the next packet in the recorded stream. The function returns a std::variant containing one of the following types:

  • dv::EventStore

  • dv::Frame

  • std::vector<dv::IMU>

  • std::vector<dv::Trigger>

  • dv::io::MonoCameraRecording::OutputFlag

The OutputFlag is used to determine when the end of file is reached. If the reader encounters an unsupported type, the data will be skipped and the reader will seek until a packet containing a supported type is reached.

Returns:

std::variant containing a packet with data of one of the supported types.
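
A sketch of dispatching on the returned variant with std::get_if; only two of the documented alternatives are matched here, the others follow the same pattern. In a loop, the OutputFlag alternative (or isRunning()) can be used to detect end-of-file.

const auto packet = recording.readNext();

if (const auto *events = std::get_if<dv::EventStore>(&packet)) {
        // handle an event batch
}
else if (const auto *frame = std::get_if<dv::Frame>(&packet)) {
        // handle a frame
}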

inline bool handleNext(DataReadHandler &handler)

Read next packet from the recording and use a handler object to handle all types of packets. The function returns true if end-of-file was not reached, so this function call can be used in a while loop like so:

while (recording.handleNext(handler)) {
        // While-loop executes after each packet
}

Parameters:

handler – Handler class containing lambda functions for each supported packet type.

Returns:

True if end-of-file was not reached, false otherwise.

inline void run(DataReadHandler &handler)

Sequentially read all packets from the recording and apply handler to each packet. This is a blocking call.

Parameters:

handler – Handler class containing lambda functions for each supported packet type.

inline virtual std::optional<cv::Size> getEventResolution() const override

Get event stream resolution for the “events” stream.

Returns:

Resolution of the “events” stream.

inline std::optional<cv::Size> getEventResolution(const std::string &streamName) const

Get the resolution of the event data stream if it is available.

Parameters:

streamName – Name of the stream, if an empty name is passed, it will select any one stream with event data type.

Returns:

Returns the resolution of the event data if available, std::nullopt otherwise.

inline virtual std::optional<cv::Size> getFrameResolution() const override

Get frame stream resolution for the “frames” stream.

Returns:

Resolution of the “frames” stream.

inline std::optional<cv::Size> getFrameResolution(const std::string &streamName) const

Get the resolution of the frame data stream if it is available.

Parameters:

streamName – Name of the stream, if an empty name is passed, it will select any one stream with frame data type.

Returns:

Returns the resolution of the frames if available, std::nullopt otherwise.

inline const std::map<std::string, std::string> &getStreamMetadata(const std::string &streamName)

Get all metadata of a stream.

Parameters:

streamName – Name of the stream.

Throws:

out_of_range – Out of range exception is thrown if a stream with given name is not available.

Returns:

A map containing key-value strings of each available metadata of a requested stream.

inline std::optional<std::string> getStreamMetadataValue(const std::string &streamName, const std::string &key)

Get a value of a given metadata key. Throws an exception if given stream doesn’t exist and returns std::nullopt if a metadata entry with given key is not found for the stream.

Parameters:
  • streamName – Name of the stream.

  • key – Key string of the metadata.

Throws:

out_of_range – Out of range exception is thrown if a stream with given name is not available.

Returns:

Value of the metadata entry if the given key is found for the stream, std::nullopt otherwise.
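
A sketch of querying a metadata entry of the default event stream (the key name is a placeholder).

if (const auto value = recording.getStreamMetadataValue("events", "someKey")) {
        // *value is the metadata string stored under "someKey"
}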

template<class DataType>
inline bool isStreamOfDataType(const std::string &streamName) const

Check whether a stream is of a given data type.

Template Parameters:

DataType – Data type to be checked.

Parameters:

streamName – Name of the stream.

Throws:

out_of_range – Out of range exception is thrown if a stream of a given name is not found.

Returns:

True if the given stream contains DataType data.

Private Types

using StreamInfoMap = std::map<std::string, StreamDescriptor, std::less<>>

Private Functions

inline const dv::io::Stream *getStream(const int streamId) const
inline void parseStreamIds()
template<class DataType>
inline StreamInfoMap::iterator getStreamInfo(const std::string &streamName)
template<class DataType>
inline StreamInfoMap::const_iterator getStreamInfo(const std::string &streamName) const
template<class DataType>
inline std::shared_ptr<DataType> getNextPacket(StreamDescriptor &streamInfo)

Private Members

std::shared_ptr<ReadOnlyFile> mReader = nullptr
FileInfo mInfo
std::string mCameraName
std::vector<FileDataDefinition>::const_iterator mPacketIter
bool eofReached = false
StreamInfoMap mStreamInfo

Private Static Functions

template<class VectorClass>
static inline void trimVector(VectorClass &vector, int64_t start, int64_t end)

Trim a vector containing elements with a timestamp. Retains only the data within [start; end).

Template Parameters:

VectorClass – The class of the vector

Parameters:
  • vector – The vector of data

  • start – Start timestamp (inclusive start of range)

  • end – End timestamp (exclusive end of range)

class MonoCameraWriter : public dv::io::OutputBase

Public Functions

inline MonoCameraWriter(const std::filesystem::path &aedat4Path, const MonoCameraWriter::Config &config, const dv::io::support::TypeResolver &resolver = dv::io::support::defaultTypeResolver)

Create an aedat4 file writer with simplified API.

Parameters:
  • aedat4Path – Path to the output file. The file is going to be overwritten.

  • config – Writer config. Defines expected output streams and recording metadata.

  • resolver – Type resolver for the output file.

inline MonoCameraWriter(const std::filesystem::path &aedat4Path, const dv::io::camera::CameraInputBase &capture, const CompressionType compression = CompressionType::LZ4, const dv::io::support::TypeResolver &resolver = dv::io::support::defaultTypeResolver)

Create an aedat4 file writer that inspects the capabilities and configuration from a dv::io::CameraCapture class. This will enable all available data streams present from the camera capture.

Parameters:
  • aedat4Path – Path to the output file. The file is going to be overwritten.

  • capture – Direct camera capture instance. This is used to inspect the available data streams and metadata of the camera.

  • compression – Compression to be used for the output file.

  • resolver – Type resolver for the output file.

inline void writeEventPacket(const dv::EventPacket &events, const std::string &streamName = "events")

Write an event packet into the output file.

The data is passed directly into the serialization procedure without performing copies. Data is serialized and the actual file IO is performed on a separate thread.

Parameters:
  • events – Packet of events.

  • streamName – Name of the stream, an empty string will match first stream with compatible data type.

Throws:

invalid_argument – Invalid argument exception is thrown if function is called and compatible output stream was not added during construction.

inline void writeEvents(const dv::EventStore &events, const std::string &streamName)

Write an event store into the output file. The store is written by maintaining internal data partial ordering and fragmentation.

The data is passed directly into the serialization procedure without performing copies. Data is serialized and the actual file IO is performed on a separate thread.

Parameters:
  • events – Store of events.

  • streamName – Name of the stream, an empty string will match first stream with compatible data type.

Throws:

invalid_argument – Invalid argument exception is thrown if function is called and compatible output stream was not added during construction.

inline void writeFrame(const dv::Frame &frame, const std::string &streamName)

Write a frame image into the file.

The data is passed directly into the serialization procedure without performing copies. Data is serialized and the actual file IO is performed on a separate thread.

NOTE: if the frame contains an empty image, it will be ignored and not recorded.

Parameters:
  • frame – A frame to be written.

  • streamName – Name of the stream, an empty string will match first stream with compatible data type.

Throws:

invalid_argument – Invalid argument exception is thrown if function is called and compatible output stream was not added during construction.
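
A sketch of writing a frame; `writer` is a MonoCameraWriter with a frame stream configured (e.g. via FrameOnlyConfig below), and the dv::Frame is assumed to be constructible from a timestamp and a cv::Mat.

cv::Mat image = cv::Mat::zeros(480, 640, CV_8UC1);

// Frames with an empty image are ignored, so make sure the cv::Mat holds data.
writer.writeFrame(dv::Frame(dv::now(), image));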

inline void writeImuPacket(const dv::IMUPacket &packet, const std::string &streamName = "imu")

Write a packet of imu data into the file.

The data is passed directly into the serialization procedure without performing copies. Data is serialized and the actual file IO is performed on a separate thread.

Parameters:
  • packet – IMU measurement packet.

  • streamName – Name of the stream, an empty string will match first stream with compatible data type.

Throws:

invalid_argument – Invalid argument exception is thrown if function is called and compatible output stream was not added during construction.

inline void writeImu(const dv::IMU &imu, const std::string &streamName = "imu")

Write an IMU measurement.

This function is not immediate; it batches the measurements until a configured amount is reached, and only then is the data passed to the serialization step and the file write IO thread. If the file is closed (the object gets destroyed), the destructor will flush the remaining buffered measurements to the serialization step.

Parameters:
  • imu – A single IMU measurement.

  • streamName – Name of the stream, an empty string will match first stream with compatible data type.

Throws:

invalid_argument – Invalid argument exception is thrown if function is called and a compatible output stream was not added during construction.

inline void writeTriggerPacket(const dv::TriggerPacket &packet, const std::string &streamName = "triggers")

Write a packet of trigger data into the file.

The data is passed directly into the serialization procedure without performing copies. Data is serialized and the actual file IO is performed on a separate thread.

Parameters:
  • packet – Trigger data packet.

  • streamName – Name of the stream, an empty string will match first stream with compatible data type.

Throws:

invalid_argument – Invalid argument exception is thrown if function is called and a compatible output stream was not added during construction.

inline void writeTrigger(const dv::Trigger &trigger, const std::string &streamName = "triggers")

Write a Trigger measurement.

This function is not immediate; it batches the measurements until a configured amount is reached, and only then is the data passed to the serialization step and the file write IO thread. If the file is closed (the object gets destroyed), the destructor will flush the remaining buffered measurements to the serialization step.

Parameters:
  • trigger – A single Trigger measurement.

  • streamName – Name of the stream, an empty string will match first stream with compatible data type.

Throws:

invalid_argument – Invalid argument exception is thrown if function is called and compatible output stream was not added during construction.

template<class PacketType>
inline void writePacket(const PacketType &packet, const std::string &stream)

Write a packet into a named stream.

Template Parameters:

PacketType – Type of data packet.

Parameters:
  • stream – Name of the stream, an empty string will match first stream with compatible data type.

  • packet – Data packet

Throws:
  • InvalidArgument – If a stream with given name is not configured.

  • InvalidArgument – If a stream with given name is configured for a different type of data packet.

  • invalid_argument – Invalid argument exception is thrown if function is called and a compatible output stream was not added during construction.

template<class PacketType, class ElementType>
inline void writePacketElement(const ElementType &element, const std::string &streamName)

Write a single element into a packet. A packet will be created per stream and elements will be added until the packaging count is reached, at which point the packet will be written to disk.

Template Parameters:
  • PacketType – Type of the packet to hold the elements.

  • ElementType – Type of an element.

Parameters:
  • element – Element to be saved.

  • streamName – Name of the stream, an empty string will match first stream with compatible data type.

inline void setPackagingCount(const size_t packagingCount)

Set the batch size for trigger and imu buffering. Single measurements passed into the writeTrigger and writeImu functions will be packed into batches of the given size before being written to the file.

A packaging value of 0 or 1 will cause each measurement to be serialized immediately.

See also

writeTrigger

See also

writeImu

Parameters:

packagingCount – Trigger and IMU measurement packet size that is batched up using the writeImu and writeTrigger functions.
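
For instance, the sketch below buffers single IMU measurements into packets of 100 before they are serialized; `writer` is a MonoCameraWriter with an imu stream configured and `imuMeasurement` is a dv::IMU sample.

writer.setPackagingCount(100);

// Each call is buffered; a packet is serialized once 100 measurements accumulate.
writer.writeImu(imuMeasurement);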

inline bool isEventStreamConfigured(const std::string &streamName = "events") const

Check if the event stream is configured for this writer.

Parameters:

streamName – Name of the stream, an empty string will match first stream with compatible data type.

Returns:

True if event stream is configured, false otherwise.

inline bool isFrameStreamConfigured(const std::string &streamName = "frames") const

Check if the frame stream is configured for this writer.

Parameters:

streamName – Name of the stream, an empty string will match first stream with compatible data type.

Returns:

True if frame stream is configured, false otherwise.

inline bool isImuStreamConfigured(const std::string &streamName = "imu") const

Check if the IMU stream is configured for this writer.

Parameters:

streamName – Name of the stream, an empty string will match first stream with compatible data type.

Returns:

True if IMU stream is configured, false otherwise.

inline bool isTriggerStreamConfigured(const std::string &streamName = "triggers") const

Check if the trigger stream is configured for this writer.

Parameters:

streamName – Name of the stream, an empty string will match first stream with compatible data type.

Returns:

True if trigger stream is configured, false otherwise.

template<class PacketType>
inline bool isStreamConfigured(const std::string &streamName) const

Check whether a stream with given name and compatible data type is configured.

Template Parameters:

PacketType – Type of the packet to hold the elements.

Parameters:

streamName – Name of the stream, an empty string will match first stream with compatible data type.

Returns:

True if a stream with the given name and a compatible data type is configured, false otherwise.

inline ~MonoCameraWriter()
inline virtual void writeEvents(const dv::EventStore &events) override

Write event data into the output.

Parameters:

events – Write events into the output.

inline virtual void writeFrame(const dv::Frame &frame) override

Write a frame into the output.

Parameters:

frame – Write a frame into the output.

inline virtual void writeImu(const std::vector<dv::IMU> &imu) override

Write imu data into the output.

Parameters:

imu – Write imu into the output.

inline virtual void writeTriggers(const std::vector<dv::Trigger> &triggers) override

Write trigger data into the output.

Parameters:

triggers – Write trigger into the output.

inline virtual std::string getCameraName() const override

Retrieve camera name of this writer output instance.

Returns:

Configured camera name.

Public Static Functions

static inline Config EventOnlyConfig(const std::string &cameraName, const cv::Size &resolution, dv::CompressionType compression = dv::CompressionType::LZ4)

Generate a config for a writer that will expect a stream of events only.

Parameters:
  • cameraName – Name of the camera.

  • resolution – Camera sensor resolution.

  • compression – Compression type.

Returns:

A config template for MonoCameraWriter.
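
A construction sketch using this factory; camera name, resolution, and output path are placeholders.

const auto config = dv::io::MonoCameraWriter::EventOnlyConfig("DVXplorer_sample", cv::Size(640, 480));

dv::io::MonoCameraWriter writer("/path/to/output.aedat4", config);

// `events` is a dv::EventStore obtained elsewhere, e.g. from a camera capture or a recording reader.
writer.writeEvents(events);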

static inline Config FrameOnlyConfig(const std::string &cameraName, const cv::Size &resolution, dv::CompressionType compression = dv::CompressionType::LZ4)

Generate a config for a writer that will expect a stream of frames only.

Parameters:
  • cameraName – Name of the camera.

  • resolution – Camera sensor resolution.

  • compression – Compression type.

Returns:

A config template for MonoCameraWriter.

static inline Config DVSConfig(const std::string &cameraName, const cv::Size &resolution, dv::CompressionType compression = dv::CompressionType::LZ4)

Generate a config for a writer that will expect data from a DVS camera - events, IMU, triggers.

Parameters:
  • cameraName – Name of the camera.

  • resolution – Camera sensor resolution.

  • compression – Compression type.

Returns:

A config template for MonoCameraWriter.

static inline Config DAVISConfig(const std::string &cameraName, const cv::Size &resolution, dv::CompressionType compression = dv::CompressionType::LZ4)

Generate a config for a writer that will expect data from a DAVIS camera - frames, events, IMU, triggers.

Parameters:
  • cameraName – Name of the camera.

  • resolution – Camera sensor resolution.

  • compression – Compression type.

Returns:

A config template for MonoCameraWriter.

static inline Config CaptureConfig(const dv::io::camera::CameraInputBase &capture, dv::CompressionType compression = dv::CompressionType::LZ4)

Generate a config from a camera capture instance. This only checks whether the camera provides a frame data stream and enables all available streams to be recorded.

Parameters:
  • capture – Camera capture class instance.

  • compression – Compression type.

Returns:

A config template for MonoCameraWriter.

Private Types

typedef std::map<std::string, StreamDescriptor> StreamDescriptorMap

Private Functions

inline std::string createHeader(const MonoCameraWriter::Config &config, const dv::io::support::TypeResolver &resolver)
template<class PacketType>
inline StreamDescriptorMap::iterator findStreamDescriptor(const std::string &streamName)
template<class PacketType>
inline StreamDescriptorMap::const_iterator findStreamDescriptor(const std::string &streamName) const
inline explicit MonoCameraWriter(const std::shared_ptr<dv::io::WriteOnlyFile> &outputFile, const dv::io::MonoCameraWriter::Config &config, const dv::io::support::TypeResolver &resolver = dv::io::support::defaultTypeResolver)

Preconfigured output file constructor. Internal use only, used for multi-camera recording.

Parameters:
  • outputFile – WriteOnlyFile instance to write data.

  • config – Output stream configuration.

  • resolver – Type resolver for the output file.

Private Members

size_t mPackagingCount = 20
MonoCameraWriter::Config inputConfig
StreamDescriptorMap mOutputStreamDescriptors
dv::io::support::XMLTreeNode mRoot
std::shared_ptr<dv::io::WriteOnlyFile> mOutput

Private Static Functions

static inline void validateConfig(const MonoCameraWriter::Config &config)

Friends

friend class StereoCameraWriter
template<class Accumulator = dv::EdgeMapAccumulator, class PixelPredictor = kinematics::PixelMotionPredictor>
class MotionCompensator

Public Functions

inline const Info &getInfo() const

Return an info class instance containing motion compensator state for the algorithm iteration. The info object contains debug information about the execution of the motion compensator.

Returns:

Info object containing debug information about the motion compensator execution.

inline void accept(const Transformationf &transform)

Push camera pose measurement.

Parameters:

transform – Transform representing camera pose in some fixed reference frame (e.g. World coordinates).

inline void accept(const dv::measurements::Depth &timeDepth)

Scene depth measurement in meters.

Parameters:

timeDepth – A pair containing measured depth into the scene and a timestamp at which the measurement was performed.

inline void accept(const dv::EventStore &events)

Push event camera input.

Parameters:

events – Pixel brightness changes from an event camera.

inline void accept(const dv::Event &event)

Push event camera input.

Parameters:

event – Pixel brightness change from an event camera.

inline dv::EventStore generateEvents(const int64_t generationTime = -1)

Generate the motion compensated events contained in the buffer.

Parameters:

generationTime – The point in time the motion compensator compensates into; negative values will cause the function to use the highest timestamp value in the event buffer.

Returns:

Motion compensated events.

inline dv::Frame generateFrame(const int64_t generationTime = -1)

Generate the motion compensated frame output and reset the events contained in the buffer.

Parameters:

generationTime – The point in time the motion compensator compensates into; negative values will cause the function to use the highest timestamp value in the event buffer.

Returns:

Motion compensated frame.
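
A usage sketch, assuming the class lives in the dv::kinematics namespace and is instantiated with its default template arguments; `cameraPose` is a dv::kinematics::Transformationf from an external pose source and `events` is a dv::EventStore.

dv::kinematics::MotionCompensator<> compensator(cv::Size(640, 480));

compensator.setConstantDepth(5.f); // optional, overrides the default 3.0 m scene depth

compensator.accept(cameraPose); // camera pose measurement
compensator.accept(events);     // event data to be compensated

// Compensate to the highest timestamp in the buffer and generate a frame.
const dv::Frame frame = compensator.generateFrame();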

inline void reset()

Clear the event buffer.

inline MotionCompensator &operator<<(const dv::EventStore &store)

Accept the event data using the stream operator.

Parameters:

store – Input event store.

Returns:

Reference to current object instance.

inline MotionCompensator &operator<<(const dv::Event &event)

Accept the event data using the stream operator.

Parameters:

event – Input event.

Returns:

Reference to current object instance.

inline dv::Frame &operator>>(dv::Frame &image)

Output stream operator which generates a frame.

Parameters:

image – Motion compensated frame.

Returns:

Motion compensated frame.

inline MotionCompensator(const camera::CameraGeometry::SharedPtr &cameraGeometry, std::unique_ptr<Accumulator> accumulator_ = nullptr)

Construct a motion compensator instance with custom accumulator.

Parameters:
  • cameraGeometry – Camera geometry class instance containing intrinsic calibration of the camera sensor.

  • accumulator_ – Accumulator instance to be used to accumulate events.

inline explicit MotionCompensator(const cv::Size &sensorDimensions)

Construct a motion compensator with no known calibration. This assumes that the camera is an ideal pinhole camera sensor (no distortion) with focal length equal to camera sensor width in pixels and central point is the exact geometrical center of the pixel array.

Parameters:

sensorDimensions – Camera sensor resolution.

inline float getConstantDepth() const

Get currently assumed constant depth value. It is used if no depth measurements are provided.

See also

setConstantDepth

Returns:

Currently used distance to the scene (depth).

inline void setConstantDepth(const float depth)

Set constant depth value that is assumed if no depth measurement is passed using accept(dv::measurements::Depth). By default the constant depth is assumed to be 3.0 meters, which is just a reasonable guess.

Parameters:

depth – Distance to the scene (depth).

Throws:

InvalidArgument – Exception is thrown if a negative depth value is passed.

inline dv::EventStore &operator>>(dv::EventStore &out)

Private Functions

inline dv::kinematics::LinearTransformerf generateTransforms(const int64_t from, const int64_t to)

Generate a sequence of transformations at a fixed period (samplingPeriod) with an additional overhead transform before and after the given interval.

Parameters:
  • from – Start of the interest interval.

  • to – End of the interest interval.

Returns:

Transformer with resampled transformations.

inline dv::EventStore compensateEvents(const dv::EventStore &events, const dv::kinematics::LinearTransformerf &transforms, const dv::kinematics::Transformationf &target, const float depth)

Apply motion compensation to event store and project all event into the target transformation.

Parameters:
  • events – Input events.

  • transforms – Transformer containing the fine grained trajectory of the camera motion.

  • target – Target position of the camera to be projected into.

  • depth – Scene depth to be assumed for the calculations.

Returns:

Motion compensated events at the target camera pose.

inline dv::EventStore generateEventsAt(const int64_t timestamp)

Generate compensated events at a given timestamp.

Parameters:

timestamp – time to compensate events at.

Returns:

Motion compensated events at the given time point.

inline dv::Frame generateFrameAt(const int64_t timestamp)

Generate a frame at a given timestamp.

Parameters:

timestamp – Time to generate frame at.

Returns:

A motion compensated frame at given time point.

Private Members

PixelPredictor predictor
dv::kinematics::LinearTransformerf transformer
std::unique_ptr<Accumulator> accumulator
std::map<int64_t, float> depths
float constantDepth = 3.f
dv::EventStore eventBuffer
int64_t storageDuration = 5000000LL
const int64_t samplingPeriod = 200LL
MotionCompensator::Info info
template<class MainStreamType, class ...AdditionalTypes>
class MultiStreamSlicer
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/multi_stream_slicer.hpp>

MultiStreamSlicer takes multiple streams of timestamped data, slices data with configured intervals and calls a given callback method on each interval. It is an extension of StreamSlicer class that can synchronously slice multiple streams. Each stream has to be named uniquely, the name is carried over to the callback method to identify each stream.

The class relies heavily on templating, so it supports different containers of data, as long as the container is an iterable and each element contains an accessible timestamp in microsecond format.

The slicing is driven by the main stream, which needs to be specified during construction time. The type of the main stream is the first template argument and the name for the main stream is provided as the constructor’s first argument.

By default, these types are supported without additional configuration: dv::EventStore, dv::EventPacket, dv::TriggerPacket, std::vector<dv::Trigger>, dv::IMUPacket, std::vector<dv::IMU>, std::vector<dv::Frame>. Additional types can be supported by specifying them as additional template parameters.

Template Parameters:
  • MainStreamType – The type of the main stream.

  • AdditionalTypes – Parameter pack to specify an arbitrary number of additional stream types to be supported.

Public Types

using InputType = std::variant<MainType, dv::EventStore, dv::EventPacket, dv::IMUPacket, dv::TriggerPacket, std::vector<dv::Frame>, std::vector<dv::IMU>, std::vector<dv::Trigger>, AdditionalTypes...>

Alias for the variant that holds a packet type.

Public Functions

inline explicit MultiStreamSlicer(std::string mainStreamName)

Initialize the multi-stream slicer, provide the type of the main stream and a name for the main stream. The slicing is performed by applying a typical slicer on the main stream, all other streams follow it. When a slicing window executes, the slicer extracts the corresponding data from all the other streams and calls a registered callback method for data processing.

Main stream is used to evaluate the jobs, but it also waits for the other types of data to arrive. The callbacks are not executed until all data has arrived on all streams.

By default, these types are supported without additional configuration: dv::EventStore, dv::EventPacket, dv::TriggerPacket, std::vector<dv::Trigger>, dv::IMUPacket, std::vector<dv::IMU>, std::vector<dv::Frame>. Additional types can be supported by specifying them as additional template parameters.

Parameters:

mainStreamName – Name of the main stream.

template<class DataType>
inline void addStream(const std::string &streamName)

Add a stream to the slicer.

Template Parameters:

DataType – Data packet type of the stream.

Parameters:

streamName – Name for the stream.

template<class DataType>
inline void accept(const std::string &streamName, const DataType &data)

Accept incoming data for a stream and evaluate processing jobs. Can be either a packet or a single timestamped element of the stream.

Parameters:
  • streamName – Name of the stream.

  • data – Incoming data, either a data packet or timestamp data element.

Throws:

RuntimeError – Exception is thrown if passed data type does not match the stream data type.

inline int doEveryTimeInterval(const dv::Duration interval, std::function<void(const dv::TimeWindow&, const MapOfVariants&)> callback)

Register a callback to be performed at a given interval. Data is passed as an argument to the method. Callback method passes TimeWindow parameter along the data for the callback to be aware of time slicing windows.

Parameters:
  • interval – Interval at which the callback has to be executed.

  • callback – Callback method that is called at the given interval, receives time window information and sliced data.

Returns:

An id that can be used to modify this job.

inline int doEveryTimeInterval(const dv::Duration interval, std::function<void(const MapOfVariants&)> callback)

Register a callback to be performed at a given interval. Data is passed as an argument to the method.

Parameters:
  • interval – Interval at which the callback has to be executed.

  • callback – Callback method that is called at the given interval.

Returns:

An id that can be used to modify this job.
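
A sketch of a two-stream setup sliced every 10 milliseconds; the class is assumed to live in the dv namespace, and data is retrieved from the sliced map with std::get on the stored variants.

dv::MultiStreamSlicer<dv::EventStore> slicer("events");
slicer.addStream<std::vector<dv::IMU>>("imu");

slicer.doEveryTimeInterval(dv::Duration(10'000), [](const auto &data) {
        const auto &events = std::get<dv::EventStore>(data.at("events"));
        const auto &imu    = std::get<std::vector<dv::IMU>>(data.at("imu"));
        // process 10 ms of synchronized data here
});

// Feed incoming data; callbacks fire once all streams have caught up.
slicer.accept("events", incomingEvents);
slicer.accept("imu", incomingImu);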

inline int doEveryNumberOfElements(const size_t n, std::function<void(const dv::TimeWindow&, const MapOfVariants&)> callback, const TimeSlicingApproach timeSlicingApproach = TimeSlicingApproach::BACKWARD)

Adds a number-of-elements triggered job to the Slicer. A job is defined by its interval and callback function. The slicer calls the callback function every time n elements are added to the stream buffer, with the corresponding data. The (cpu) time interval between individual calls to the function depends on the physical event rate as well as the bulk sizes of the incoming data.

The timeSlicingApproach parameter is an enum that defines the timing approach for multi-stream slicing by number. Slicing by number is performed by slicing the main stream by a given number of elements. Secondary streams are sliced by the time window of the numbered slice, which introduces gaps between two number slices; the gap data can be assigned either to the current or to the next slice, and this enum controls that assignment. The backwards approach assigns all gap data between the previous slice end time and the current slice start time to the current slice, while the forwards approach assigns the gap data between the current slice end time and the next slice start time to the current slice. Forwards slice timing results in a processing delay of exactly one slice, since it requires waiting for the next slice to happen in order to correctly retrieve the next slice start time. Backwards slicing does not wait for any additional data and processes everything immediately.

Parameters:
  • n – the interval (in number of elements) in which the callback should be called.

  • callback – the callback function that gets called on the data every interval.

  • timeSlicingApproach – Select approach for handling secondary stream gap data.

Returns:

A handle to uniquely identify the job.

inline int doEveryNumberOfElements(const size_t n, std::function<void(const MapOfVariants&)> callback, const TimeSlicingApproach timeSlicingApproach = TimeSlicingApproach::BACKWARD)

Adds a number-of-elements triggered job to the Slicer. A job is defined by its interval and callback function. The slicer calls the callback function every time n elements are added to the stream buffer, with the corresponding data. The (cpu) time interval between individual calls to the function depends on the physical event rate as well as the bulk sizes of the incoming data.

The timeSlicingApproach parameter is an enum that defines the timing approach for multi-stream slicing by number. Slicing by number is performed by slicing the main stream by a given number of elements. Secondary streams are sliced by the time window of the numbered slice, which introduces gaps between two number slices; the gap data can be assigned either to the current or to the next slice, and this enum controls that assignment. The backwards approach assigns all gap data between the previous slice end time and the current slice start time to the current slice, while the forwards approach assigns the gap data between the current slice end time and the next slice start time to the current slice. Forwards slice timing results in a processing delay of exactly one slice, since it requires waiting for the next slice to happen in order to correctly retrieve the next slice start time. Backwards slicing does not wait for any additional data and processes everything immediately.

Parameters:
  • n – the interval (in number of elements) in which the callback should be called.

  • callback – the callback function that gets called on the data every interval.

  • timeSlicingApproach – Select approach for handling secondary stream gap data.

Returns:

A handle to uniquely identify the job.
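
A sketch of the number-based variant using the time-window callback; gap handling defaults to the backwards approach.

slicer.doEveryNumberOfElements(10000, [](const dv::TimeWindow &window, const auto &data) {
        const auto &events = std::get<dv::EventStore>(data.at("events"));
        // `window` covers the time span of these 10000 main-stream elements
});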

inline void modifyTimeInterval(const int jobId, const dv::Duration timeInterval)

Modify the execution interval of a job.

Parameters:
  • jobId – Callback id that is received from callback registration.

  • timeInterval – New time interval to be executed.

Throws:

invalid_argument – Exception is thrown if trying to modify a number based slicing job.

inline void modifyNumberInterval(const int jobId, const size_t n)

Modify the execution number of elements of a job.

Parameters:
  • jobId – Job id that is received from callback registration.

  • n – New number of elements to slice for the given job id.

Throws:

invalid_argument – Exception is thrown if trying to modify a time based slicing job.

inline bool hasJob(const int jobId) const

Returns true if the slicer contains the slice-job with the provided id

Parameters:

jobId – the id of the slice-job in question

Returns:

true, if the slicer contains the given slice-job

inline void removeJob(const int jobId)

Removes the given job from the list of current jobs.

Parameters:

jobId – The job id to be removed

inline void setStreamSeekTime(const std::string &streamName, const int64_t seekTimestamp)

Update a stream’s seek time manually and evaluate jobs.

Data synchronization is automatically inferred from received data. This works well with data streams that produce data at guaranteed periodic intervals. For aperiodic data streams, which produce data spontaneously, manual synchronization is required. This method allows you to manually instruct the slicer that the given stream has provided data up to, but not including, the given seek timestamp, even if there was no data. The slicer is then able to progress other streams up to the given time, since it assumes no data will ever arrive for this stream before that point. Be sure to call this method only when you are certain no data will arrive; otherwise that data can be lost.

Parameters:
  • streamName – Name of the stream.

  • seekTimestamp – Seek time for this stream; all data until this time has been provided to the slicer.

Protected Attributes

int64_t mMainBufferSeekTime = -1

Main buffer seek time; this is the timestamp of the last data fed into the main slicer.

std::map<int, SliceJob> mSliceJobs

Storage container for configured slice jobs.

int32_t mHashCounter = 0
std::map<int32_t, int32_t> mMapFromSliceJobIdsToMainSlicerIds

Map from multi-stream slicer job ids to main stream slicer job ids. This is needed since it is not known a priori how job ids are assigned by the main stream slicer.

std::map<std::string, InputType> mBuffer

Buffered data that is in queue for slicing.

std::map<std::string, int64_t> mLastReceivedBufferTimestamps

Placeholder for manually provided seek timestamp of stream seek times.

std::string mMainStreamName

Name of the main stream.

dv::StreamSlicer<MainStreamType> mMainSlicer

Slicer for the main stream, all other streams follow the main stream slicer.

Private Types

using MainType = typename std::conditional_t<dv::concepts::is_type_one_of<MainStreamType, dv::EventStore, dv::EventPacket, dv::IMUPacket, dv::TriggerPacket, std::vector<dv::Frame>, std::vector<dv::IMU>, std::vector<dv::Trigger>, AdditionalTypes...>, std::monostate, MainStreamType>

Private Functions

inline int64_t getMinLastBufferTimestamps()

Get the minimum value of the last received buffer timestamps.

Returns:

minimum last received buffer timestamp.

inline int64_t getMinEvaluatedJobTime()

Get the minimum of the last evaluated job times. This is helpful for determining which data to remove from the internal buffer as any data before this minimum value is no longer needed and can, therefore, be discarded

Returns:

minimum of the last evaluated job times.

inline void evaluate()

Evaluate the current state of the slicer. Performs data book-keeping and executes the callback methods.

Private Static Functions

template<class VectorType>
static inline VectorType sliceVector(const int64_t start, const int64_t end, const VectorType &packet)

Slice a vector type within given time bounds [start, end). Start time is inclusive, end time is exclusive.

Template Parameters:

VectorType

Parameters:
  • start – Start timestamp

  • end – End timestamp

  • packet – Packet of a vector type

Returns:

Copy of the data within the bounds

template<class PacketType>
static inline PacketType slicePacketSpecific(const int64_t start, const int64_t end, const PacketType &packet)

Templated method for packet slicing. Returns the data slice between given timestamps. Start time is inclusive, end time is exclusive.

Template Parameters:

PacketType

Parameters:
  • start – Start timestamp

  • end – End timestamp

  • packet – Packet of data

Returns:

Copy of the data within the bounds

static inline InputType slicePacket(const int64_t start, const int64_t end, const InputType &packet)

Templated method for packet contained in a variant. Returns the data slice between given timestamps. Start time is inclusive, end time is exclusive.

Parameters:
  • start – Start of time range.

  • end – End of time range.

  • packet – Input data packet.

Returns:

Sliced data from the packet according to given time ranges.

template<class PacketType>
static inline void mergePackets(const PacketType &from, PacketType &into)

Merge successive packets; this copies data from one packet into another. Performs a shallow copy if possible.

Template Parameters:

PacketType

Parameters:
  • from – Source packet

  • into – Destination packet

template<class PacketType>
static inline void eraseUpToIterable(const int64_t timeLimit, PacketType &packet)

Erase data within the packet up to the given time point. Specific implementation for vector containers.

Template Parameters:

PacketType

Parameters:
  • timeLimit – Timestamp to delete until, this is exclusive

  • packet – Packet to modify

template<class PacketType>
static inline void eraseUpTo(const int64_t timeLimit, PacketType &packet)

Erase data within the packet up to the given time point.

Template Parameters:

PacketType

Parameters:
  • timeLimit – Timestamp to delete until, this is exclusive

  • packet – Packet to modify

template<class PacketType>
static inline dv::TimeWindow getPacketTimeWindow(const PacketType &packet)

Retrieve highest and lowest timestamps of a given packet

Template Parameters:

PacketType

Parameters:

packet

Returns:

Time window containing start and end timestamps.

template<class PacketType>
static inline bool isPacketEmpty(const PacketType &packet)

Check if a packet is empty.

Template Parameters:

PacketType

Parameters:

packet

Returns:

True if the given packet is empty, false otherwise.

class NetworkReader : public dv::io::InputBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/network_reader.hpp>

Network capture class. Connect to a TCP or a local socket server providing a data stream. The class provides a single data stream per network capture.
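
A minimal connection sketch; the address, the port, and the presence of a server streaming event data are assumptions.

#include <dv-processing/io/network_reader.hpp>

int main() {
    // Connect to a hypothetical TCP server streaming AEDAT4 event data.
    dv::io::NetworkReader reader("127.0.0.1", 10101);

    while (reader.isRunning()) {
        // Non-blocking read; std::nullopt is returned when no data is available yet.
        if (const auto events = reader.getNextEventBatch(); events.has_value()) {
            // Process events->size() events here.
        }
    }

    return 0;
}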

Public Functions

inline NetworkReader(const std::string_view ipAddress, const uint16_t port)

Initialize a network capture object, it will connect to a given TCP port with given IP address.

Parameters:
  • ipAddress – IP address of the target TCP server.

  • port – TCP port number.

inline NetworkReader(const std::string_view ipAddress, const uint16_t port, boost::asio::ssl::context &&encryptionContext)

Initialize an encrypted network capture object, it will connect to a given TCP port with given IP address. Provide an encryption context that is preconfigured, prefer using existing dv::io::encrypt::defaultEncryptionClient() method for configuring the encryption context.

Parameters:
  • ipAddress – IP address of the target TCP server.

  • port – TCP port number.

  • encryptionContext – Preconfigured encryption context.

inline explicit NetworkReader(const std::filesystem::path &socketPath)

Initialize a network capture object, it will connect to a given UNIX socket with a given file system path.

Parameters:

socketPath – Path to the UNIX socket.

inline ~NetworkReader() override

Destructor - disconnects from network resource, stops threads and frees any buffered data.

inline virtual std::optional<dv::EventStore> getNextEventBatch() override

Read next event batch. This is a non-blocking method; if there is no data to read, it will return std::nullopt.

Returns:

Next batch of events, std::nullopt if no data received from last read or the event stream is not available.

inline virtual std::optional<dv::Frame> getNextFrame() override

Read next frame. This is a non-blocking method; if there is no data to read, it will return std::nullopt.

Returns:

Next frame, std::nullopt if no data received from last read or the frame stream is not available.

inline virtual std::optional<std::vector<dv::IMU>> getNextImuBatch() override

Read next IMU measurement batch. This is a non-blocking method; if there is no data to read, it will return std::nullopt.

Returns:

Next batch of IMU measurements, std::nullopt if no data received from last read or the IMU stream is not available.

inline virtual std::optional<std::vector<dv::Trigger>> getNextTriggerBatch() override

Read next trigger batch. This is a non-blocking method; if there is no data to read, it will return std::nullopt.

Returns:

Next batch of triggers, std::nullopt if no data received from last read or the trigger stream is not available.

inline virtual std::optional<cv::Size> getEventResolution() const override

Retrieve the event sensor resolution. The method returns std::nullopt if event stream is not available or the metadata does not contain resolution.

Returns:

Event sensor resolution or std::nullopt if not available.

inline virtual std::optional<cv::Size> getFrameResolution() const override

Retrieve the frame sensor resolution. The method returns std::nullopt if frame stream is not available or the metadata does not contain resolution.

Returns:

Frame sensor resolution or std::nullopt if not available.

template<class PacketType>
inline std::shared_ptr<PacketType> getNextPacket()

Read the next packet, given its type.

The given type must match the stream type exactly (it must be a flatbuffer generated type). Returns nullptr if no data is available for reading or stream of such type is not available.

Template Parameters:

PacketType – Stream packet type, must be a flatbuffer type and must match the stream type exactly.

Returns:

Shared pointer to a packet of data, or nullptr if unavailable.
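
A hedged illustration of a typed packet read, assuming the connected stream's exact flatbuffer type is dv::EventPacket and that reader is a connected NetworkReader instance.

if (const auto packet = reader.getNextPacket<dv::EventPacket>(); packet != nullptr) {
    // packet->elements is assumed to hold the events of this batch.
}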

inline virtual bool isEventStreamAvailable() const override

Check whether an event stream is available in this capture class.

Returns:

True if an event stream is available; false otherwise.

inline virtual bool isFrameStreamAvailable() const override

Check whether a frame stream is available in this capture class.

Returns:

True if a frame stream is available; false otherwise.

inline virtual bool isImuStreamAvailable() const override

Check whether an IMU data stream is available in this capture class.

Returns:

True if an IMU data stream is available; false otherwise.

inline virtual bool isTriggerStreamAvailable() const override

Check whether a trigger stream is available in this capture class.

Returns:

True if a trigger stream is available; false otherwise.

inline virtual std::string getCameraName() const override

Get camera name, which is a combination of the camera model and the serial number.

Returns:

String containing the camera model and serial number separated by an underscore character.

inline virtual bool isRunning() const override

Check whether any input data streams have terminated. For a live camera this should check if the device is still connected and functioning, while for a recording file this should check if any of the data streams have reached end-of-file (EOF). For a network input, this indicates the network stream is still connected.

Returns:

True if data read on all streams is still possible, false otherwise.

inline virtual bool isRunning(const std::string_view streamName) const override

Check whether the input data stream with the specified name is still active.

Returns:

True if data read on this stream is possible, false otherwise.

inline virtual bool isRunningAny() const override

Check whether any input data streams are still available. For a live camera this should check if the device is still connected and functioning and at least one data stream is active (different than isRunning()), while for a recording file this should check if any of the data streams have not yet reached end-of-file (EOF) and are still readable. For a network input, this indicates the network stream is still connected.

Returns:

True if data read on at least one stream is still possible, false otherwise.

template<class PacketType>
inline bool isStreamAvailable() const

Check whether a stream of given type is available.

The given type must match the stream type exactly (it must be a flatbuffer generated type). Returns false if a stream of such type is not available.

Template Parameters:

PacketType – Stream packet type, must be a flatbuffer type and must match the stream type exactly.

Returns:

True if stream of a given type is available, false otherwise.

inline virtual bool isStreamAvailable(const std::string_view streamName) const override

Check whether a stream with given name is available.

Returns:

True if data stream is available, false otherwise.

inline void close()

Explicitly close the communication socket; receiving data is not going to be possible after this method call.

inline const dv::io::Stream &getStreamDefinition() const

Get the stream definition object, which describes the data stream available from this reader.

Returns:

Data stream definition object.

Private Types

using PacketQueue = boost::lockfree::spsc_queue<dv::types::TypedObject*>

Private Functions

inline void readClbk(std::vector<std::byte> &data, const int64_t)

Read block of data from the network socket.

Parameters:

data – Container for data that is going to be read.

inline void connectTCP(const std::string_view ipAddress, const uint16_t port, const bool tlsEnabled = false)

Initiate connection to the given IP address and port.

Parameters:
  • ipAddress – Ip address, dot separated (in format “0.0.0.0”)

  • port – TCP port number

  • tlsEnabled – Enable TLS encryption

inline void connectUNIX(const std::filesystem::path &socketPath)

Initiate a connection to UNIX socket under given filesystem path.

Parameters:

socketPath – Path to a socket.

inline void readThread()
inline void initializeReader()

Private Members

std::function<void(std::vector<std::byte>&, const int64_t)> mReadHandler = std::bind_front(&NetworkReader::readClbk, this)

Callback method that calls read method of the socket.

boost::asio::io_context mIOService

IO service context.

std::unique_ptr<network::SocketBase> mSocket = nullptr

Socket to contain the connection instance.

asioSSL::context mTLSContext = asioSSL::context(asioSSL::context::method::tlsv12_client)

Decryption context.

bool mTLSEnabled

Whether TLS encryption is enabled.

dv::io::Reader mAedat4Reader

AEDAT4 reader.

dv::io::Stream mStream

Data stream container - one per capture.

std::string mCameraName

Name of the camera producing the stream.

PacketQueue mPacketQueue = PacketQueue(1000)

Incoming packet queue.

std::thread mReadingThread

Reading thread.

std::atomic<bool> mKeepReading = true

Atomic bool used to stop the reading thread.

std::atomic<bool> mExceptionThrown = false

Boolean value that indicates whether an exception was thrown on the reading thread.

std::exception_ptr mException = nullptr

Pointer that holds the thrown exception; mExceptionThrown contains a thread-safe flag indicating that an exception was thrown.

class NetworkWriter : public dv::io::OutputBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/network_writer.hpp>

Network server class for streaming AEDAT4 serialized data types.
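
A minimal server sketch; the dv::io::Stream::EventStream factory, the camera name, the port and the sample event values are assumptions.

#include <dv-processing/core/core.hpp>
#include <dv-processing/io/network_writer.hpp>

int main() {
    // Define a single event stream for a hypothetical 640x480 sensor.
    // dv::io::Stream::EventStream is an assumed factory helper.
    const auto stream = dv::io::Stream::EventStream(0, "events", "DVXplorer_sample", cv::Size(640, 480));

    // Bind a non-encrypted server on all interfaces; clients can connect with dv::io::NetworkReader.
    dv::io::NetworkWriter writer("0.0.0.0", 10101, stream);

    // Send one illustrative event (timestamp in microseconds).
    dv::EventStore events;
    events.emplace_back(1000000, 10, 10, true);
    writer.writeEvents(events);

    return 0;
}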

Public Types

using ErrorMessageCallback = std::function<void(const boost::system::error_code&, const std::string_view)>

Public Functions

inline NetworkWriter(const std::string_view ipAddress, const uint16_t port, const dv::io::Stream &stream, const size_t maxClientConnections = 10, ErrorMessageCallback messageCallback = [](const boost::system::error_code &, const std::string_view) { })

Create a non-encrypted server that listens for connections on a given IP address. Supports multiple clients.

Parameters:
  • ipAddress – IP address to bind the server.

  • port – Port number.

  • stream – AEDAT4 stream definition.

  • maxClientConnections – Maximum number of client connections supported by this instance.

  • messageCallback – Callback to handle any error messages received by the client connections.

inline NetworkWriter(const std::string_view ipAddress, const uint16_t port, const dv::io::Stream &stream, boost::asio::ssl::context &&encryptionContext, const size_t maxClientConnections = 10, ErrorMessageCallback messageCallback = [](const boost::system::error_code &, const std::string_view) { })

Create an encrypted server that listens for connections on a given IP address. Supports multiple clients.

Parameters:
  • ipAddress – IP address to bind the server.

  • port – Port number.

  • stream – AEDAT4 stream definition.

  • encryptionContext – Preconfigured encryption context, use either dv::io::encrypt::defaultEncryptionServer() to create the context or configure custom encryption context. When a client connects to the server, it will run handshake, during which client certificates will be validated, if the handshake fails, connection is terminated.

  • maxClientConnections – Maximum number of client connections supported by this instance.

  • messageCallback – Callback to handle any error messages received by the client connections.

inline NetworkWriter(const std::filesystem::path &socketPath, const dv::io::Stream &stream, const size_t maxClientConnections = 10, ErrorMessageCallback messageCallback = [](const boost::system::error_code &, const std::string_view) { })

Create a local socket server. Provide a path to the socket; if a file already exists at the given path, the connection will fail by throwing an exception. The given socket path must not point to an existing socket file. If such a file might exist, it is up to the user of this class to decide whether it is safe to remove any existing socket files before binding, or whether the class should not bind to the path at all.

Parameters:
  • socketPath – Path to a socket file, must be a non-existent path.

  • stream – AEDAT4 stream definition.

  • maxClientConnections – Maximum number of client connections supported by this instance.

  • messageCallback – Callback to handle any error messages received by the client connections.
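
A hedged sketch of guarding against a stale socket file before binding. Whether removing an existing file is safe is application-specific; the stream definition mirrors the earlier sketch and is equally an assumption.

#include <filesystem>

const std::filesystem::path socketPath = "/tmp/dv-events.sock";
if (std::filesystem::exists(socketPath)) {
    // Only remove the file if it is certain that no other server is using it.
    std::filesystem::remove(socketPath);
}

// dv::io::Stream::EventStream is an assumed factory helper.
const auto stream = dv::io::Stream::EventStream(0, "events", "DVXplorer_sample", cv::Size(640, 480));
dv::io::NetworkWriter writer(socketPath, stream);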

inline ~NetworkWriter() override

Closes the socket, frees allocated memory, and removes any queued packets from write queue.

inline virtual void writeEvents(const EventStore &events) override

Write an event store to the network stream.

Parameters:

events – Data to be sent out.

inline virtual void writeFrame(const dv::Frame &frame) override

Write a frame image to the network stream.

Parameters:

frame – Data to be sent out.

inline virtual void writeImu(const std::vector<dv::IMU> &imu) override

Write IMU data to the socket.

Parameters:

imu – Data to be sent out.

inline virtual void writeTriggers(const std::vector<dv::Trigger> &triggers) override

Write trigger data to the network stream.

Parameters:

triggers – Data to be sent out.

template<class PacketType>
inline void writePacket(PacketType &&packet)

Write a flatbuffer packet to the network stream.

Template Parameters:

PacketType – Type of the packet, must satisfy the dv::concepts::FlatbufferPacket concept.

Parameters:

packet – Data to write.

inline virtual std::string getCameraName() const override

Get camera name. It is looked up from the stream definition during construction.

Returns:

inline size_t getQueuedPacketCount() const

Get number of packets in the write queue.

Returns:

Number of packets in the write queue.

inline size_t getClientCount() const

Get number of active connected clients.

Returns:

Number of active connected clients.

Private Types

using WriteQueue = boost::lockfree::spsc_queue<std::shared_ptr<dv::types::TypedObject>>

Private Functions

template<class SocketType>
inline void acceptStart()
inline void writePacketToClients(const std::shared_ptr<dv::types::TypedObject> &packet)
inline void ioThread()
inline void connectTCP(const std::string_view ipAddress, const uint16_t port)
inline void connectUNIX(const std::filesystem::path &socketPath)
inline void generateHeaderContent(const dv::io::Stream &stream)
inline void removeClient(const Connection *const client)

Private Members

std::string mCameraName
size_t mMaxConnections
asio::io_context mIoService
std::unique_ptr<asioTCP::acceptor> mAcceptorTcp = nullptr
std::unique_ptr<asioUNIX::acceptor> mAcceptorUnix = nullptr
asioSSL::context mTLSContext = asioSSL::context(asioSSL::context::method::tlsv12_server)
bool mTLSEnabled
mutable std::mutex mClientsMutex
std::vector<Connection*> mClients

The client list holds raw pointers that are self-owned; read the Connection class documentation for more details.

std::atomic<size_t> mQueuedPackets = 0
dv::io::Writer mAedat4Writer
std::string mInfoNode
std::atomic<bool> mShutdownRequested = false
std::thread mIOThread
int32_t mStreamId = 0
std::filesystem::path mSocketPath
WriteQueue mWriteQueue = WriteQueue(1024)
ErrorMessageCallback mErrorMessageHandler

Error message handler, by default: NOOP.

class NoneCompressionSupport : public dv::io::compression::CompressionSupport

Public Functions

inline explicit NoneCompressionSupport(const CompressionType type)
inline virtual void compress(dv::io::support::IODataBuffer &packet) override
class NoneDecompressionSupport : public dv::io::compression::DecompressionSupport

Public Functions

inline explicit NoneDecompressionSupport(const CompressionType type)
inline virtual void decompress(std::vector<std::byte> &source, std::vector<std::byte> &target) override
class NoRedetection : public dv::features::RedetectionStrategy
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/redetection_strategy.hpp>

No redetection strategy.

Public Functions

inline virtual bool decideRedetection(const TrackerBase&) override

Do not perform redetection.

Returns:

Just return false always.

struct NullPointer : public dv::exceptions::info::EmptyException
struct Observation : public flatbuffers::NativeTable

Public Types

typedef ObservationFlatbuffer TableType

Public Functions

inline Observation()
inline Observation(int32_t _trackId, int32_t _cameraId, const std::string &_cameraName, int64_t _timestamp)

Public Members

int32_t trackId
int32_t cameraId
std::string cameraName
int64_t timestamp

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()
struct ObservationBuilder

Public Functions

inline void add_trackId(int32_t trackId)
inline void add_cameraId(int32_t cameraId)
inline void add_cameraName(flatbuffers::Offset<flatbuffers::String> cameraName)
inline void add_timestamp(int64_t timestamp)
inline explicit ObservationBuilder(flatbuffers::FlatBufferBuilder &_fbb)
ObservationBuilder &operator=(const ObservationBuilder&)
inline flatbuffers::Offset<ObservationFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct ObservationFlatbuffer : private flatbuffers::Table

Public Types

typedef Observation NativeTableType

Public Functions

inline int32_t trackId() const

The tracking sequence ID that the landmark is observed by a camera.

inline int32_t cameraId() const

Arbitrary ID of the camera, this can be application specific.

inline const flatbuffers::String *cameraName() const

Name of the camera. Optional.

inline int64_t timestamp() const

Timestamp of the observation (µs).

inline bool Verify(flatbuffers::Verifier &verifier) const
inline Observation *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(Observation *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(Observation *_o, const ObservationFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<ObservationFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const Observation *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
template<typename _Scalar, int NX = Eigen::Dynamic, int NY = Eigen::Dynamic>
class OptimizationFunctor
#include </builds/inivation/dv/dv-processing/include/dv-processing/optimization/optimization_functor.hpp>

Basic functor class inherited by all contrast maximization functors. This functor is used by the Eigen/NumericalDiff class, which handles the non-linear optimization underlying the contrast maximization algorithm. For more information about contrast maximization please check “contrast_maximization_rotation.hpp” or “contrast_maximization_translation_and_depth.hpp”.

Template Parameters:
  • _Scalar – type of variable to optimize (e.g. int, float..).

  • NX – Number of input variables (note: all variables are stored as Nx1 vector of values)

  • NY – Number of output measurements (note: number of measurements needs to be at least as big as number of input variables - NX - otherwise the optimization problem cannot be solved.)
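
A hedged sketch of a concrete functor derived from this base; the dv::optimization namespace is an assumption, and the quadratic residuals are purely illustrative. A real contrast maximization functor would compute image-contrast based costs instead.

#include <dv-processing/optimization/optimization_functor.hpp>

#include <Eigen/Core>

// Illustrative residuals (x0 - 1, x1 - 2); the cost is minimal at (1, 2).
// The dv::optimization namespace is assumed here.
struct ExampleFunctor : public dv::optimization::OptimizationFunctor<float> {
    ExampleFunctor() : dv::optimization::OptimizationFunctor<float>(2, 2) {
    }

    int operator()(const Eigen::VectorXf &input, Eigen::VectorXf &cost) const override {
        cost(0) = input(0) - 1.f;
        cost(1) = input(1) - 2.f;
        return 1; // positive return indicates success, per the base class documentation
    }
};

Such a functor can then be wrapped in Eigen::NumericalDiff and passed to a non-linear solver such as Eigen::LevenbergMarquardt.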

Public Types

Values:

enumerator InputsAtCompileTime
enumerator ValuesAtCompileTime
typedef _Scalar Scalar
typedef Eigen::Matrix<Scalar, InputsAtCompileTime, 1> InputType
typedef Eigen::Matrix<Scalar, ValuesAtCompileTime, 1> ValueType
typedef Eigen::Matrix<Scalar, ValuesAtCompileTime, InputsAtCompileTime> JacobianType

Public Functions

virtual int operator()(const Eigen::VectorXf &input, Eigen::VectorXf &cost) const = 0

Base method for cost function implementation.

Parameters:
  • input – parameters to be optimized

  • cost – cost value updated at each iteration of the optimization.

Returns:

optimization result (positive if successful)

inline OptimizationFunctor(int inputs, int values)

Constructor for cost optimization parameters

Parameters:
  • inputs – number of inputs to be optimized

  • values – number of function evaluations for gradient computation

inline int inputs() const

Getter for the number of input parameters to be optimized.

Returns:

number of input parameters optimized.

inline int values() const

Getter for the number of function evaluations performed at each optimization iteration.

Returns:

number of function evaluations at each optimization iteration.

Private Members

int mInputs
int mValues
struct optimizationOutput

Public Members

int optimizationSuccessful
int iter
Eigen::VectorXf optimizedVariable
struct optimizationParameters

Public Members

float learningRate = float(1e-1)
float epsfcn = 0
float ftol = 0.000345267
float gtol = 0
float xtol = 0.000345267
int maxfev = 400
struct OutOfRange : public dv::exceptions::info::EmptyException
class OutputBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/output_base.hpp>

Output reader base class defining API interface for writing camera data into an IO resource.

Subclassed by dv::io::MonoCameraWriter, dv::io::NetworkWriter

Public Functions

virtual ~OutputBase() = default
virtual void writeEvents(const dv::EventStore &events) = 0

Write event data into the output.

Parameters:

events – Write events into the output.

virtual void writeFrame(const dv::Frame &frame) = 0

Write a frame into the output.

Parameters:

frame – Write a frame into the output.

virtual void writeImu(const std::vector<dv::IMU> &imu) = 0

Write imu data into the output.

Parameters:

imu – Write imu into the output.

virtual void writeTriggers(const std::vector<dv::Trigger> &triggers) = 0

Write trigger data into the output.

Parameters:

triggers – Write trigger into the output.

virtual std::string getCameraName() const = 0

Retrieve camera name of this writer output instance.

Returns:

Configured camera name.

struct OutputError

Public Types

using Info = ErrorInfo

Public Static Functions

static inline std::string format(const Info &info)
struct ParsedData

Public Functions

inline void clear()

Public Members

dv::EventPacket events = {}
std::vector<dv::Frame> frames = {}
dv::IMUPacket imu = {}
dv::TriggerPacket triggers = {}
class Parser : public dv::io::camera::parser::ParserBase

Public Functions

inline Parser(const SensorModel sensorModel, const cv::Size dvsResolutionDevice, const cv::Size apsResolutionDevice, const dv::io::camera::imu::ImuModel imuModel, const uint32_t dvsOrientation, const uint32_t apsOrientation, const uint32_t imuOrientation, const dv::PixelArrangement apsColorFilter)
inline bool getDVSFlipHorizontal() const
inline void setDVSFlipHorizontal(const bool flip_x)
inline bool getDVSFlipVertical() const
inline void setDVSFlipVertical(const bool flip_y)
inline bool getDVSConvertAllOn() const
inline void setDVSConvertAllOn(const bool convertAllOn)
inline bool getFrameFlipHorizontal() const
inline void setFrameFlipHorizontal(const bool flip_x)
inline bool getFrameFlipVertical() const
inline void setFrameFlipVertical(const bool flip_y)
inline dv::PixelArrangement getFrameColorFilter() const
inline ColorMode getFrameColorMode() const
inline void setFrameColorMode(const ColorMode mode)
inline bool getIMUFlipX() const
inline void setIMUFlipX(const bool flip_x)
inline bool getIMUFlipY() const
inline void setIMUFlipY(const bool flip_y)
inline bool getIMUFlipZ() const
inline void setIMUFlipZ(const bool flip_z)
inline virtual void parseData(const std::span<const uint8_t> buffer, const bool dataLost) override

Public Members

int64_t wrapAdd = {0}
cv::Size sizeDevice
bool invertXY = {false}
std::atomic<bool> flipXControl = {false}
std::atomic<bool> flipYControl = {false}
std::atomic<bool> convertAllOn = {false}
uint16_t lastY = {0}
cv::Size sizeUser
std::atomic<dv::PixelArrangement> colorFilterControl = {dv::PixelArrangement::MONO}
std::atomic<ColorMode> colorMode = {ColorMode::DEFAULT}
bool flipXSync = {false}
bool flipYSync = {false}
bool flipX = {false}
bool flipY = {false}
bool globalShutter = {true}
dv::PixelArrangement colorFilterSync = {dv::PixelArrangement::MONO}
ApsReadoutType currentReadoutType = {ApsReadoutType::RESET}
std::array<uint16_t, 2> countX = {}
std::array<uint16_t, 2> countY = {}
uint16_t expectedCountX = {0}
uint16_t expectedCountY = {0}
int64_t startOfExposureTimestamp = {0}
cv::Mat pixels
uint16_t tmpData = {0}
uint16_t update = {0}
uint16_t positionX = {0}
uint16_t positionY = {0}
uint16_t sizeX = {0}
uint16_t sizeY = {0}
struct dv::io::camera::parser::DAVIS::Parser roi
uint32_t currentFrameExposure = {0}
uint8_t tmpData = {0}
struct dv::io::camera::parser::DAVIS::Parser exposure
bool startPositionOdd = {false}
CDavisOffset direction = {CDavisOffset::INCREASING}
struct dv::io::camera::parser::DAVIS::Parser cDavisSupport
dv::io::camera::imu::ImuModel model
bool flipZ = {false}
std::atomic<bool> flipZControl = {false}
uint8_t type = {0}
uint8_t count = {0}
float accelScale = {}
float gyroScale = {}
dv::IMU currentEvent = {}

Private Types

enum class ApsReadoutType : bool

Values:

enumerator RESET
enumerator SIGNAL
enum class CDavisOffset : bool

Values:

enumerator INCREASING
enumerator DECREASING

Private Functions

inline void apsInitFrame()
inline void apsROIUpdateSizes()
inline void apsUpdateFrame(const uint8_t dataValue)
inline void cdavisUpdateFrame(const uint8_t dataValue)
inline bool apsEndFrame()

Private Members

struct dv::io::camera::parser::DAVIS::Parser mTimestamps
struct dv::io::camera::parser::DAVIS::Parser mDvs
struct dv::io::camera::parser::DAVIS::Parser mFrame
struct dv::io::camera::parser::DAVIS::Parser mImu
SensorModel mModel

Private Static Functions

static inline float calculateIMUAccelScale(const uint8_t imuAccelScale)
static inline float calculateIMUGyroScale(const uint8_t imuGyroScale)
static inline void frameFixCDavis(dv::Frame &frame, const dv::PixelArrangement colorFilter)
static inline void frameDebayer(dv::Frame &frame, const ColorMode colorMode, const dv::PixelArrangement colorFilter)
static inline void frameSplitIntoQuadrants(dv::Frame &frame)

Private Static Attributes

static constexpr uint16_t TS_WRAP_ADD = {0x8000}
static constexpr uint8_t IMU_TYPE_TEMP = {0x01}
static constexpr uint8_t IMU_TYPE_GYRO = {0x02}
static constexpr uint8_t IMU_TYPE_ACCEL = {0x04}
static constexpr uint8_t IMU_TOTAL_COUNT = {14}
static constexpr uint8_t APS_RESET_CUTOFF = {96}
class Parser : public dv::io::camera::parser::ParserBase

Public Functions

Parser() = default
inline bool getDVSFlipHorizontal() const
inline void setDVSFlipHorizontal(const bool flip_x)
inline bool getDVSFlipVertical() const
inline void setDVSFlipVertical(const bool flip_y)
inline bool getDVSConvertAllOn() const
inline void setDVSConvertAllOn(const bool convertAllOn)
inline virtual void parseData(const std::span<const uint8_t> buffer, const bool dataLost) override

Public Members

int64_t wrapAdd = {0}
std::atomic<bool> flipX = {true}
std::atomic<bool> flipY = {true}
std::atomic<bool> convertAllOn = {false}

Private Members

struct dv::io::camera::parser::DVS128::Parser mTimestamps
struct dv::io::camera::parser::DVS128::Parser mDvs

Private Static Attributes

static constexpr uint8_t DVS128_TIMESTAMP_WRAP_MASK = {0x80}
static constexpr uint8_t DVS128_TIMESTAMP_RESET_MASK = {0x40}
static constexpr uint8_t DVS128_POLARITY_SHIFT = {0}
static constexpr uint16_t DVS128_POLARITY_MASK = {0x0001}
static constexpr uint8_t DVS128_Y_ADDR_SHIFT = {8}
static constexpr uint16_t DVS128_Y_ADDR_MASK = {0x007F}
static constexpr uint8_t DVS128_X_ADDR_SHIFT = {1}
static constexpr uint16_t DVS128_X_ADDR_MASK = {0x007F}
static constexpr uint16_t DVS128_SYNC_EVENT_MASK = {0x8000}
static constexpr uint16_t TS_WRAP_ADD = {0x4000}
class Parser : public dv::io::camera::parser::ParserBase

Public Functions

inline explicit Parser(const cv::Size eventsResolution, const uint32_t dvsOrientation, const uint32_t imuOrientation)
inline bool getDVSFlipHorizontal() const
inline void setDVSFlipHorizontal(const bool flip_x)
inline bool getDVSFlipVertical() const
inline void setDVSFlipVertical(const bool flip_y)
inline bool getDVSDualBinning() const
inline void setDVSDualBinning(const bool dual_binning)
inline bool getIMUFlipX() const
inline void setIMUFlipX(const bool flip_x)
inline bool getIMUFlipY() const
inline void setIMUFlipY(const bool flip_y)
inline bool getIMUFlipZ() const
inline void setIMUFlipZ(const bool flip_z)
inline virtual void parseData(const std::span<const uint8_t> buffer, const bool dataLost) override

Public Members

int64_t wrapAdd = {0}
cv::Size size
bool flipX = {false}
bool flipY = {false}
std::atomic<bool> flipXControl = {false}
std::atomic<bool> flipYControl = {false}
std::atomic<bool> dualBinning = {false}
uint16_t lastX = {0}
uint16_t lastYG1 = {0}
uint16_t lastYG2 = {0}
bool flipZ = {false}
std::atomic<bool> flipZControl = {false}
uint8_t type = {0}
uint8_t count = {0}
uint8_t tmpData = {0}
float accelScale = {}
float gyroScale = {}
dv::IMU currentEvent = {}

Private Members

struct dv::io::camera::parser::DVXplorer::Parser mTimestamps
struct dv::io::camera::parser::DVXplorer::Parser mDvs
struct dv::io::camera::parser::DVXplorer::Parser mImu

Private Static Functions

static inline float calculateIMUAccelScale(const uint8_t imuAccelScale)
static inline float calculateIMUGyroScale(const uint8_t imuGyroScale)

Private Static Attributes

static constexpr uint16_t TS_WRAP_ADD = {0x8000}
static constexpr uint8_t IMU_TYPE_TEMP = {0x01}
static constexpr uint8_t IMU_TYPE_GYRO = {0x02}
static constexpr uint8_t IMU_TYPE_ACCEL = {0x04}
static constexpr uint8_t IMU_TOTAL_COUNT = {14}
class Parser : public dv::io::camera::parser::ParserBase

Public Functions

Parser() = default
inline void setFlipHorizontal(const bool flip)
inline bool getFlipHorizontal() const
inline void setFlipVertical(const bool flip)
inline bool getFlipVertical() const
inline virtual void parseData(const std::span<const uint8_t> buffer, const bool dataLost) override
inline void injectImu(dv::IMU imuData)

Public Members

bool flipHorizontal = {false}
bool flipVertical = {false}
int16_t columnAddress = {-1}
std::array<bool, NUM_GROUPS> groupsUsed = {}
std::array<std::array<uint8_t, EVENT_GROUPS_FAST>, 2> columnEvents = {}
size_t resetIndex = {0}
size_t lastCommittedEvents = {0}
int64_t reference = {-1}
int64_t referenceOverflow = {0}
int32_t lastReference = {-1}
int32_t lastUsedSub = {-1}
int64_t lastUsedReference = {-1}
std::atomic<bool> flipHorizontal = {false}
std::atomic<bool> flipVertical = {false}

Private Types

enum class State

Values:

enumerator WAIT_FOR_TIMESTAMP_REFERENCE
enumerator WAIT_FOR_START_OF_FRAME
enumerator WAIT_FOR_SMGROUP
enumerator WAIT_FOR_COLUMN_OR_SMGROUP

Private Functions

inline void aUpdateTimestampReference(const eTimestampReference timestampRef)
inline void aFrameStart(const eColumn column)
inline void aSetColumnData(const eColumn column)
inline void bitIndexToOutput(std::vector<dv::Event> &dst, const size_t *src, const bool polarity) const noexcept
inline void aTransformPreviousColumn()
inline void aGenerateEventsFromSMGroup(const eSMGroup mGroup)
inline void aResetEventFrame()
inline bool gIsIncreasingTimestamp(const eColumn column) const
inline bool gIsSameSubTimestamp(const eColumn column) const
inline bool gIsIncreasingColumnAddress(const eColumn column) const
inline bool gIsGroupParsedAlready(const eSMGroup mGroup) const
inline void processEvent(const eDataLost event)
inline void processEvent(const eTimestampReference event)
inline void processEvent(const eColumn event)
inline void processEvent(const eSMGroup event)

Private Members

struct dv::io::camera::parser::S5K231Y::Parser mDvs
struct dv::io::camera::parser::S5K231Y::Parser mTimestamps
struct dv::io::camera::parser::S5K231Y::Parser mControls
State mState = {State::WAIT_FOR_TIMESTAMP_REFERENCE}

Private Static Functions

static inline uint8_t reverseByte(const uint8_t n) noexcept
static inline bool gIsFrameStart(const eColumn column)

Private Static Attributes

static constexpr std::array<uint8_t, 16> REVERSE_LOOKUP_TABLE = {0x0, 0x8, 0x4, 0xC, 0x2, 0xA, 0x6, 0xE, 0x1, 0x9, 0x5, 0xD, 0x3, 0xB, 0x7, 0xF}
static constexpr int16_t NUM_GROUPS = {HEIGHT / 8}
static constexpr size_t EVENT_GROUPS_FAST = {64}
static constexpr size_t EVENT_GROUPS_REPEAT = {EVENT_GROUPS_FAST / sizeof(size_t)}
class Parser : public dv::io::camera::parser::ParserBase

Public Functions

Parser() = default
inline virtual void parseData(const std::span<const uint8_t> buffer, const bool dataLost) override
inline void injectImu(dv::IMU imuData)

Public Members

std::array<std::array<uint8_t, EVENT_GROUPS_FAST>, 2> columnEvents = {}
int16_t columnAddress = {-1}
uint8_t frameNumber = {0}
bool mirrorMode = {false}
std::array<bool, NUM_GROUPS> groupsUsed = {}
size_t resetIndex = {0}
size_t lastCommittedEvents = {0}
int64_t reference = {-1}
int64_t referenceOverflow = {0}
int32_t lastReference = {-1}

Private Types

enum class State

Values:

enumerator WAIT_FOR_TIMESTAMP_REFERENCE
enumerator WAIT_FOR_TIMESTAMP_SUB_UNIT
enumerator WAIT_FOR_COLUMN_START
enumerator WAIT_FOR_SMGROUP
enumerator WAIT_FOR_COLUMN_OR_SMGROUP_OR_FRAME_END

Private Functions

inline void aUpdateTimestampReference(const eTimestampReference timestampRef)
inline void aUpdateTimestampSubUnit(const eTimestampSubUnit timestampSub)
inline void aSetFrameNumber(const eColumn column)
inline void aSetColumnData(const eColumn column)
inline void bitIndexToOutput(std::vector<dv::Event> &dst, const size_t *src, const bool polarity) const noexcept
inline void aTransformPreviousColumn()
inline void aGenerateEventsFromSMGroup(const eSMGroup mGroup, const int16_t group2Address)
inline void aResetEventFrame()
inline bool gIsSameFrameNumber(const eColumn column) const
inline bool gVerifyFrameNumber(const eFrameEnd frameEnd) const
inline bool gIsGroupParsedAlready(const eSMGroup mGroup, const int16_t group2Address) const
inline void processEvent(const eDataLost event)
inline void processEvent(const eTimestampReference event)
inline void processEvent(const eTimestampSubUnit event)
inline void processEvent(const eColumn event)
inline void processEvent(const eSMGroup event)
inline void processEvent(const eFrameEnd event)

Private Members

struct dv::io::camera::parser::S5KRC1S::Parser mDvs
struct dv::io::camera::parser::S5KRC1S::Parser mTimestamps
State mState = {State::WAIT_FOR_TIMESTAMP_REFERENCE}

Private Static Functions

static inline bool gIsFrameStart(const eColumn column)

Private Static Attributes

static constexpr int16_t NUM_GROUPS = {HEIGHT / 8}
static constexpr size_t EVENT_GROUPS_FAST = {96}
static constexpr size_t EVENT_GROUPS_REPEAT = {EVENT_GROUPS_FAST / sizeof(size_t)}
class ParserBase

Subclassed by dv::io::camera::parser::DAVIS::Parser, dv::io::camera::parser::DVS128::Parser, dv::io::camera::parser::DVXplorer::Parser, dv::io::camera::parser::S5K231Y::Parser, dv::io::camera::parser::S5KRC1S::Parser

Public Functions

virtual ~ParserBase() = default
virtual void parseData(std::span<const uint8_t> buffer, bool dataLost) = 0
inline void setLogger(ParserLoggerCallback loggerFunction)
inline void setDataCommitCallback(ParserDataCommitCallback dataCommitCallback)
inline void setTimeInitCallback(ParserTimeInitCallback timeInitCallback)
inline void setTimeInterval(const std::chrono::microseconds timeInterval)
inline std::chrono::microseconds getTimeInterval() const
inline void setSystemOffset(const std::chrono::microseconds systemOffset)
inline std::chrono::microseconds getSystemOffset() const
inline void adjustTimestamps(const std::chrono::microseconds adjust)
inline std::chrono::microseconds getOutstandingTimestampAdjustment() const

Public Members

std::atomic<int64_t> commitInterval = {10000}
std::atomic<int64_t> systemOffset = {0}
std::atomic<int64_t> adjust = {0}
int64_t lastDevice = {0}
int64_t current = {0}
int64_t nextCommit = {-1}
std::atomic<bool> resetTimingAsync = {false}

Protected Functions

inline int64_t getCurrentTimestamp() const
template<bool STRICT_MONOTONIC_TIMESTAMP = true>
inline void updateTimestamp(const int64_t timestamp)

Update timestamp tracking with the latest timestamp from the device. Always implies a dataCommit() too.

Template Parameters:

STRICT_MONOTONIC_TIMESTAMP – whether the timestamps from the device increase strictly or not.

Parameters:

timestamp – latest timestamp from device.

inline void dataCommit()

Send all currently available data to the consumer, up to but not including the current timestamp. Usually called on timestamp update. All data previous to this timestamp should be inside the data buffers, but in some cases this is not possible, currently only the case with DAVIS frames.

inline void timestampInit(const std::chrono::microseconds firstTimestamp)

Call on first timestamp initialization and timestamp resets. This will ensure the system offset is set correctly. NOTE: callers must guarantee their own internal timestamp tracking is fully initialized and ready when calling this function!

Parameters:

firstTimestamp – first timestamp parsed from data stream from the device’s point-of-view.

Protected Attributes

ParserLoggerCallback mLogger = {}
ParsedData mBuffers = {}

Private Functions

inline void resetTiming()

Reset timestamping. Only called in the parser thread (via async flag)! systemOffset must have been set prior to calling this.

Private Members

ParserDataCommitCallback mDataCommitCallback = {}
ParserTimeInitCallback mTimeInitCallback = {}
struct dv::io::camera::parser::ParserBase mTimestamps
template<concepts::AddressableEvent EventType, class EventPacketType>
class PartialEventData
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/core.hpp>

INTERNAL USE ONLY Internal event container class that holds a shard of events. A PartialEventData holds a shared pointer to an EventPacket, which is the underlying data structure. The underlying data can either be const, in which case no addition is allowed, or non const, in which addition of new data is allowed. Slicing is allowed in both cases, as it only modifies the control structure. All the events in the partial have to be monotonically increasing in time. A PartialEventData can be sliced both from the front as well as from the back. By doing so, the memory footprint of the structure is not modified, just the internal bookkeeping pointers are readjusted. The PartialEventData keeps track of lowest as well as highest times of events in the structure.

The data PartialEventData points to can be shared between multiple PartialEventData, each with potentially different slicings.

Public Functions

inline explicit PartialEventData(const size_t capacity = 10000)

Creates a new PartialEventData shard. Allocates new memory on the heap to keep the data. Upon construction, the newly created object is the sole owner of the data.

Parameters:

capacity – Number of events this data partial can store.

inline explicit PartialEventData(std::shared_ptr<const EventPacketType> data)

Creates a new PartialEventData shard from existing const data. Copies the supplied shared_ptr into the structure, acquiring shared ownership of the supplied data.

Parameters:

data – The shared pointer to the data to which we want to obtain shared ownership

PartialEventData(const PartialEventData &other) = default

Copy constructor. Creates a shallow copy of other without copying the actual data over. As slicing does not alter the underlying data, the new copy may be sliced without affecting the original object.

Parameters:

other

inline iterator iteratorAtTime(const int64_t time) const

Returns an iterator to the first element that is bigger than the supplied timestamp. If every element is bigger than the supplied time, an iterator to the first element is returned (same as begin()). If all elements have a smaller timestamp than the supplied, the end iterator is returned (same as end()).

Parameters:

time – The requested time. The iterator will be the first element with a timestamp larger than this time.

Returns:

An iterator to the first element larger than the supplied time.

inline iterator begin() const

Returns an iterator to the first element of the PartialEventData. The iterator is according to the current slice and not to the underlying datastore. E.g. when slicing the shard from the front, the begin() will change.

Returns:

Returns an iterator at the beginning of the data partial

inline iterator end() const

Returns an iterator to one after the last element of the PartialEventData. The iterator is according to the current slice and not to the underlying datastore. E.g. when slicing the shard from the back, the result of end() will change.

Returns:

Returns an iterator at the end of the data partial

inline void sliceFront(const size_t number)

Slices off number events from the front of the PartialEventData. This operation just adjusts the bookkeeping of the data structure without actually modifying the underlying data representation. If there are not enough events left, a range_error exception is thrown.

Other instances of PartialEventData which share the same underlying data are not affected by this.

Parameters:

number – amount of events to be removed from the front.

inline void sliceBack(const size_t number)

Slices off number events from the back of the PartialEventData. This operation just adjusts the bookkeeping of the data structure without actually modifying the underlying data representation. If there are not enough events left, a range_error exception is thrown.

Other instances of PartialEventData which share the same underlying data are not affected by this.

Parameters:

number – amount of events to be removed from the back.

inline size_t sliceTimeFront(const int64_t time)

Slices off all the events that occur before the supplied time. The resulting data structure has a lowestTime > time where time is the supplied time.

This operation just adjusts the bookkeeping of the data structure without actually modifying the underlying data representation. If there are not enough events left, a range_error exception is thrown.

Other instances of PartialEventData which share the same underlying data are not affected by this.

Parameters:

time – the threshold time. All events <= time will be sliced off

Returns:

number of events that actually got sliced off as a result of this operation.

inline size_t sliceTimeBack(const int64_t time)

Slices off all the events that occur after the supplied time. The resulting data structure has a highestTime <= time where time is the supplied time.

This operation just adjusts the bookkeeping of the data structure without actually modifying the underlying data representation. If there are not enough events left, a range_error exception is thrown.

Other instances of PartialEventData which share the same underlying data are not affected by this.

Parameters:

time – the threshold time. All events > time will be sliced off

Returns:

number of events that actually got sliced off as a result of this operation.

inline void _unsafe_addEvent(const EventType &event)

UNSAFE OPERATION Copies the data of the supplied event into the underlying data structure and updates the internal bookkeeping to accommodate the event.

NOTE: This function does not perform any boundary checks. Any call to this function is expected to have performed the following boundary checks: canStoreMoreEvents() to see if there is space to accommodate the new event, and getHighestTime() has to be smaller than or equal to the new event’s timestamp, as we require events to be monotonically increasing.

Parameters:

event – The event to be added

inline void _unsafe_moveEvent(EventType &&event)

UNSAFE OPERATION Moves the data of the supplied event into the underlying data structure and updates the internal bookkeeping to accommodate the event.

NOTE: This function does not perform any boundary checks. Any call to this function is expected to have performed the following boundary checks: canStoreMoreEvents() to see if there is space to accommodate the new event, and getHighestTime() has to be smaller than or equal to the new event’s timestamp, as we require events to be monotonically increasing.

Parameters:

event – The event to be added

inline EventType &front()

Get a reference to the first available event in the partial.

Returns:

Reference to first element in the partial.

inline EventType &back()

Get a reference to the last available event in the partial.

Returns:

Reference to last element in the partial.

inline size_t getLength() const

The length of the current slice of data. This value can be in range [0; capacity].

Returns:

the current length of the slice in number of events.

inline int64_t getLowestTime() const

Gets the lowest timestamp of an event that is represented in this Partial. The lowest timestamp is always identical to the timestamp of the first event of the slice.

Returns:

The timestamp of the first event in the slice. This is also the lowest time present in this slice.

inline int64_t getHighestTime() const

Gets the highest timestamp of an event that is represented in this Partial. The highest timestamp is always identical to the timestamp of the last event of the slice.

Returns:

The timestamp of the last event in the slice. This is also the highest timestamp present in this slice.

inline const EventType &operator[](const size_t offset) const

Returns a reference to the element at the given offset of the slice.

Parameters:

offset – The offset in the slice of which element a reference should be obtained

Returns:

A reference to the object at the given offset

inline bool canStoreMoreEvents() const

Checks if it is safe to add more events to this partial. It is safe to add more events when the following conditions are fulfilled:

  • The partial does not represent const data. In that case, any modification of the underlying buffer is impossible.

  • The partial does not exceed the sharding count limit

  • The partial hasn’t been sliced from the back

If it has been sliced from the back, adding new events would put them in unreachable space.

Returns:

true if there is space available to store more events in this partial.

inline size_t availableCapacity() const

Amount of space still available in this data partial.

Returns:

Amount of events this data partial can store additionally.

inline bool merge(const PartialEventData &other)

Merge the other data partial into this one by copying the contents, if that is possible. If merge is not possible, the function returns false and does nothing.

Parameters:

other – Other data partial to be merged into this one.

Returns:

True if merge was successful, false otherwise.

Private Types

using iterator = typename std::vector<EventType>::const_iterator

Private Members

bool referencesConstData_
size_t start_
size_t length_
size_t capacity_
int64_t lowestTime_
int64_t highestTime_
std::shared_ptr<EventPacketType> modifiableDataPtr_
std::shared_ptr<const EventPacketType> data_

Friends

friend class dv::io::MonoCameraWriter
friend class dv::io::NetworkWriter
template<concepts::AddressableEvent EventType, class EventPacketType>
class PartialEventDataTimeComparator
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/core.hpp>

INTERNAL USE ONLY Comparator Functor that checks if a given time lies within bounds of the event packet

Public Functions

inline explicit PartialEventDataTimeComparator(const bool lower)
inline bool operator()(const PartialEventData<EventType, EventPacketType> &partial, const int64_t time) const

Returns true, if the comparator is set to not lower and the given time is higher than the highest timestamp of the partial, or when it is set to lower and the timestamp is higher than the lowest timestamp of the partial.

Parameters:
  • partial – The partial to be analysed

  • time – The time to be compared against

Returns:

true, if time is higher than either lowest or highest timestamp of partial depending on state

inline bool operator()(const int64_t time, const PartialEventData<EventType, EventPacketType> &partial) const

Returns true, if the comparator is set to not lower and the given time is higher than the lowest timestamp of the partial, or when it is set to lower and the timestamp is higher than the highest timestamp of the partial.

Parameters:
  • partial – The partial to be analysed

  • time – The time to be compared against

Returns:

true, if time is higher than either the lowest or the highest timestamp of the partial, depending on state

Private Members

const bool lower_
struct PixelDisparity
#include </builds/inivation/dv/dv-processing/include/dv-processing/depth/sparse_event_block_matcher.hpp>

Structure containing disparity results for a point of interest.

Public Functions

inline PixelDisparity(const cv::Point2i &coordinates, const bool valid, const std::optional<float> correlation = std::nullopt, const std::optional<float> score = std::nullopt, const std::optional<int32_t> disparity = std::nullopt, const std::optional<cv::Point2i> &templatePosition = std::nullopt, const std::optional<cv::Point2i> &matchedPosition = std::nullopt)

Initialize the disparity structure.

Parameters:
  • coordinates – Point of interest coordinates, this will contain same coordinates that were passed into the algorithm.

  • valid – Holds true if the disparity match is valid, false otherwise.

  • correlation – Pearson correlation value for the best matching block, if available. This value is in the range [-1.0; 1.0].

  • score – Matching score value, if available. This value is in the range [0.0; 1.0].

  • disparity – Disparity value in pixels, if available. The value is in the range [minDisparity; maxDisparity].

  • templatePosition – Requested coordinate of interest point in the left (rectified) image pixel space.

  • matchedPosition – Best match coordinate on the right (rectified) image pixel space.

Public Members

cv::Point2i coordinates

Point of interest coordinates, this will contain same coordinates that were passed into the algorithm.

bool valid

Holds true if the disparity match is valid, false otherwise.

std::optional<float> correlation

Pearson correlation value for the best matching block, if available. This value is in the range [-1.0; 1.0]. Correlation value of -1.0 will mean that matched patch is an inverse of the original template patch, 1.0 will be an equal match, 0.0 is no correlation. A positive value indicates a positive correlation between searched template patch and best match, which could be considered a good indication of a correct match.

std::optional<float> score

Standard score (Z-score) for the match, if available. The score is the number of standard deviations the highest probability value is above the mean of all probabilities of the matching method.

std::optional<int32_t> disparity

Disparity value in pixels, if available. The value is in the range [minDisparity; maxDisparity].

std::optional<cv::Point2i> templatePosition

Coordinates of the matching template on the left (rectified) image space. Set to std::nullopt if the template coordinates are out-of-bounds.

std::optional<cv::Point2i> matchedPosition

Coordinates of the matched template on the right (rectified) image space. Set to std::nullopt if a match cannot be reliably found, otherwise contains coordinates with the highest correlation match on the right side rectified camera pixel space.

class PixelMotionPredictor

Public Types

using SharedPtr = std::shared_ptr<PixelMotionPredictor>
using UniquePtr = std::unique_ptr<PixelMotionPredictor>

Public Functions

inline explicit PixelMotionPredictor(const camera::CameraGeometry::SharedPtr &cameraGeometry)

Construct pixel motion predictor class.

Parameters:

cameraGeometry – Camera geometry class instance containing intrinsic calibration of the camera sensor.

virtual ~PixelMotionPredictor() = default
inline dv::EventStore predictEvents(const dv::EventStore &events, const Transformationf &dT, const float depth) const

Apply delta transformation to event input and generate new transformed event store with new events that are within the new camera perspective (after applying delta transform).

Parameters:
  • events – Input events.

  • dT – Delta transformation to be applied.

  • depth – Scene depth.

Returns:

Transformed events.
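
A hedged sketch of predicting event positions under a small camera motion. The dv::kinematics namespace, the CameraGeometry constructor arguments, the identity default of Transformationf and all numeric values are assumptions; inputEvents is an existing dv::EventStore.

// Hypothetical pinhole intrinsics for a 640x480 sensor.
// The CameraGeometry constructor arguments and dv::kinematics namespace are assumed.
const auto geometry = std::make_shared<dv::camera::CameraGeometry>(640.f, 640.f, 320.f, 240.f, cv::Size(640, 480));

dv::kinematics::PixelMotionPredictor predictor(geometry);

// Identity delta transform and an assumed constant scene depth of 3 meters.
const dv::kinematics::Transformationf dT;
const dv::EventStore predicted = predictor.predictEvents(inputEvents, dT, 3.0f);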

template<concepts::Coordinate2DMutableIterable Output, concepts::Coordinate2DIterable Input>
inline Output predictSequence(const Input &points, const Transformationf &dT, const float depth) const

Apply delta transformation to coordinate input and generate new transformed coordinate array with new coordinates that are within the new camera perspective (after applying delta transform).

Parameters:
  • points – Input coordinate array.

  • dT – Delta transformation to be applied.

  • depth – Scene depth.

Returns:

Transformed point coordinates.

template<concepts::Coordinate2DConstructible Output, concepts::Coordinate2D Input>
inline Output predict(const Input &pixel, const Transformationf &dT, const float depth) const

Reproject given pixel coordinates using the delta transformation and depth.

Parameters:
  • pixel – Input pixel coordinates.

  • dT – Delta transformation.

  • depth – Scene depth.

Returns:

Transformed pixel coordinate using the delta transform, camera geometry and scene depth.

inline bool isUseDistortion() const

Is the distortion model enabled for the reprojection of coordinates.

Returns:

True if the distortion model is enabled, false otherwise.

inline void setUseDistortion(bool useDistortion_)

Enable or disable the usage of a distortion model.

Parameters:

useDistortion_ – Pass true to enable usage of the distortion model, false otherwise.

Private Members

const dv::camera::CameraGeometry::SharedPtr camera
bool useDistortion = false
struct Pose : public flatbuffers::NativeTable

Public Types

typedef PoseFlatbuffer TableType

Public Functions

inline Pose()
inline Pose(int64_t _timestamp, const Vec3f &_translation, const Quaternion &_rotation, const std::string &_referenceFrame, const std::string &_targetFrame)

Public Members

int64_t timestamp
Vec3f translation
Quaternion rotation
std::string referenceFrame
std::string targetFrame

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()

Friends

inline friend std::ostream &operator<<(std::ostream &os, const Pose &packet)
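
A small construction sketch using the constructor above (the Vec3f(x, y, z) and Quaternion(w, x, y, z) argument orders are assumptions):

// Build a pose sample: identity rotation, 1 m forward translation (argument
// orders for Vec3f and Quaternion are assumed).
const dv::Pose pose(
	1'000'000,                          // timestamp [µs]
	dv::Vec3f(0.f, 0.f, 1.f),           // translation
	dv::Quaternion(1.f, 0.f, 0.f, 0.f), // rotation (identity)
	"world",                            // reference frame
	"camera0");                         // target frame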
struct PoseBuilder

Public Functions

inline void add_timestamp(int64_t timestamp)
inline void add_translation(const Vec3f *translation)
inline void add_rotation(const Quaternion *rotation)
inline void add_referenceFrame(flatbuffers::Offset<flatbuffers::String> referenceFrame)
inline void add_targetFrame(flatbuffers::Offset<flatbuffers::String> targetFrame)
inline explicit PoseBuilder(flatbuffers::FlatBufferBuilder &_fbb)
PoseBuilder &operator=(const PoseBuilder&)
inline flatbuffers::Offset<PoseFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct PoseFlatbuffer : private flatbuffers::Table
#include </builds/inivation/dv/dv-processing/include/dv-processing/data/pose_base.hpp>

A struct holding timestamp and pose.

Public Types

typedef Pose NativeTableType

Public Functions

inline int64_t timestamp() const

Timestamp (µs).

inline const Vec3f *translation() const

Translational vector.

inline const Quaternion *rotation() const

Rotation quaternion.

inline const flatbuffers::String *referenceFrame() const

Name of the reference frame (transforming from)

inline const flatbuffers::String *targetFrame() const

Name of the target frame (transforming into)

inline bool Verify(flatbuffers::Verifier &verifier) const
inline Pose *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(Pose *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(Pose *_o, const PoseFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<PoseFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const Pose *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)

Public Static Attributes

static constexpr const char *identifier = "POSE"
class PoseVisualizer
#include </builds/inivation/dv/dv-processing/include/dv-processing/visualization/pose_visualizer.hpp>

Visualize the current and past poses of objects as an image.

Public Types

enum class ViewMode

Pre-defined viewing modes; poses of the virtual camera used to visualize the 3D scene. The naming of the different view modes is based on the plane that the virtual camera sees in that mode, such that the virtual camera is placed perpendicular to that plane.

The first letter in the viewing mode defines the horizontal axis of the plane, while the second letter defines the vertical axis of the plane.

Values:

enumerator CUSTOM
enumerator VIEW_XY
enumerator VIEW_YZ
enumerator VIEW_ZX
enumerator VIEW_XZ
enumerator VIEW_YX
enumerator VIEW_ZY
enum class GridPlane

Defined grid plane orientations that can be drawn into the 3D scene. The first letter defines the horizontal axis of the plane, while the second letter defines the vertical axis of the plane.

Values:

enumerator PLANE_NONE
enumerator PLANE_XY
enumerator PLANE_YZ
enumerator PLANE_ZX

Public Functions

inline explicit PoseVisualizer(const cv::Size &resolution = cv::Size(640, 480), const cv::Scalar &backgroundColor = colors::darkGray, const cv::Scalar &gridColor = colors::gray, const float gridSpacing = 1.0f, const size_t maxTrajectoryLength = 10'000, const size_t landmarkLimit = 10'000)

Initialize a pose visualizer to visualize the current and past poses of objects as an image. This is done by creating a virtual camera in the 3D scene, and using this camera to visualize object poses/landmarks in the 3D scene.

The virtual camera is assumed to have an ideal pinhole camera model.

Parameters:
  • resolution – Resolution of the virtual camera used for visualizing the 3D scene. This is used to set the intrinsics of the virtual camera using an ideal pinhole camera model and defines the resolution of the generated image.

  • backgroundColor – Background color.

  • gridColor – Color of drawn grid plane.

  • gridSpacing – Spacing between drawn lines for grid plane, in physical units [m].

  • maxTrajectoryLength – Maximum number of poses that are visualized for each object.

  • landmarkLimit – Maximum number of landmarks that are visualized.

inline PoseVisualizer(const float fx, const float fy, const float cx, const float cy, const cv::Size &resolution, const cv::Scalar &backgroundColor = colors::darkGray, const cv::Scalar &gridColor = colors::gray, const float gridSpacing = 1.0f, const size_t maxTrajectoryLength = 10'000, const size_t landmarkLimit = 10'000)

Initialize a pose visualizer to visualize the current and past poses of objects as an image. This is done by creating a virtual camera in the 3D scene, and using this camera to visualize object poses/landmarks in the 3D scene.

The virtual camera is assumed to have an ideal pinhole camera model.

Parameters:
  • fx – Focal length X for the virtual camera, measured in pixels.

  • fy – Focal length Y for the virtual camera, measured in pixels.

  • cx – Central point coordinate X for the virtual camera, in pixels.

  • cy – Central point coordinate Y for the virtual camera, in pixels.

  • resolution – Resolution of the virtual camera. This defines the resolution of the generated image.

  • backgroundColor – Background color.

  • gridColor – Color of drawn grid plane.

  • gridSpacing – Spacing between drawn lines for grid plane, in physical units [m]

  • maxTrajectoryLength – Maximum number of poses that are visualized for each object.

  • landmarkLimit – Maximum number of landmarks that are visualized.

inline explicit PoseVisualizer(const dv::camera::CameraGeometry &cameraGeometry, const cv::Scalar &backgroundColor = colors::darkGray, const cv::Scalar &gridColor = colors::gray, const float gridSpacing = 1.0f, const size_t maxTrajectoryLength = 10'000, const size_t landmarkLimit = 10'000)

Initialize a pose visualizer to visualize the current and past poses of objects as an image. This is done by creating a virtual camera in the 3D scene, and using this camera to visualize object poses/landmarks in the 3D scene.

Parameters:
  • cameraGeometry – CameraGeometry instance defining the intrinsics of the virtual camera used for visualizing the 3D scene.

  • backgroundColor – Background color.

  • gridColor – Color of drawn grid plane.

  • gridSpacing – Spacing between drawn lines for grid plane, in physical units [m]

  • maxTrajectoryLength – Maximum number of poses that are visualized for each object.

  • landmarkLimit – Maximum number of landmarks that are visualized.

inline void accept(const std::string &objectName, const dv::Pose &objectPose)

Add a new object pose to the visualization.

Parameters:
  • objectName – Name of the object for which the pose is to be added.

  • objectPose – Added pose of the object.

inline void accept(const std::string &objectName, const dv::kinematics::Transformationf &objectPose)

Add a new object pose to the visualization.

Parameters:
  • objectName – Name of the object for which the pose is to be added.

  • objectPose – Added pose of the object.

inline void accept(const dv::Pose &objectPose)

Add a new object pose to the visualization.

Parameters:

objectPose – Added pose of the object.

inline void accept(const dv::kinematics::Transformationf &objectPose)

Add a new object pose to the visualization.

Parameters:

objectPose – Added pose of the object.

inline void accept(const dv::LandmarksPacket &landmarks)

Add landmarks to the visualization.

Parameters:

landmarks – A packet of landmarks to be added to the list of landmarks drawn.

inline void accept(const dv::Landmark &landmark)

Add a landmark to the visualization.

Parameters:

landmark – A single landmark to be added to the list of landmarks drawn.

inline dv::Frame generateFrame()

Return a visualization image of the 3D scene from the defined virtual camera perspective.

Returns:

The generated image for visualization.
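
A minimal end-to-end sketch of the typical flow: construct the visualizer, feed poses, and render. The dv::visualization namespace is inferred from the default color constants, and the Transformationf constructor taking (timestamp, translation, rotation) is an assumption; everything else uses the members documented here.

#include <dv-processing/visualization/pose_visualizer.hpp>
#include <opencv2/highgui.hpp>

int main() {
	// Default 640x480 virtual camera looking at the XY plane.
	dv::visualization::PoseVisualizer visualizer;
	visualizer.setViewMode(dv::visualization::PoseVisualizer::ViewMode::VIEW_XY);

	// Feed a placeholder pose (Transformationf constructor signature is assumed).
	const dv::kinematics::Transformationf pose(
		1'000'000, Eigen::Vector3f(0.f, 0.f, 0.f), Eigen::Quaternionf::Identity());
	visualizer.accept("camera0", pose);

	// Render the 3D scene from the virtual camera perspective and display it.
	const dv::Frame frame = visualizer.generateFrame();
	cv::imshow("poses", frame.image);
	cv::waitKey(0);
	return 0;
}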

inline int64_t getTimestamp() const

Return the timestamp of the most recent object pose.

Returns:

Timestamp in Unix microsecond format.

inline void clearLandmarkWithId(const int64_t id)

Clear landmark with given ID from the visualization.

Parameters:

id – ID of landmark to be removed.

inline void clearLandmarksInInterval(const int64_t startTime, const int64_t endTime)

Clear all landmarks from the visualization within the time interval [startTime, endTime).

Parameters:
  • startTime – The start time of the removed slice of landmarks (inclusive).

  • endTime – The end time of the removed slice of landmarks (exclusive).

inline void clearLandmarks()

Clear all landmarks from the visualization.

inline void clearObject(const std::string &objectName)

Clear all saved poses for the given object from the visualization.

Parameters:

objectName – Name of the object for which the list of poses is to be removed.

inline void clearObjectPosesInInterval(const int64_t startTime, const int64_t endTime)

Clear all saved object poses from the visualization within the time interval [startTime, endTime).

Parameters:
  • startTime – The start time of the removed slice of object poses (inclusive).

  • endTime – The end time of the removed slice of object poses (exclusive).

inline void clearAllObjects()

Clear all saved object poses from the visualization.

inline void reset()

Reset the visualizer to a default state (default camera position, resolution, etc.).

inline cv::Size getResolution() const

Getters and setters.

Get the resolution of the virtual camera used for visualization.

Returns:

Resolution of the virtual camera.

inline void setResolution(const cv::Size &resolution)

Set the resolution of the virtual camera used for visualization.

Parameters:

resolution – Resolution of the virtual camera.

inline dv::camera::CameraGeometry getCameraIntrinsics() const

Get the intrinsics of the virtual camera used for visualization.

Returns:

CameraGeometry instance describing the intrinsics of the virtual camera.

inline void setCameraIntrinsics(const dv::camera::CameraGeometry &cameraGeometry)

Set the intrinsics of the virtual camera used for visualization.

Parameters:

cameraGeometry – CameraGeometry instance describing the intrinsics of the virtual camera.

inline void setCameraIntrinsics(const float fx, const float fy, const float cx, const float cy, const cv::Size &resolution)

Set the intrinsics of the virtual camera used for visualization assuming an ideal pinhole camera model.

Parameters:
  • fx – Focal length X for the virtual camera, measured in pixels.

  • fy – Focal length Y for the virtual camera, measured in pixels.

  • cx – Central point coordinate X for the virtual camera, in pixels.

  • cy – Central point coordinate Y for the virtual camera, in pixels.

  • resolution – Resolution of the virtual camera.

template<concepts::Coordinate3DConstructible Output = Eigen::Vector3f>
inline Output getCameraPosition() const

Get the position of the virtual camera used for visualization with respect to the world coordinate system.

Template Parameters:

Output – Type of the 3D vector used for the camera position.

Returns:

Position of the virtual camera in the world coordinate system.

template<concepts::Coordinate3D InputType>
inline void setCameraPosition(const InputType &newPosition)

Set the position of the virtual camera used for visualization with respect to the world coordinate system.

Template Parameters:

InputType – Type of the 3D vector representing the camera position.

Parameters:

newPosition – Position of the virtual camera in the world coordinate system.

template<concepts::Coordinate3DConstructible Output = Eigen::Vector3f>
inline Output getCameraOrientation() const

Get the orientation of the virtual camera used for visualization as XYZ Euler angles (in degrees) with respect to the world coordinate system.

Template Parameters:

Output – Type of the 3D vector representing the camera orientation.

Returns:

XYZ Euler angles representing the camera orientation with respect to the world coordinate system (in degrees).

template<concepts::Coordinate3D InputType>
inline void setCameraOrientation(const InputType &newOrientation)

Set the orientation of the virtual camera used for visualization as XYZ Euler angles (in degrees) with respect to the world coordinate system.

Template Parameters:

InputType – Type of the 3D vector representing the camera orientation

Parameters:

newOrientation – XYZ Euler angles representing the new camera orientation with respect to the world coordinate system (in degrees).

inline dv::kinematics::Transformationf getCameraPose()

Get the pose of the virtual camera used for visualization with respect to the world coordinate system.

Returns:

Transformationf instance describing the pose of the virtual camera with respect to the world coordinate system.

inline void setCameraPose(const dv::kinematics::Transformationf &T_W_C)

Set the pose of the virtual camera used for visualization with respect to the world coordinate system.

Parameters:

T_W_C – Transformationf instance describing the pose of the virtual camera with respect to the world coordinate system.

inline ViewMode getViewMode() const

Get the view mode (pre-defined pose) of the virtual camera used for visualization.

Returns:

The current viewing mode of the virtual camera.

inline void setViewMode(const ViewMode mode)

Set the view mode (pre-defined pose) of the virtual camera used for visualization to one of the pre-defined view modes.

Parameters:

mode – Viewing mode of the virtual camera.

inline GridPlane getGridPlane() const

Get the grid plane orientation with respect to the world coordinate system (from a list of pre-defined orientations).

Returns:

Orientation of the visualized grid plane.

inline void setGridPlane(const GridPlane plane)

Set the grid plane orientation with respect to the world coordinate system (from a list of pre-defined orientations). If set to PLANE_NONE, the grid plane is removed from the visualization.

Parameters:

plane – Orientation of the visualized grid plane.
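
For example, the viewing direction and grid might be configured together, reusing a visualizer instance like the one constructed in the sketch above (values are illustrative):

// Look at the ZX plane and draw a matching grid with 0.5 m line spacing.
visualizer.setViewMode(dv::visualization::PoseVisualizer::ViewMode::VIEW_ZX);
visualizer.setGridPlane(dv::visualization::PoseVisualizer::GridPlane::PLANE_ZX);
visualizer.setGridSpacing(0.5f);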

inline float getGridSpacing() const

Get the spacing between drawn lines used for visualizing the grid plane, in physical units [m].

Returns:

Spacing between drawn lines for visualizing the grid plane.

inline void setGridSpacing(const float gridSpacing)

Set the spacing between drawn lines used for visualizing the grid plane, in physical units [m].

Parameters:

gridSpacing – Spacing between drawn lines for visualizing the grid plane.

inline float getWorldCoordinateSize() const

Get the size of the visualized world coordinate frame, in physical units [m].

Returns:

Size of the visualized world coordinate frame.

inline void setWorldCoordinateSize(const float newSize)

Set the size of the visualized world coordinate frame, in physical units [m]. To disable the visualization of the world coordinate frame, set this to a value <= 0.

Parameters:

newSize – Size of the visualized world coordinate frame.

inline float getObjectCoordinateSize() const

Get the size of the visualized coordinate frame for object poses, in physical units [m].

Returns:

Size of the visualized coordinate frame for object poses.

inline void setObjectCoordinateSize(const float newSize)

Set the size of the visualized coordinate frame for object poses, in physical units [m]. To disable the visualization of the coordinate frame, set this to a value <= 0.

Parameters:

newSize – Size of the visualized coordinate frame for object poses.

inline int32_t getLineThickness() const

Get the line thickness used for drawing, in pixels.

Returns:

Drawing line thickness in pixels.

inline void setLineThickness(const int32_t newThickness)

Set the line thickness used for drawing, in pixels.

Parameters:

newThickness – Drawing line thickness in pixels.

inline const cv::Scalar &getBackgroundColor() const

Get the background color.

Returns:

Background color.

inline void setBackgroundColor(const cv::Scalar &backgroundColor)

Set new background color.

Parameters:

backgroundColor – OpenCV scalar for the background color.

inline const cv::Scalar &getGridColor() const

Get the grid line color.

Returns:

Grid line color

inline void setGridColor(const cv::Scalar &gridColor)

Set new grid line color

Parameters:

gridColor – OpenCV scalar for the grid line color.

inline size_t getLandmarkLimit() const

Get the maximum number of landmarks to be drawn.

Returns:

Maximum number of landmarks

inline void setLandmarkLimit(const size_t numLandmarks)

Set a limit for number of landmarks that are stored and drawn.

Parameters:

numLandmarks – Number of landmarks

inline size_t getNumberOfLandmarks() const

Get the number of landmarks currently stored in the visualizer.

Returns:

Number of landmarks stored in the visualizer

inline size_t getNumberOfObjects() const

Get the number of objects whose poses are currently stored in the visualizer.

Returns:

Number of objects whose poses are stored in the visualizer

inline bool isAutoScalingEnabled() const

Get if auto-scaling of the visualizer is enabled; scaling of the grid and viewing camera position based on the minimum and maximum bounds from the provided data.

Returns:

True if auto-scaling is enabled.

inline void setAutoScaling(const bool enable)

Enable/disable auto-scaling of the visualizer; scaling of the grid and viewing camera position based on the minimum and maximum bounds from the provided data.

Parameters:

enable – If true, enable grid autoscaling.

inline bool isLegendVisualizationEnabled() const

Get if the visualization of the legend is enabled.

Returns:

True if legend visualization is enabled.

inline void setLegendVisualization(const bool enable)

Enable the visualization of a legend showing the viewing camera position, latest timestamp of provided data, and the spacing of the drawn grid.

Parameters:

enable – If true, enable legend visualization.

Private Functions

inline void updateBounds(const float x, const float y, const float z)

Update the minimum and maximum bounds of the visualizer data based on a provided 3D point.

Parameters:
  • x – X position of 3D point.

  • y – Y position of 3D point.

  • z – Z position of 3D point.

inline float computeSpan(const GridPlane grid) const

Compute the maximum span of the provided visualizer data along a given plane.

Parameters:

grid – Plane along which the maximum span of the provided visualizer data is computed.

Returns:

Maximum span of provided visualizer data along the given plane.

inline void setCameraExtrinsicsFromViewMode()

Use the selected view mode to set the pose of the virtual camera.

inline float getGridHalfWidth() const

Compute a nice value for the half width of the drawn grid based on the viewing camera position in the world coordinate system. The value is chosen so that, when the grid is placed relative to the viewing camera position and the viewing camera is oriented at 90 degrees with respect to the grid, the width of the grid is at least 5 times the minimum width, so that the edge of the drawn grid is visible in the camera view.

Returns:

Half width of the drawn grid.

inline bool isWithinGridBounds(const Eigen::Vector3f &pos) const
inline void updateGridSpacing()

For use in autoscaling mode, update the grid spacing based on the viewing camera position so that the number of drawn grid lines along each axis of the grid is less than LIMIT_NUM_GRID_LINES. Note that rather than continuously varying the grid spacing, the grid spacing grows/shrinks in integer multiples of GRID_SPACING_GROWTH_FACTOR.

inline void addText(cv::Mat &image, const cv::Point2i ptInPx, const std::string &text, const cv::Scalar &color) const

Add text to the visualizer image.

Parameters:
  • image – Visualization image to draw to.

  • ptInPx – 2D pixel position of added text.

  • text – Text to be added to the image.

  • color – Color of the added text.

inline cv::Size getTextSize(const std::string &text) const

Get the size of the drawn text.

Parameters:

text – Text to draw.

Returns:

Size of the bounding box for the drawn text on the image.

inline cv::Point2i project(const Eigen::Vector3f &ptInCam) const
inline void drawPoint(cv::Mat &image, const Eigen::Vector3f &pt, const int32_t radiusPx, const cv::Scalar &color, const std::string &textToAdd = "") const

Draw a 3D point represented in the world coordinate system onto the visualization image.

Parameters:
  • image – Visualization image to draw to.

  • pt – 3D point represented in the world coordinate system.

  • radiusPx – Radius of the drawn circle for the point, represented in pixels.

  • color – Color of the drawn point.

  • textToAdd – Optional text to additionally add next to the drawn point.

inline void drawLine(cv::Mat &image, const Eigen::Vector3f &startPt, const Eigen::Vector3f &endPt, const cv::Scalar &color, const std::string &textToAdd = "") const

Draw a line in 3D space based on the endpoints of the line in the world coordinate system.

Parameters:
  • image – Visualization image to draw to.

  • startPt – 3D point given in the world coordinate system representing the start point of the line.

  • endPt – 3D point given in the world coordinate system representing the end point of the line.

  • color – Color of the drawn line.

  • textToAdd – Optional text to additionally add next to the drawn line.

inline void drawGrid(cv::Mat &image) const

Draw a uniform grid onto the visualization image representing the defined grid plane.

Parameters:

image – Visualization image to draw to.

inline void drawCoordinateFrame(cv::Mat &image, const dv::kinematics::Transformationf &T_W_O, const float frameSize, const std::optional<cv::Scalar> &frameColor = std::nullopt, const std::string &textToAdd = "") const

Draw a coordinate frame onto the visualization image.

Parameters:
  • image – Visualization image to draw to.

  • T_W_O – Transform representing the pose of the coordinate frame with respect to the world coordinate system.

  • frameSize – Size of the frame to be drawn [m].

  • frameColor – Optional color of the coordinate frame drawn. If not specified, defaults to red for x axis, lime for y axis, and blue for z axis.

  • textToAdd – Optional text to additionally add next to the drawn coordinate frame.

inline void drawWorldCoordinateSystem(cv::Mat &image) const

Draw the world coordinate frame onto the visualization image.

Parameters:

image – Visualization image to draw to.

inline void drawObjectPoses(cv::Mat &image) const

Draw the different object poses onto the visualization image.

Parameters:

image – Visualization image to draw to.

inline void drawLandmarks(cv::Mat &image) const

Draw the defined landmarks onto the visualization image.

Parameters:

image – Visualization image to draw to.

inline void drawLegend(cv::Mat &image) const

Draw a legend onto the image specifying the viewing camera position, latest timestamp, and grid spacing

Parameters:

image – Visualization image to draw to.

Private Members

dv::camera::CameraGeometry mViewingCameraGeometry

Intrinsics of the virtual camera used for visualization.

dv::kinematics::Transformationf mT_C_W

Extrinsics of the virtual camera used for visualization.

Eigen::Vector3f mCameraPosition
ViewMode mViewMode = {ViewMode::VIEW_XY}
cv::Scalar mBackgroundColor = {dv::visualization::colors::darkGray}

Colors for drawing the visualization image.

cv::Scalar mGridColor = {dv::visualization::colors::gray}
GridPlane mGridPlane = {GridPlane::PLANE_XY}

Parameters for visualized grid.

bool mEnableAutoScale = {true}
float mGridSpacing

Parameters for scaling grid spacing.

bool mBoundsSet = {false}

Bounds for range of drawn points.

float mMinBoundX = {std::numeric_limits<float>::max()}
float mMinBoundY = {std::numeric_limits<float>::max()}
float mMinBoundZ = {std::numeric_limits<float>::max()}
float mMaxBoundX = {std::numeric_limits<float>::min()}
float mMaxBoundY = {std::numeric_limits<float>::min()}
float mMaxBoundZ = {std::numeric_limits<float>::min()}
std::unordered_map<std::string, dv::kinematics::LinearTransformerf> mObjectTrajectories

Visualized object poses and the maximum number of poses that are visualized.

size_t mTrajectoryLength
std::unordered_map<int64_t, dv::Landmark> mLandmarks

Visualized landmarks and the maximum number of landmarks that are visualized.

size_t mLandmarkLimit
float mWorldCoordinateFrameSize = {1.f}

Visualization parameters for size of coordinate frames and line thicknesses.

float mObjectCoordinateFrameSize = {0.2}
int32_t mLineThickness = {1}
bool mEnableLegend = {true}

Option to enable/disable legend visualization.

Private Static Attributes

static constexpr float PIXEL_PITCH = {9e-6f}
static constexpr float DEFAULT_CAMERA_HEIGHT = {5.0f}
static constexpr float GRID_SPACING_GROWTH_FACTOR = {2.0f}
static constexpr size_t LIMIT_NUM_GRID_LINES = {100}
static constexpr float RAD_TO_DEG = 180.0f / std::numbers::pi_v<float>

Static constants for converting between radians and degrees.

static constexpr float DEG_TO_RAD = 1.f / RAD_TO_DEG

Friends

inline friend std::ostream &operator<<(std::ostream &os, const PoseVisualizer::ViewMode &var)
inline friend std::ostream &operator<<(std::ostream &os, const PoseVisualizer::GridPlane &var)
class Reader

Public Types

using ReadHandler = dv::std_function_exact<void(std::vector<std::byte>&, const int64_t)>

Public Functions

inline explicit Reader(dv::io::support::TypeResolver resolver = dv::io::support::defaultTypeResolver, std::unique_ptr<dv::io::support::IOStatistics> stats = nullptr)
~Reader() = default
Reader(const Reader &other) = delete
Reader &operator=(const Reader &other) = delete
Reader(Reader &&other) noexcept = default
Reader &operator=(Reader &&other) noexcept = default
inline void verifyVersion(const ReadHandler &readHandler)
inline std::unique_ptr<const dv::IOHeader> readHeader(const ReadHandler &readHandler)
inline std::unique_ptr<const dv::FileDataTable> readFileDataTable(const uint64_t size, const int64_t position, const ReadHandler &readHandler)
inline std::tuple<dv::PacketHeader, std::unique_ptr<dv::types::TypedObject>, const dv::io::support::Sizes> readPacket(const ReadHandler &readHandler)
inline std::tuple<dv::PacketHeader, std::unique_ptr<dv::types::TypedObject>, const dv::io::support::Sizes> readPacket(const int64_t byteOffset, const ReadHandler &readHandler)
inline dv::PacketHeader readPacketHeader(const ReadHandler &readHandler)
inline dv::PacketHeader readPacketHeader(const int64_t byteOffset, const ReadHandler &readHandler)
inline std::pair<std::unique_ptr<dv::types::TypedObject>, const dv::io::support::Sizes> readPacketBody(const dv::FileDataDefinition &packet, const ReadHandler &readHandler)
inline std::pair<std::unique_ptr<dv::types::TypedObject>, const dv::io::support::Sizes> readPacketBody(const int32_t streamId, const uint64_t size, const ReadHandler &readHandler)
inline std::pair<std::unique_ptr<dv::types::TypedObject>, const dv::io::support::Sizes> readPacketBody(const int32_t streamId, const uint64_t size, const int64_t byteOffset, const ReadHandler &readHandler)
inline std::unique_ptr<const dv::FileDataTable> buildFileDataTable(const uint64_t fileSize, const ReadHandler &readHandler)
inline std::vector<dv::io::Stream> getStreams() const
inline CompressionType getCompressionType() const

Private Functions

inline void readFromInput(const uint64_t length, const int64_t position, const ReadHandler &readHandler)
inline void decompressData()

Private Members

dv::io::support::TypeResolver mTypeResolver
std::unique_ptr<dv::io::support::IOStatistics> mStats
std::unique_ptr<dv::io::compression::DecompressionSupport> mDecompressionSupport
std::vector<std::byte> mReadBuffer
std::vector<std::byte> mDecompressBuffer
std::unordered_map<int32_t, dv::io::Stream> mStreams

Private Static Functions

static inline std::unique_ptr<const dv::IOHeader> decodeHeader(const std::vector<std::byte> &header)
static inline std::unique_ptr<const dv::FileDataTable> decodeFileDataTable(const std::vector<std::byte> &table)
static inline std::unique_ptr<dv::types::TypedObject> decodePacketBody(const std::vector<std::byte> &packet, const dv::types::Type &type)
class ReadOnlyFile : private dv::io::SimpleReadOnlyFile

Public Functions

ReadOnlyFile() = delete
inline explicit ReadOnlyFile(const std::filesystem::path &filePath, const dv::io::support::TypeResolver &resolver = dv::io::support::defaultTypeResolver, std::unique_ptr<dv::io::support::IOStatistics> stats = nullptr)
inline const auto &getFileInfo() const
inline std::vector<std::pair<std::unique_ptr<dv::types::TypedObject>, const dv::io::support::Sizes>> read(const int64_t startTimestamp, const int64_t endTimestamp, const int32_t streamId)

Return all packets containing data with timestamps between a given start and end timestamp, meaning all data with a timestamp in [start, end].

Parameters:
  • startTimestamp – start timestamp of range, inclusive.

  • endTimestamp – end timestamp of range, inclusive.

  • streamId – data stream ID (separate logical type).

Returns:

packets containing data within given timestamp range.

inline std::pair<std::unique_ptr<dv::types::TypedObject>, const dv::io::support::Sizes> read(const dv::FileDataDefinition &packet)
inline std::pair<std::unique_ptr<const dv::types::TypedObject>, const dv::io::support::Sizes> read(const int32_t streamId, const uint64_t size, const int64_t byteOffset)
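
A minimal reading sketch using the timestamp-range overload above (the header path, the file path and stream ID 0 are illustrative assumptions):

#include <dv-processing/io/read_only_file.hpp> // header path assumed

// Read one second of data from stream 0 of a recording.
dv::io::ReadOnlyFile file("/path/to/recording.aedat4");
const int64_t start = 0; // illustrative start timestamp [µs]
auto packets       = file.read(start, start + 1'000'000, 0);
for (auto &[object, sizes] : packets) {
	// 'object' is a decoded dv::types::TypedObject; 'sizes' reports packet/data sizes.
}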

Public Static Functions

static inline bool inRange(const int64_t rangeStart, const int64_t rangeEnd, const dv::FileDataDefinition &packet)
static inline bool aheadOfRange(const int64_t rangeStart, const int64_t rangeEnd, const dv::FileDataDefinition &packet)
static inline bool pastRange(const int64_t rangeStart, const int64_t rangeEnd, const dv::FileDataDefinition &packet)

Private Functions

inline void parseHeader()
inline void loadFileDataTable()
inline void readClbk(std::vector<std::byte> &data, const int64_t byteOffset)
inline void createFileInfo()

Private Members

dv::io::FileInfo mFileInfo
dv::io::Reader mReader

Private Static Functions

static inline std::vector<dv::FileDataDefinition>::const_iterator getStartingPointForTimeRangeSearch(const int64_t startTimestamp, const dv::FileDataTable &streamDataTable)
class RedetectionStrategy
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/redetection_strategy.hpp>

Implementation of different redetection strategies for trackers.

Subclassed by dv::features::FeatureCountRedetection, dv::features::NoRedetection, dv::features::UpdateIntervalOrFeatureCountRedetection, dv::features::UpdateIntervalRedetection

Public Types

typedef std::shared_ptr<RedetectionStrategy> SharedPtr
typedef std::unique_ptr<RedetectionStrategy> UniquePtr

Public Functions

virtual bool decideRedetection(const dv::features::TrackerBase &tracker) = 0

Decide the redetection of tracker features depending on the state of the tracker.

Parameters:

tracker – Current state of the tracker.

Returns:

True to perform redetection of features, false to continue.

virtual ~RedetectionStrategy() = default
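
A minimal sketch of a custom strategy; it ignores the tracker state and redetects only on an explicit external request:

#include <dv-processing/features/redetection_strategy.hpp>

// Hypothetical strategy: redetect features only when explicitly requested.
class ManualRedetection : public dv::features::RedetectionStrategy {
public:
	bool decideRedetection(const dv::features::TrackerBase & /*tracker*/) override {
		if (mRedetectRequested) {
			mRedetectRequested = false;
			return true; // trigger feature redetection once
		}
		return false; // keep tracking the existing features
	}

	void requestRedetection() {
		mRedetectRequested = true;
	}

private:
	bool mRedetectRequested = false;
};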
struct Result
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/tracker_base.hpp>

Result of tracking.

Public Types

typedef std::shared_ptr<Result> SharedPtr
typedef std::shared_ptr<const Result> ConstPtr

Public Functions

inline Result(const int64_t _timestamp, const std::vector<dv::TimedKeyPoint> &_keypoints, const bool keyframe)

Construct tracking result

Parameters:
  • _timestamp – Execution time of tracking

  • _keypoints – The resulting features

  • keyframe – Whether this set of features can be regarded as a keyframe (redetection was triggered)

Result() = default

Public Members

std::vector<dv::TimedKeyPoint> keypoints = {}

A vector of keypoints.

bool asKeyFrame = false

A flag that notifies the user of the tracker that this specific input caused redetection to happen and the tracker not only tracked the buffered events, but also detected new features.

int64_t timestamp = 0

Timestamp of the execution, it can be frame timestamp or last timestamp of an event slice.

class RotationIntegrator

Public Functions

inline explicit RotationIntegrator(const dv::kinematics::Transformationf &T_S_target = dv::kinematics::Transformationf(), int64_t sensorToTargetTimeOffset = 0, const Eigen::Vector3f &gyroscopeOffset = {0.f, 0.f, 0.f})
Parameters:
  • T_S_target – initial target position with respect to the sensor

  • sensorToTargetTimeOffset – temporal offset between sensor (imu) and target. t_target = t_sensor - offset

  • gyroscopeOffset – constant measurement offset in gyroscope samples [radians].

inline Eigen::Matrix3f getRotation() const

Getter returning the current target transformation relative to the initial target transformation

Returns:

[3x3] rotation matrix

inline void setT_S_target(const dv::kinematics::Transformationf &T_S_target)

Setter to update the target position with respect to the sensor

Parameters:

T_S_target – new target transformation wrt sensor

inline int64_t getTimestamp() const

Getter outputting timestamp of current target transformation

Returns:

timestamp

inline dv::kinematics::Transformation<float> getTransformation() const

Getter returning the [4x4] transformation corresponding to the current target position with respect to the initial target position

Returns:

4x4 transformation corresponding to current integrated rotation

inline void accept(const dv::IMU &imu)

Update sensor position with new measurement

Parameters:

imu – single imu measurement
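
A small integration sketch using only the members documented above (the namespace of RotationIntegrator and the origin of the IMU samples are assumptions):

// Hypothetical helper: integrate a batch of IMU samples into a single transformation.
// The dv::kinematics namespace of RotationIntegrator is an assumption; default
// construction uses an identity target transform, zero time offset and zero gyroscope offset.
dv::kinematics::Transformationf integrateGyro(const std::vector<dv::IMU> &imuSamples) {
	dv::kinematics::RotationIntegrator integrator;
	for (const dv::IMU &measurement : imuSamples) {
		integrator.accept(measurement);
	}
	return integrator.getTransformation();
}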

Private Functions

inline Eigen::Matrix3f rotationMatrixFromImu(const dv::IMU &imu, const float dt)

Transform gyroscope measurement into rotation matrix representation

Parameters:

imu – single imu measurement

Returns:

[3x3] rotation matrix corresponding to rotation measured from gyroscope

Private Members

Eigen::Matrix4f mT_S0_target

matrix storing target position with respect to the sensor (imu)

int64_t mSensorToTargetTimeOffset

offset [us] between sensor and target: t_target = t_sensor - offset

Eigen::Vector3f mGyroscopeOffset

measurement offset [radians] along each x, y, z axis of the sensor

Eigen::Matrix3f mR_S0_S = Eigen::Matrix3f::Identity(3, 3)

matrix storing current sensor orientation wrt initial sensor orientation

int64_t mTimestamp = -1

timestamp of current sensor position wrt initial time.

class RotationLossFunctor : public dv::optimization::OptimizationFunctor<float>
#include </builds/inivation/dv/dv-processing/include/dv-processing/optimization/contrast_maximization_rotation.hpp>

Given a chunk of events, the idea of contrast maximization is to warp events in space and time given a predefined motion model. Contrast maximization aims at finding the optimal parameters of the given motion model. The idea is that if the motion is perfectly estimated, all events corresponding to the same point in the scene will be warped to the same image plane location at a given point in time. If this happens, the reconstructed event image will be sharp, i.e. have high contrast. This contrast is measured as the variance of the image. For this reason, contrast maximization searches for the motion parameters which maximize the contrast of the event image reconstructed after warping events in space to a specific point in time. In order to warp events in space and time we use the “dv::kinematics::MotionCompensator” class. This contrast maximization class assumes a pure camera rotational motion model. Given a set of imu samples and events in a time range, the gyroscope measurement offset is optimized. The gyroscope offset is optimized instead of each single gyroscope measurement in order to limit the search space of the non-linear optimization. In addition, given the high sample rate of the imu, it would be hard to achieve real-time computation while optimizing each single gyroscope value. For this reason, the gyroscope offset (x, y, z) is optimized and assumed to be constant across all the gyroscope samples.

Public Functions

inline RotationLossFunctor(dv::camera::CameraGeometry::SharedPtr &camera, const dv::EventStore &events, float contribution, const std::vector<dv::IMU> &imuSamples, const dv::kinematics::Transformationf &T_S_target, int64_t imuToCamTimeOffsetUs, int inputDim, int numMeasurements)

This contrast maximization class assumes a pure camera rotational motion model. Given a set of imu samples and events in a time range, the gyroscope measurement offset is optimized. The gyroscope offset is optimized instead of each single gyroscope measurement in order to limit the search space of the non-linear optimization.

Parameters:
  • camera – Camera geometry used to create motion compensator

  • events – Events used to compute motion compensated image

  • contribution – Contribution value of each event to the total pixel intensity

  • imuSamples – Chunk of imu samples used to compensate events. These values (gyroscope part) are updated with the gyroscope measurement offset, which is the optimized variable.

  • T_S_target – Transformation from sensor (imu) to target (camera). Used to convert imu motion into camera motion.

  • imuToCamTimeOffsetUs – Time synchronization offset between imu and camera

  • inputDim – Number of parameters to optimize

  • numMeasurements – Number of function evaluation performed to compute the gradient

inline virtual int operator()(const Eigen::VectorXf &gyroscopeOffsetImu, Eigen::VectorXf &stdInverse) const

Implementation of the objective function: optimize the gyroscope offset. The current cost is stored in stdInverse. Note that since we want to maximize the contrast while the optimizer minimizes the cost function, the cost used is 1/contrast.
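
A rough optimization sketch, under several assumptions: that this functor lives in the dv::optimization namespace, that dv::optimization::OptimizationFunctor is compatible with Eigen's unsupported Levenberg-Marquardt module (using NumericalDiff for the Jacobian), and that camera, events, imuSamples and T_S_target are already available from elsewhere:

#include <unsupported/Eigen/NonLinearOptimization>
#include <unsupported/Eigen/NumericalDiff>

// Optimize a constant 3-component gyroscope offset via contrast maximization.
// Contribution (0.25f), time offset (0) and measurement count (3) are illustrative values.
dv::optimization::RotationLossFunctor functor(
	camera, events, 0.25f, imuSamples, T_S_target, 0, /*inputDim=*/3, /*numMeasurements=*/3);

Eigen::NumericalDiff<dv::optimization::RotationLossFunctor> numDiff(functor);
Eigen::LevenbergMarquardt<Eigen::NumericalDiff<dv::optimization::RotationLossFunctor>, float> lm(numDiff);

Eigen::VectorXf gyroscopeOffset = Eigen::VectorXf::Zero(3); // initial guess: no offset
lm.minimize(gyroscopeOffset);                               // estimated constant gyroscope bias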

Private Members

dv::camera::CameraGeometry::SharedPtr mCamera

Camera geometry data. This information is used to create motionCompensator and compensate events.

const dv::EventStore mEvents

Raw events compensated using imu data.

float mContribution

Event contribution to the total pixel intensity. This parameter is very important since it strongly influences the contrast value. It needs to be tuned based on the scene and the length of the event chunk.

const std::vector<dv::IMU> mImuSamples

Imu data used to compensate mEvents.

const dv::kinematics::Transformationf mT_S_target

Target (i.e. camera) to imu transformation. Used to construct rotationIntegrator that keeps track of camera position.

int64_t mImuToTargetTimeOffsetUs

Time offset between imu and target. Check rotationIntegrator class for more information.

struct RuntimeError : public dv::exceptions::info::EmptyException
template<dv::concepts::EventToFrameConverter<dv::EventStore> AccumulatorClass = dv::EdgeMapAccumulator>
class SemiDenseStereoMatcher
#include </builds/inivation/dv/dv-processing/include/dv-processing/depth/semi_dense_stereo_matcher.hpp>

Semi-dense stereo matcher - a class that performs disparity calculation using an OpenCV dense disparity calculation algorithm. The implementation performs accumulation of a stereo pair of images of input events and applies the given stereo disparity matcher algorithm (semi-global block matching by default).

Public Functions

inline SemiDenseStereoMatcher(std::unique_ptr<AccumulatorClass> leftAccumulator, std::unique_ptr<AccumulatorClass> rightAccumulator, const std::shared_ptr<cv::StereoMatcher> &matcher = cv::StereoSGBM::create())

Construct a semi dense stereo matcher object by providing custom accumulators for left and right cameras and a stereo matcher class.

Parameters:
  • leftAccumulator – Accumulator for the left camera.

  • rightAccumulator – Accumulator for the right camera.

  • matcher – Stereo matcher algorithm, if not provided, the implementation will use a cv::StereoSGBM class with default parameters.

inline explicit SemiDenseStereoMatcher(const cv::Size &leftResolution, const cv::Size &rightResolution, const std::shared_ptr<cv::StereoMatcher> &matcher = cv::StereoSGBM::create())

Construct a semi dense stereo matcher with default accumulator settings and a stereo matcher class.

Parameters:
  • leftResolution – Resolution of the left camera.

  • rightResolution – Resolution of the right camera.

  • matcher – Stereo matcher algorithm, if not provided, the implementation will use a cv::StereoSGBM class with default parameters.

inline explicit SemiDenseStereoMatcher(std::unique_ptr<dv::camera::StereoGeometry> geometry, std::shared_ptr<cv::StereoMatcher> matcher = dv::depth::defaultStereoMatcher())

Construct a semi dense stereo matcher with default accumulator settings and a stereo matcher class. The calibration information about camera will be extracted from the stereo geometry class instance.

Parameters:
  • geometry – Object describing the stereo camera geometry.

  • matcher – Stereo matcher algorithm, if not provided, the implementation will use a cv::StereoSGBM class with optimized parameters.

inline SemiDenseStereoMatcher(std::unique_ptr<dv::camera::StereoGeometry> geometry, std::unique_ptr<AccumulatorClass> leftAccumulator, std::unique_ptr<AccumulatorClass> rightAccumulator, std::shared_ptr<cv::StereoMatcher> matcher = dv::depth::defaultStereoMatcher())

Construct a semi dense stereo matcher object by providing custom accumulators for left and right cameras and a stereo matcher class. The calibration information about camera will be extracted from the stereo geometry class instance.

Parameters:
  • geometry – Object describing the stereo camera geometry.

  • leftAccumulator – Accumulator for the left camera.

  • rightAccumulator – Accumulator for the right camera.

  • matcher – Stereo matcher algorithm, if not provided, the implementation will use a cv::StereoSGBM class with optimized parameters.

inline cv::Mat computeDisparity(const dv::EventStore &left, const dv::EventStore &right)

Compute disparity of the two given event stores. The events will be accumulated using the accumulators for left and right camera accordingly and disparity is computed using the configured block matching algorithm. The function is not going to slice the input events, so event streams have to be synchronized and sliced accordingly. The dv::StereoEventStreamSlicer class is a good option for slicing stereo event streams.

NOTE: Accumulated frames will be rectified only if a stereo geometry class was provided during construction.

See also

dv::StereoEventStreamSlicer for synchronized slicing of a stereo event stream.

Parameters:
  • left – Events from left camera.

  • right – Events from right camera.

Returns:

Disparity map computed by the configured block matcher.
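
A minimal sketch with default accumulators; the top-level dv namespace for the class and the already-synchronized event slices are assumptions:

// Default-accumulator matcher for a 640x480 stereo pair; the enclosing namespace
// of SemiDenseStereoMatcher is assumed to be dv.
dv::SemiDenseStereoMatcher<> matcher(cv::Size(640, 480), cv::Size(640, 480));

// leftEvents / rightEvents are assumed to be already time-synchronized slices,
// e.g. produced by dv::StereoEventStreamSlicer.
const cv::Mat disparity = matcher.computeDisparity(leftEvents, rightEvents);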

inline cv::Mat compute(const cv::Mat &leftImage, const cv::Mat &rightImage) const

Compute stereo disparity given a time synchronized pair of images. Images will be rectified before computing disparity if a StereoGeometry class instance was provided.

Parameters:
  • leftImage – Left image of a stereo pair of images.

  • rightImage – Right image of a stereo pair of images.

Returns:

Disparity map computed by the configured block matcher.

inline const dv::Frame &getLeftFrame() const

Retrieve the accumulated frame from the left camera event stream.

Returns:

An accumulated frame.

inline const dv::Frame &getRightFrame() const

Retrieve the accumulated frame from the right camera event stream.

Returns:

An accumulated frame.

inline dv::DepthEventStore estimateDepth(const cv::Mat &disparity, const dv::EventStore &events, const float disparityScale = 16.f) const

Estimate depth given the disparity map and a list of events. The coordinates will be rectified and a disparity value will be looked up in the disparity map. The depth of each event is calculated using an equation: depth = (focalLength * baseline) / disparity. focalLength is expressed in pixels, baseline in meters.

For practical applications, depth estimation should be evaluated prior to any use. The directly estimated depth values can contain measurable errors which should be accounted for - the errors can usually be within 10-20% fixed absolute error distance. Usually this comes from various inaccuracies and can be mitigated by introducing a correction factor for the depth estimate.

Parameters:
  • disparity – Disparity map.

  • events – Input events.

  • disparityScale – Scale of disparity value in the disparity map, if subpixel accuracy is enabled in the block matching, this value will be equal to 16.

Returns:

A depth event store, the events will contain the same information as in the input, but additionally will have the depth value in meters. Events whose coordinates are outside of image bounds after rectification will be skipped.
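
Continuing the sketch above, but now assuming the matcher was constructed with a dv::camera::StereoGeometry instance (calibration is required to convert disparity to depth); a disparityScale of 16 matches the default subpixel scaling:

// Attach metric depth (in meters) to the left-camera events using the disparity map.
const dv::DepthEventStore depthEvents = matcher.estimateDepth(disparity, leftEvents, 16.f);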

inline dv::DepthFrame estimateDepthFrame(const cv::Mat &disparity, const float disparityScale = 16.f) const

Convert a disparity map into a depth frame. Each disparity value is converted into depth using the equation depth = (focalLength * baseline) / (disparity * pixelPitch). Output frame contains distance values expressed in integer values of millimeter distance.

Parameters:
  • disparity – Input disparity map.

  • disparityScale – Scale of disparity value in the disparity map, if subpixel accuracy is enabled in the block matching, this value will be equal to 16.

Returns:

A converted depth frame.

Protected Attributes

std::shared_ptr<cv::StereoMatcher> mMatcher = nullptr
std::unique_ptr<AccumulatorClass> mLeftAccumulator = nullptr
std::unique_ptr<AccumulatorClass> mRightAccumulator = nullptr
dv::Frame mLeftFrame
dv::Frame mRightFrame
std::unique_ptr<dv::camera::StereoGeometry> mStereoGeometry = nullptr

Private Functions

inline void validateStereoGeometry() const

Validates stereo geometry pointer, throws an error if the value is unset.

struct ShiftedSourceBias

On-chip shifted-source bias current configuration. See ‘https://docs.inivation.com/hardware/hardware-advanced-usage/biasing.html’ for more details.

Public Functions

constexpr ShiftedSourceBias() = default
inline constexpr ShiftedSourceBias(const uint8_t ref, const uint8_t reg, const ShiftedSourceBiasOperatingMode mode = ShiftedSourceBiasOperatingMode::SHIFTED_SOURCE)

Public Members

uint8_t refValue = {0}

Shifted-source bias level, from 0 to 63.

uint8_t regValue = {0}

Shifted-source bias current for buffer amplifier, from 0 to 63.

ShiftedSourceBiasOperatingMode operatingMode = {ShiftedSourceBiasOperatingMode::SHIFTED_SOURCE}

Shifted-source operating mode (see ShiftedSourceBiasOperatingMode).

ShiftedSourceBiasVoltageLevel voltageLevel = {ShiftedSourceBiasVoltageLevel::SPLIT_GATE}

Shifted-source voltage level (see ShiftedSourceBiasVoltageLevel).

class SimpleFile

Subclassed by dv::io::SimpleReadOnlyFile, dv::io::SimpleWriteOnlyFile

Public Functions

constexpr SimpleFile() = default
inline explicit SimpleFile(const std::filesystem::path &filePath, const ModeFlags modeFlags, const WriteFlags writeFlags = WriteFlags::NONE, const size_t bufferSize = 65536)

Open a file for reading and/or writing, supporting extra modes for writing and buffer control. Will always do what you expect and throw an exception if there’s any issue.

Parameters:
  • filePath – file path to open.

  • modeFlags – Open file for reading, writing or both.

  • writeFlags – If opening for writing, extra flags for truncation and append modes.

  • bufferSize – Size of user-space buffer for file operations. Default 64KB, use 0 to disable buffering entirely.

inline ~SimpleFile() noexcept
SimpleFile(const SimpleFile &file) = delete
SimpleFile &operator=(const SimpleFile &rhs) = delete
inline SimpleFile(SimpleFile &&file) noexcept
inline SimpleFile &operator=(SimpleFile &&rhs) noexcept
inline bool isOpen() const
inline void flush()
inline void write(const std::string_view data)
template<typename T>
inline void write(const std::span<const T> data)
template<typename T>
inline void write(const T *elem, size_t num)
template<typename S, typename ...Args>
inline void format(const S &format, Args&&... args)
inline void readInto(std::string &data) const
template<typename T>
inline void readInto(std::vector<T> &data) const
template<typename T>
inline void readInto(T *elem, size_t num) const
inline std::vector<uint8_t> read(const size_t upToInBytes) const
inline std::vector<uint8_t> readAll() const
inline uint64_t tell() const
inline void seek(const uint64_t offsetInBytes, const SeekFlags flags = SeekFlags::START) const
inline void rewind() const
inline uint64_t fileSize() const
inline std::filesystem::path path() const

Private Functions

inline void close() noexcept

Private Members

std::FILE *f = {nullptr}
char *fBuffer = {nullptr}
std::filesystem::path fPath = {}
class SimpleReadOnlyFile : private dv::io::SimpleFile

Subclassed by dv::io::ReadOnlyFile

Public Functions

constexpr SimpleReadOnlyFile() = default
inline explicit SimpleReadOnlyFile(const std::filesystem::path &filePath, const size_t bufferSize = 65536)
inline uint64_t fileSize() const
inline bool isOpen() const
inline std::filesystem::path path() const
inline std::vector<uint8_t> read(const size_t upToInBytes) const
inline std::vector<uint8_t> readAll() const
inline void readInto(std::string &data) const
template<typename T>
inline void readInto(std::vector<T> &data) const
template<typename T>
inline void readInto(T *elem, size_t num) const
inline void rewind() const
inline void seek(const uint64_t offsetInBytes, const SeekFlags flags = SeekFlags::START) const
inline uint64_t tell() const
class SimpleWriteOnlyFile : private dv::io::SimpleFile

Subclassed by dv::io::WriteOnlyFile

Public Functions

constexpr SimpleWriteOnlyFile() = default
inline explicit SimpleWriteOnlyFile(const std::filesystem::path &filePath, const WriteFlags writeFlags = WriteFlags::NONE, const size_t bufferSize = 65536)
inline uint64_t fileSize() const
inline void flush()
template<typename S, typename ...Args>
inline void format(const S &format, Args&&... args)
inline bool isOpen() const
inline std::filesystem::path path() const
inline void rewind() const
inline void seek(const uint64_t offsetInBytes, const SeekFlags flags = SeekFlags::START) const
inline uint64_t tell() const
inline void write(const std::string_view data)
template<typename T>
inline void write(const std::span<const T> data)
template<typename T>
inline void write(const T *elem, size_t num)
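
A small write/read round trip using only the calls listed above (the header path and file location are illustrative assumptions):

#include <dv-processing/io/simple_file.hpp> // header path assumed
#include <string_view>
#include <vector>

int main() {
	{
		// Write a short text file; default WriteFlags::NONE and 64KB buffering.
		dv::io::SimpleWriteOnlyFile out("/tmp/dv_example.txt");
		out.write(std::string_view("hello dv-processing"));
		out.flush();
	} // destructor closes the file

	// Read the whole file back as raw bytes.
	dv::io::SimpleReadOnlyFile in("/tmp/dv_example.txt");
	const std::vector<uint8_t> contents = in.readAll();
	return contents.empty() ? 1 : 0;
}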
struct Sizes

Public Members

uint64_t mPacketElements = {0}
uint64_t mPacketSize = {0}
uint64_t mDataSize = {0}
class SliceJob
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/multi_stream_slicer.hpp>

Internal container of slice jobs.

Public Types

enum class SliceType

Values:

enumerator NUMBER
enumerator TIME
using JobCallback = std::function<void(const dv::TimeWindow&, const MapOfVariants&)>

Callback method signature alias.

Public Functions

inline SliceJob(const int64_t intervalUS, JobCallback callback)

Create a slice job

Parameters:
  • intervalUS – Job execution interval in microseconds

  • callback – The callback method

inline SliceJob(const size_t number, const TimeSlicingApproach slicing, JobCallback callback)

Create a slice by number job

Parameters:
  • number – Number of elements to be sliced

  • slicing – Slicing method for gaps between numeric slices

  • callback – The callback method

inline void run(const dv::TimeWindow &timeWindow, const MapOfVariants &data)

Public Members

SliceType mType
JobCallback mCallback

The callback method.

int64_t mInterval = -1

Job execution interval in microseconds.

size_t mNumberOfElements = 0

Slice by number configuration value.

TimeSlicingApproach mTimeSlicing = TimeSlicingApproach::BACKWARD

Time slicing method for slicing by number.

int64_t mLastEvaluatedTimestamp = 0

Timestamp specifying the last timestamp the job evaluated over.

class SliceJob

INTERNAL USE ONLY A single job of the EventStreamSlicer

Public Types

enum class SliceType

Values:

enumerator NUMBER
enumerator TIME

Public Functions

inline SliceJob(const SliceType type, const int64_t timeInterval, const size_t numberInterval, std::function<void(const dv::TimeWindow&, PacketType&)> callback)

INTERNAL USE ONLY Creates a new SliceJob of a certain type, interval and callback

Parameters:
  • type – The type of periodicity. Can be either NUMBER or TIME

  • timeInterval – The interval at which the job should be executed

  • numberInterval – The interval at which the job should be executed

  • callback – The callback function to call on execution.

SliceJob() = default
inline void run(const PacketType &packet)

INTERNAL USE ONLY This function establishes how much fresh data is available and how often the callback can be executed on this fresh data. It then creates slices of the data and executes the callback as often as possible.

Parameters:

packet – the storage packet to slice on.

inline void setTimeInterval(const int64_t timeInterval)

INTERNAL USE ONLY Sets the time interval to the supplied value

Parameters:

timeInterval – the new time interval to use

inline void setNumberInterval(const size_t numberInterval)

INTERNAL USE ONLY Sets the number interval to the supplied value

Parameters:

numberInterval – the new interval to use

Public Members

size_t mLastCallEnd = 0

Private Members

SliceType mType = SliceType::TIME
const std::function<void(const TimeWindow&, PacketType&)> mCallback
int64_t mTimeInterval = 0
size_t mNumberInterval = 0
int64_t mLastCallEndTime = 0

Private Static Functions

template<class ElementVector>
static inline ElementVector sliceByNumber(const ElementVector &packet, const size_t fromIndex, const size_t number)
template<class ElementVector>
static inline ElementVector sliceByTime(const ElementVector &packet, const int64_t start, const int64_t end, size_t &endIndex)
class SocketBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/network/socket_base.hpp>

Interface class to define a socket API.

Subclassed by dv::io::network::TCPTLSSocket, dv::io::network::UNIXSocket

Public Types

using CompletionHandler = std::function<void(const boost::system::error_code&, const size_t)>

Callback alias that is used to handle a completed IO operation.

Public Functions

virtual ~SocketBase() = default
virtual bool isOpen() const = 0

Check whether a socket is open and active.

Returns:

True if socket is open, false otherwise.

virtual void close() = 0

Close the underlying socket communication. Async reads/writes can be aborted during this function call.

virtual void write(const asio::const_buffer &buffer, CompletionHandler &&handler) = 0

Write a data buffer to the socket asynchronously. Completion handler is called when write to the socket is complete.

Parameters:
  • buffer – Data buffer to be written to the socket.

  • handler – Completion handler, that is called when write is complete.

virtual void read(const asio::mutable_buffer &buffer, CompletionHandler &&handler) = 0

Read a data buffer from the socket asynchronously. Completion handler is called when read from the socket is complete.

Parameters:
  • buffer – Output buffer to place data from the socket.

  • handler – Completion handler that is called when the read is complete.

virtual void syncWrite(const asio::const_buffer &buffer) = 0

Write data into the socket synchronously, this method is a blocking call which returns when writing data is complete.

Parameters:

buffer – Data to be written into the socket.

virtual void syncRead(const asio::mutable_buffer &buffer) = 0

Read data from the socket synchronously, this method is a blocking call which returns when reading data is complete.

Parameters:

buffer – Output buffer to place data from the socket.

class SortedPacketBuffers

Public Functions

inline void acceptData(parser::ParsedData data)
inline void clearBuffers()
inline std::optional<dv::EventPacket> popEventPacket()
inline std::optional<dv::Frame> popFrame()
inline std::optional<dv::IMUPacket> popImuPacket()
inline std::optional<dv::TriggerPacket> popTriggerPacket()
inline int64_t getEventSeekTime() const

Get latest timestamp of event data stream that has been read from the capture class.

Returns:

Latest processed event timestamp; returns -1 if no data was processed or stream is unavailable.

inline int64_t getFrameSeekTime() const

Get latest timestamp of frames stream that has been read from the capture class.

Returns:

Latest processed frame timestamp; returns -1 if no data was processed or stream is unavailable.

inline int64_t getImuSeekTime() const

Get latest timestamp of imu data that has been read from the capture class.

Returns:

Latest processed imu data timestamp; returns -1 if no data was processed or stream is unavailable.

inline int64_t getTriggerSeekTime() const

Get latest timestamp of trigger data stream that has been read from the capture class.

Returns:

Latest processed trigger timestamp; returns -1 if no data was processed or stream is unavailable.

inline std::variant<std::monostate, dv::EventPacket, dv::Frame, dv::IMUPacket, dv::TriggerPacket> readNextPacket()

Private Types

using PacketCountType = uint64_t

Private Members

mutable std::mutex mDataLock
PacketCountType mPacketCount = 0
int64_t mEventStreamSeek = -1
int64_t mFrameStreamSeek = -1
int64_t mImuStreamSeek = -1
int64_t mTriggerStreamSeek = -1
boost::circular_buffer<std::pair<PacketCountType, dv::EventPacket>> mEvents = {256}
boost::circular_buffer<std::pair<PacketCountType, dv::Frame>> mFrames = {256}
boost::circular_buffer<std::pair<PacketCountType, dv::IMUPacket>> mImu = {256}
boost::circular_buffer<std::pair<PacketCountType, dv::TriggerPacket>> mTriggers = {256}
class SparseEventBlockMatcher

Public Functions

inline explicit SparseEventBlockMatcher(const cv::Size &resolution, const cv::Size &windowSize = cv::Size(24, 24), const int32_t maxDisparity = 40, const int32_t minDisparity = 0, const float minScore = 1.0f)

Initialize sparse event block matcher. This constructor initializes the matcher in non-rectified space, so for accurate results the event coordinates should be already rectified.

Parameters:
  • resolution – Resolution of camera sensors. Assumes same resolution for left and right camera.

  • windowSize – Matching window size.

  • maxDisparity – Maximum disparity value.

  • minDisparity – Minimum disparity value.

  • minScore – Minimum matching score to consider matching valid.

inline explicit SparseEventBlockMatcher(std::unique_ptr<dv::camera::StereoGeometry> geometry, const cv::Size &windowSize = cv::Size(24, 24), const int32_t maxDisparity = 40, const int32_t minDisparity = 0, const float minScore = 1.0f)

Initialize a sparse stereo block matcher with a calibrated stereo geometry. This allows event rectification while calculating the disparity.

Parameters:
  • geometry – Stereo camera geometry.

  • windowSize – Matching window size.

  • maxDisparity – Maximum disparity value.

  • minDisparity – Minimum disparity value.

  • minScore – Minimum matching score to consider matching valid.

template<dv::concepts::Coordinate2DIterable InputPoints>
inline std::vector<PixelDisparity> computeDisparitySparse(const dv::EventStore &left, const dv::EventStore &right, const InputPoints &interestPoints)

Compute sparse disparity on given interest points. The events are accumulated sparsely only on the selected interest point regions. Returns a list of coordinates with their according disparity values, correlations and scores for each disparity match. If rectification is enabled, the returned disparity result will have valid flag set to false if the interest point coordinate lies outside of valid rectified pixel space.

Input events have to be passed in synchronized batches; no time validation is performed during accumulation.

Parameters:
  • left – Synchronised event batch from left camera.

  • right – Synchronised event batch from right camera.

  • interestPoints – List of interest coordinates in unrectified pixel space.

Returns:

A list of disparity results for each input interest point.
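
A minimal sketch of calling this method is shown below. The sensor resolution, interest points and the result field name (valid) are assumptions for illustration; the constructor parameters mirror the defaults documented above.

#include <iostream>
#include <vector>

#include <opencv2/core.hpp>

// Minimal sketch, assuming dv::SparseEventBlockMatcher with already-rectified event input
// (non-rectified constructor, see above).
void computeSparseDisparity(const dv::EventStore &left, const dv::EventStore &right) {
    dv::SparseEventBlockMatcher matcher(cv::Size(640, 480), cv::Size(24, 24), 40, 0, 1.0f);

    // Interest points in pixel coordinates where disparity should be estimated.
    const std::vector<cv::Point2i> interestPoints = {{320, 240}, {100, 200}};

    const auto disparities = matcher.computeDisparitySparse(left, right, interestPoints);

    size_t validCount = 0;
    for (const auto &match : disparities) {
        // The "valid" field name is an assumption based on the description above.
        if (match.valid) {
            validCount++;
        }
    }
    std::cout << "Valid matches: " << validCount << std::endl;
}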

inline const cv::Mat &getLeftMask() const

Get the left camera image mask. The algorithm only accumulates the regions of the frame where actual matching is going to happen. The mask will contain non-zero pixel values where accumulation needs to happen.

Returns:

Interest region mask for left camera.

inline const cv::Mat &getRightMask() const

Get the right camera image mask. The algorithm only accumulates the regions of the frame where actual matching is going to happen. The mask will contain non-zero pixel values where accumulation needs to happen.

Returns:

Interest region mask for right camera.

inline dv::Frame getLeftFrame() const

Get the latest accumulated left frame.

Returns:

Accumulated image of the left camera from last disparity computation step.

inline dv::Frame getRightFrame() const

Get the latest accumulated right frame.

Returns:

Accumulated image of the right camera from last disparity computation step.

inline const cv::Size &getWindowSize() const

Get matching window size.

Returns:

Currently configured matching window size.

inline void setWindowSize(const cv::Size &windowSize)

Set matching window size. This is the size of cropped template image that is matched along the epipolar line of the stereo geometry.

Parameters:

windowSize – New matching window size.

inline int32_t getMaxDisparity() const

Get maximum disparity value.

Returns:

Currently configured maximum disparity value.

inline void setMaxDisparity(const int32_t maxDisparity)

Set maximum measured disparity. This parameter limits the matching space in pixels on the right camera image.

Parameters:

maxDisparity – New maximum disparity value.

inline int32_t getMinDisparity() const

Get minimum disparity value.

Returns:

Currently configured minimum disparity value.

inline void setMinDisparity(const int32_t minDisparity)

Set minimum measured disparity. This parameter limits the matching space in pixels on the right camera image.

Parameters:

minDisparity – New minimum disparity value.

inline float getMinScore() const

Get minimum matching score value.

Returns:

Currently configured minimum matching score value.

inline void setMinScore(const float minimumScore)

Set minimum matching score value to consider the matching valid. If matching score is below this threshold, the value for a point will be set to an invalid value and valid boolean to false.

Score is calculated by applying softmax function on the discrete distribution of correlation values from matching the template left patch on the epipolar line of the right camera image. This retrieves the probability mass function of the correlations. The best match is found by finding the max probability value and score is calculated for the best match by computing z-score over the probabilities.

Parameters:

minimumScore – New minimum score value.
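
The standalone sketch below illustrates the scoring idea described above (a softmax over correlation values followed by a z-score of the best match). It is only an illustration of the concept, not the library's actual implementation.

#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// Rough illustration of the scoring idea: softmax over the correlation values,
// then a z-score of the best match's probability. Degenerate inputs (fewer than
// two values, or all values equal) are not handled here.
double scoreBestMatch(const std::vector<double> &correlations) {
    // Softmax turns the correlations into a probability mass function.
    const double maxCorr = *std::max_element(correlations.begin(), correlations.end());
    std::vector<double> probabilities;
    probabilities.reserve(correlations.size());
    double sum = 0.0;
    for (const double c : correlations) {
        const double e = std::exp(c - maxCorr);
        probabilities.push_back(e);
        sum += e;
    }
    for (double &p : probabilities) {
        p /= sum;
    }

    // z-score of the highest probability over the distribution of probabilities.
    const double mean = std::accumulate(probabilities.begin(), probabilities.end(), 0.0)
                      / static_cast<double>(probabilities.size());
    double variance = 0.0;
    for (const double p : probabilities) {
        variance += (p - mean) * (p - mean);
    }
    variance /= static_cast<double>(probabilities.size());

    const double best = *std::max_element(probabilities.begin(), probabilities.end());
    return (best - mean) / std::sqrt(variance);
}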

Protected Functions

template<dv::concepts::Coordinate2D InputPoint>
inline cv::Rect getPointRoi(const InputPoint &center, const int32_t offsetX, const int32_t stretchX) const
inline void initializeMaskPoint(cv::Mat &mask, const int32_t offsetX, const int32_t stretchX, const cv::Point2i &coord, const std::optional<dv::camera::StereoGeometry::CameraPosition> cameraPosition = std::nullopt) const

Protected Attributes

cv::Mat mLeftMask
cv::Mat mRightMask
dv::Frame mLeftFrame
dv::Frame mRightFrame
dv::EdgeMapAccumulator mLeftAcc
dv::EdgeMapAccumulator mRightAcc
cv::Size mWindowSize
cv::Size mHalfWindowSize
int32_t mMaxDisparity
int32_t mMinDisparity
float mMinScore
std::unique_ptr<dv::camera::StereoGeometry> mStereoGeometry = nullptr
template<class EventStoreType, uint32_t patchDiameter = 8, typename ScalarType = uint8_t>
class SpeedInvariantTimeSurfaceBase : public dv::TimeSurfaceBase<EventStoreType, uint8_t>
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/core.hpp>

A speed invariant time surface, as described by https://arxiv.org/abs/1903.11332

Template Parameters:
  • EventStoreType – Type of underlying event store

  • patchDiameter – Diameter of the patch to apply the speed invariant update. The paper defines parameter r which is half of the diameter value, so for an r = 5, use diameter = 2 * r or 10 in this case. The update is performed using eigen optimized routines, so the value has limits: it has to be in range (0; 16) and divisible by 2. By default set to 8 which gives the best performance.

Public Functions

inline explicit SpeedInvariantTimeSurfaceBase(const cv::Size &resolution)

Create a speed invariant time surface with known image dimensions.

Parameters:

resolution – Dimensions of the expected event data.

inline virtual SpeedInvariantTimeSurfaceBase &operator<<(const EventStoreType &store) override

Inserts the event store into the speed invariant time surface.

Parameters:

store – The event store to be added

Returns:

A reference to this TimeSurface.

inline virtual SpeedInvariantTimeSurfaceBase &operator<<(const typename EventStoreType::iterator::value_type &event) override

Inserts the event into the speed invariant time surface.

Parameters:

event – The event to be added

Returns:

A reference to this TimeSurface.

inline virtual void accept(const EventStoreType &store) override

Inserts the event store into the speed invariant time surface.

Parameters:

store – The event store to be added

inline virtual void accept(const typename EventStoreType::iterator::value_type &event) override

Inserts the event into the speed invariant time surface.

Parameters:

event – The event to be added
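
A short usage sketch, assuming the dv::EventStore instantiation of this template and a placeholder sensor resolution:

#include <dv-processing/core/core.hpp>

// Minimal sketch: accumulate events into a speed invariant time surface and render it.
dv::Frame renderSpeedInvariantSurface(const dv::EventStore &events) {
    dv::SpeedInvariantTimeSurfaceBase<dv::EventStore> surface(cv::Size(640, 480));
    surface << events;               // insert an event batch
    return surface.generateFrame();  // inherited from TimeSurfaceBase
}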

Protected Types

using BaseClassType = TimeSurfaceBase<EventStoreType, ScalarType>

Private Members

int64_t mLatestPixelValue
struct SPIConfigurationParameters

Public Members

uint8_t moduleAddress
uint8_t parameterAddress
uint32_t parameterValue
template<typename>
struct std_function_exact

std::function substitute with exact signature matching.

template<typename R, typename ...Args>
struct std_function_exact<R(Args...)> : public std::function<R(Args...)>

Public Functions

template<typename T, std::enable_if_t<std::is_invocable_v<T, Args...> && std::is_same_v<R, std::invoke_result_t<T, Args...>>, bool> = true>
inline std_function_exact(T &&t)
struct StereoCalibration

Public Functions

StereoCalibration() = default
inline explicit StereoCalibration(const std::string_view leftCameraName_, const std::string_view rightCameraName_, const std::span<const float> fundamentalMatrix_, const std::span<const float> essentialMatrix_, const std::optional<Metadata> &metadata_)
inline explicit StereoCalibration(const boost::property_tree::ptree &tree)
inline boost::property_tree::ptree toPropertyTree() const
inline bool operator==(const StereoCalibration &rhs) const
inline Eigen::Matrix3f getFundamentalMatrix() const

Retrieve the fundamental matrix as Eigen::Matrix3f.

Returns:

Fundamental matrix.

inline Eigen::Matrix3f getEssentialMatrix() const

Retrieve the essential matrix as Eigen::Matrix3f.

Returns:

Essential matrix.

Public Members

std::string leftCameraName

Name of the left camera.

std::string rightCameraName

Name of the right camera.

std::vector<float> fundamentalMatrix

Stereo calibration Fundamental Matrix.

std::vector<float> essentialMatrix

Stereo calibration Essential Matrix.

std::optional<Metadata> metadata

Metadata.

Friends

inline friend std::ostream &operator<<(std::ostream &os, const StereoCalibration &calibration)
class StereoCameraRecording

Public Functions

inline StereoCameraRecording(const std::filesystem::path &aedat4Path, const std::string &leftCameraName, const std::string &rightCameraName)

Create a reader for stereo camera recording. Expects at least one stream from two cameras available. Prior knowledge of stereo setup is required, otherwise it is not possible to differentiate between left and right cameras. This is just a convenience class that gives access to distinguished data streams in the recording.

Parameters:
  • aedat4Path – Path to the aedat4 file.

  • leftCameraName – Name of the left camera.

  • rightCameraName – Name of the right camera.

inline MonoCameraRecording &getLeftReader()

Access the left camera.

Returns:

A reference to the left camera reader.

inline MonoCameraRecording &getRightReader()

Access the right camera.

Returns:

A reference to the right camera reader.
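
For illustration, a minimal sketch of opening a stereo recording; the header path, file path and camera names are placeholders, and getNextEventBatch() is assumed from the MonoCameraRecording interface.

#include <dv-processing/io/stereo_camera_recording.hpp>  // assumed header location

// Minimal sketch: read one event batch from the left camera of a stereo recording.
void readStereoRecording() {
    dv::io::StereoCameraRecording recording(
        "stereo_recording.aedat4", "left_camera_name", "right_camera_name");

    auto &leftReader  = recording.getLeftReader();
    auto &rightReader = recording.getRightReader();

    // getNextEventBatch() is assumed from the MonoCameraRecording interface.
    if (const auto events = leftReader.getNextEventBatch(); events.has_value()) {
        // process left-camera events ...
    }
}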

Private Members

std::shared_ptr<ReadOnlyFile> mReader = nullptr
MonoCameraRecording mLeftCamera
MonoCameraRecording mRightCamera
class StereoCameraWriter

Public Functions

inline StereoCameraWriter(const std::filesystem::path &aedat4Path, const MonoCameraWriter::Config &leftConfig, const MonoCameraWriter::Config &rightConfig, const dv::io::support::TypeResolver &resolver = dv::io::support::defaultTypeResolver)

Open a file and pass left / right camera configurations manually.

Parameters:
  • aedat4Path – Path to output file.

  • leftConfig – Left camera output stream configuration.

  • rightConfig – Right camera output stream configuration.

  • resolver – Type resolver for the output file.

inline StereoCameraWriter(const std::filesystem::path &aedat4Path, camera::SyncCameraInputBase &leftCamera, camera::SyncCameraInputBase &rightCamera, const CompressionType compression = CompressionType::LZ4, const dv::io::support::TypeResolver &resolver = dv::io::support::defaultTypeResolver)

Open a file and use capture device to inspect the capabilities of the cameras. This will create all possible output streams the devices can supply.

Parameters:
  • aedat4Path – Path to output file.

  • leftCamera – Capture object to inspect capabilities of the left camera.

  • rightCamera – Capture object to inspect capabilities of the right camera.

  • compression – Compression to be used for the output file.

  • resolver – Type resolver for the output file.

Public Members

MonoCameraWriter left

Left writing instance.

MonoCameraWriter right

Right writing instance.
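
A sketch of the manual-configuration constructor; the header path, the EventOnlyConfig factory and the per-camera writeEvents() calls are assumptions based on the MonoCameraWriter API, and the camera names are placeholders.

#include <dv-processing/io/stereo_camera_writer.hpp>  // assumed header location

// Minimal sketch: write left/right event streams into a single stereo file.
void writeStereoEvents(const dv::EventStore &leftEvents, const dv::EventStore &rightEvents) {
    const auto leftConfig
        = dv::io::MonoCameraWriter::EventOnlyConfig("left_camera_name", cv::Size(640, 480));
    const auto rightConfig
        = dv::io::MonoCameraWriter::EventOnlyConfig("right_camera_name", cv::Size(640, 480));

    dv::io::StereoCameraWriter writer("stereo_output.aedat4", leftConfig, rightConfig);

    writer.left.writeEvents(leftEvents);   // each side behaves like a MonoCameraWriter
    writer.right.writeEvents(rightEvents);
}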

Private Functions

inline std::string createStereoHeader(const dv::io::support::TypeResolver &resolver)
inline void configureStreamIds()

Private Members

MonoCameraWriter::Config leftUpdatedConfig
MonoCameraWriter::Config rightUpdatedConfig
StreamIdContainer leftIds
StreamIdContainer rightIds
MonoCameraWriter::StreamDescriptorMap mLeftOutputStreamDescriptors
MonoCameraWriter::StreamDescriptorMap mRightOutputStreamDescriptors
std::shared_ptr<WriteOnlyFile> file

Private Static Functions

static inline void configureCameraOutput(int32_t &index, dv::io::support::XMLTreeNode &mRoot, MonoCameraWriter::Config &config, const std::string &compression, StreamIdContainer &ids, MonoCameraWriter::StreamDescriptorMap &streamDescriptors, const dv::io::support::TypeResolver &resolver, const std::string &outputPrefix)
class StereoGeometry
#include </builds/inivation/dv/dv-processing/include/dv-processing/camera/stereo_geometry.hpp>

A class that performs stereo geometry operations and rectification of a stereo camera.

Public Types

enum class CameraPosition

Position enum for a single camera in a stereo configuration.

Values:

enumerator LEFT
enumerator RIGHT
enum class FunctionImplementation

Values:

enumerator LUT
enumerator SUB_PIXEL
using UniquePtr = std::unique_ptr<StereoGeometry>
using SharedPtr = std::shared_ptr<StereoGeometry>

Public Functions

inline StereoGeometry(const CameraGeometry &leftCamera, const CameraGeometry &rightCamera, const dv::kinematics::Transformationf &transformToLeft, std::optional<cv::Size> rectifiedResolution = std::nullopt)

Initialize a stereo geometry class using two camera geometries for each of the stereo camera pair and a transformation matrix that describes the transformation from right camera to the left.

Parameters:
  • leftCamera – Left camera geometry.

  • rightCamera – Right camera geometry.

  • transformToLeft – Homogeneous 3D transformation from the right camera to the left camera (dv::kinematics::Transformationf, equivalent to a 4x4 homogeneous transformation matrix).

  • rectifiedResolution – Resolution of the rectified image plane when remapping events/points/images from either the left or right camera (see remapEvents()/remapImage()). This can be the same, smaller, or larger than either the left or right camera resolutions, where downsampling/upsampling occurs if the rectifiedResolution is smaller/larger than the camera resolution. Defaults to the left camera resolution if not provided.

inline StereoGeometry(const calibrations::CameraCalibration &leftCalibration, const calibrations::CameraCalibration &rightCalibration, std::optional<cv::Size> rectifiedResolution = std::nullopt)

Create a stereo geometry class from left and right camera calibration instances.

Parameters:
  • leftCalibration – Left camera calibration.

  • rightCalibration – Right camera calibration.

  • rectifiedResolution – Resolution of the rectified image plane when remapping events/points/images from either the left or right camera (see above constructor).

inline cv::Mat remapImage(const CameraPosition cameraPosition, const cv::Mat &image) const

Apply remapping to an input image to rectify it.

Parameters:
  • cameraPosition – Indication whether image is from left or right camera.

  • image – Input image.

Returns:

Rectified image.

inline dv::EventStore remapEvents(const CameraPosition cameraPosition, const dv::EventStore &events) const

Apply remapping on input events.

Parameters:
  • cameraPosition – Indication whether image is from left or right camera.

  • events – Input events.

Returns:

Event with rectified coordinates.

template<dv::concepts::Coordinate2DConstructible OutputPoint = cv::Point2i, dv::concepts::Coordinate2D InputPoint>
inline std::optional<OutputPoint> remapPoint(const CameraPosition position, const InputPoint &point) const

Remap a point coordinate from original camera pixel space into undistorted and rectified pixel space.

Parameters:
  • position – Camera position in the stereo setup.

  • point – Coordinates in original camera pixel space.

Template Parameters:
  • OutputPoint – Output point class

  • InputPoint – Input point class (automatically inferred)

Returns:

Undistorted and rectified coordinates or std::nullopt if the resulting coordinates are outside of valid output pixel range.

template<dv::concepts::Coordinate2DConstructible OutputPoint = cv::Point2i, FunctionImplementation Implementation = FunctionImplementation::LUT, dv::concepts::Coordinate2D InputPoint>
inline OutputPoint unmapPoint(const CameraPosition position, const InputPoint &point) const

Unmap a point coordinate from undistorted and rectified pixel space into the original distorted pixel space.

Parameters:
  • position – Camera position in the stereo setup

  • point – Coordinates in undistorted rectified pixel space.

Template Parameters:
  • OutputPoint – Output point class

  • Implementation – Implementation type: LUT - performs a look-up operation on a precomputed look-up table, SubPixel - performs full computations and retrieves exact coordinates.

  • InputPoint – Input point class (automatically inferred)

Returns:

Coordinates of the pixel in original pixel space.

inline dv::camera::CameraGeometry getLeftCameraGeometry() const

Retrieve left camera geometry class that can project coordinates into stereo rectified space.

Returns:

Camera geometry instance.

inline dv::camera::CameraGeometry getRightCameraGeometry() const

Retrieve right camera geometry class that can project coordinates into stereo rectified space.

Returns:

Camera geometry instance.

inline dv::DepthEventStore estimateDepth(const cv::Mat &disparity, const dv::EventStore &events, const float disparityScale = 16.f) const

Estimate depth given the disparity map and a list of events. The coordinates will be rectified and a disparity value will be looked up in the disparity map. The depth of each event is calculated using an equation: depth = (focalLength * baseline) / disparity. focalLength is expressed in pixels, baseline in meters.

For practical applications, depth estimation should be evaluated prior to any use. The directly estimated depth values can contain measurable errors which should be accounted for - the errors can usually be within 10-20% fixed absolute error distance. Usually this comes from various inaccuracies and can be mitigated by introducing a correction factor for the depth estimate.

Parameters:
  • disparity – Disparity map.

  • events – Input events.

  • disparityScale – Scale of disparity value in the disparity map, if subpixel accuracy is enabled in the block matching, this value will be equal to 16.

Returns:

A depth event store, the events will contain the same information as in the input, but additionally will have the depth value in meters. Events whose coordinates are outside of image bounds after rectification will be skipped.
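
A minimal sketch of combining rectification and depth estimation. The calibration objects, input events and disparity map are assumed to come from elsewhere (e.g. a calibration file reader and a block matcher), and the calibration type name follows the constructor shown above.

#include <dv-processing/camera/stereo_geometry.hpp>

// Minimal sketch: build a stereo geometry from two calibrations and estimate event depth.
dv::DepthEventStore depthFromDisparity(const dv::camera::calibrations::CameraCalibration &leftCalibration,
    const dv::camera::calibrations::CameraCalibration &rightCalibration, const dv::EventStore &leftEvents,
    const cv::Mat &disparityMap) {
    const dv::camera::StereoGeometry geometry(leftCalibration, rightCalibration);

    // Rectified coordinates, e.g. for visualization.
    const dv::EventStore rectified
        = geometry.remapEvents(dv::camera::StereoGeometry::CameraPosition::LEFT, leftEvents);

    // estimateDepth() rectifies internally, so the raw (unrectified) events are passed here.
    // A disparityScale of 16 matches block matchers with sub-pixel accuracy enabled.
    return geometry.estimateDepth(disparityMap, leftEvents, 16.f);
}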

inline float convertDisparityToDepth(const float disparity) const

Convert a disparity value (in rectified coordinates) [px] to a depth value [mm] using the mapping depth = (focalLength * baseline) / disparity

Parameters:

disparity – Input disparity value [px]

Returns:

Corresponding depth value [mm]

inline dv::DepthFrame toDepthFrame(const cv::Mat &disparity, const float disparityScale = 16.f) const

Convert a disparity map into a depth frame. Each disparity value is converted into depth using the equation depth = (focalLength * baseline) / disparity. Output frame contains distance values expressed in integer values of millimeter distance.

NOTE: Output depth frame will not have a timestamp value, it is up to the user of this method to set correct timestamp of the disparity map.

Parameters:
  • disparity – Input disparity map.

  • disparityScale – Scale of disparity value in the disparity map, if subpixel accuracy is enabled in the block matching, this value will be equal to 16.

Returns:

A converted depth frame.

Public Static Functions

static inline dv::kinematics::Transformationf computeTransformBetween(const calibrations::CameraCalibration &src, const calibrations::CameraCalibration &target)

Compute the homogeneous transformation that transforms a point from a source camera to a target camera based on their respective calibrations.

Parameters:
  • src – Camera calibration for the source camera.

  • target – Camera calibration for the target camera.

Returns:

4x4 transformation from source to target.

Private Functions

inline void createLUTs(const cv::Size &resolution, const cv::Matx33f &cameraMatrix, const cv::Mat &distortion, const cv::Mat &R, const cv::Mat &P, std::vector<uint8_t> &outputMask, std::vector<cv::Point2i> &outputRemapLUT) const
template<concepts::Coordinate3DConstructible Output, concepts::Coordinate2D Input>
inline Output backProject(const StereoGeometry::CameraPosition position, const Input &pixel) const

Private Members

cv::Mat mLeftRemap1
cv::Mat mLeftRemap2
cv::Mat mRightRemap1
cv::Mat mRightRemap2
cv::Mat mLeftProjection
cv::Mat mRightProjection
std::vector<uint8_t> mLeftValidMask
std::vector<uint8_t> mRightValidMask
std::vector<cv::Point2i> mLeftRemapLUT
std::vector<cv::Point2i> mRightRemapLUT
std::vector<cv::Point2i> mLeftUnmapLUT
std::vector<cv::Point2i> mRightUnmapLUT
cv::Size mLeftResolution
cv::Size mRightResolution
std::vector<float> mDistLeft
DistortionModel mLeftDistModel
std::vector<float> mDistRight
DistortionModel mRightDistModel
cv::Mat RN[2]
cv::Mat Q
dv::kinematics::Transformationf mLeftRectifierInverse
dv::kinematics::Transformationf mRightRectifierInverse
const dv::camera::CameraGeometry mOriginalLeft
const dv::camera::CameraGeometry mOriginalRight
float mBaselineFocal

Private Static Functions

template<dv::concepts::Coordinate2DConstructible PointType = cv::Point2d>
static inline std::vector<PointType> initCoordinateList(const cv::Size &resolution)
static inline dv::EventStore remapEventsInternal(const dv::EventStore &events, const cv::Size &resolution, const std::vector<uint8_t> &mask, const std::vector<cv::Point2i> &remapLUT)

Friends

inline friend std::ostream &operator<<(std::ostream &os, const CameraPosition &var)
inline friend std::ostream &operator<<(std::ostream &os, const FunctionImplementation &var)
struct Stream
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/stream.hpp>

Structure defining a stream of data. This class holds metadata information of a stream - id, name, source, resolution (if applicable), as well as data type, compression, and other technical information needed for an application to be able to send or receive streams of data.

Public Functions

Stream() = default

Default constructor with no information about the stream.

inline Stream(const int32_t id, const std::string_view name, const std::string_view sourceName, const std::string_view typeIdentifier, const dv::io::support::TypeResolver &resolver = dv::io::support::defaultTypeResolver)

Manual stream configuration.

Parameters:
  • id – Stream ID.

  • name – Name of the stream.

  • sourceName – Stream source, usually a camera name.

  • typeIdentifier – Flatbuffer compiler generated type identifier string, unique for the stream type.

  • resolver – Type resolver, supports default streams, used only for custom generated type support.

inline void addMetadata(const std::string &name, const dv::io::support::VariantValueOwning &value)

Add metadata to the stream. If an entry already exists, it will be replaced with the new value.

Parameters:
  • name – Name of the metadata entry.

  • value – Metadata value.

inline std::optional<dv::io::support::VariantValueOwning> getMetadataValue(const std::string_view name) const

Get a metadata attribute value.

Parameters:

name – Name of a metadata attribute.

Returns:

Metadata value in a variant or std::nullopt if it’s not found.

inline void setTypeDescription(const std::string &description)

Set type description. This only sets type description metadata field.

Parameters:

description – Metadata string that describes the type in this stream.

inline void setModuleName(const std::string &moduleName)

Set module name that originally produces the data. This only sets the original module name metadata field.

Parameters:

moduleName – Module name that originally produces the data.

inline void setOutputName(const std::string &outputName)

Set original output name. This only sets the original output name metadata field.

Parameters:

outputName – Name of the output that produces the data, usually referring to DV module output.

inline void setCompression(const dv::CompressionType compression)

Set compression metadata field for this stream. This only sets the metadata field of this stream.

Parameters:

compression – Type of compression.

inline std::optional<std::string> getTypeDescription() const

Get type description.

Returns:

Type description string if available, std::nullopt otherwise.

inline std::optional<std::string> getModuleName() const

Get module name.

Returns:

Module name string if available, std::nullopt otherwise.

inline std::optional<std::string> getOutputName() const

Get output name.

Returns:

Output name string if available, std::nullopt otherwise.

inline std::optional<dv::CompressionType> getCompression() const

Get compression type string.

Returns:

compression type string if available, std::nullopt otherwise.

inline void setAttribute(const std::string_view name, const dv::io::support::VariantValueOwning &value)

Set an attribute of this stream; if the attribute field does not exist, it will be created.

Parameters:
  • name – Name of the attribute.

  • value – Attribute value.

inline std::optional<dv::io::support::VariantValueOwning> getAttribute(const std::string_view name) const

Get attribute value given its name.

Parameters:

name – Name of the attribute.

Returns:

Returns a variant of the value if an attribute with the given name exists, std::nullopt otherwise.

template<typename Type>
inline std::optional<Type> getAttributeValue(const std::string_view name) const

Get attribute value given its name.

Template Parameters:

Type – Type of the attribute.

Parameters:

name – Name of the attribute.

Returns:

Returns the attribute value if an attribute with the given name exists, std::nullopt otherwise.

inline std::optional<cv::Size> getResolution() const

Get resolution of this stream by parsing metadata.

Returns:

Stream resolution or std::nullopt if resolution is not available.

inline void setResolution(const cv::Size &resolution)

Set the stream resolution in the metadata of this stream.

Parameters:

resolution – Stream resolution.

inline std::optional<std::string> getSource() const

Get source name (usually the camera name) from metadata of the stream.

Returns:

Stream source or std::nullopt if a source name is not available.

inline void setSource(const std::string &source)

Set a source name of this stream, usually camera name.

Parameters:

source – Source name, usually camera name string.

Public Members

int32_t mId = 0

Stream ID.

std::string mName

Name of the stream.

std::string mTypeIdentifier

Stream type identifier.

dv::types::Type mType

Internal type definition.

dv::io::support::XMLTreeNode mXMLNode

XML tree node that can be used to encode information about the stream.

Public Static Functions

static inline Stream EventStream(const int32_t id, const std::string &name, const std::string &sourceName, const cv::Size &resolution)

Create an event stream.

Parameters:
  • id – Stream ID.

  • name – Name of the stream.

  • sourceName – Stream source, usually a camera name.

  • resolution – Event sensor resolution.

Returns:

Stream definition.

static inline Stream FrameStream(const int32_t id, const std::string &name, const std::string &sourceName, const cv::Size &resolution)

Create a frame stream.

Parameters:
  • id – Stream ID.

  • name – Name of the stream.

  • sourceName – Stream source, usually a camera name.

  • resolution – Frame sensor resolution.

Returns:

Stream definition.

static inline Stream IMUStream(const int32_t id, const std::string &name, const std::string &sourceName)

Create an IMU stream.

Parameters:
  • id – Stream ID.

  • name – Name of the stream.

  • sourceName – Stream source, usually a camera name.

Returns:

Stream definition.

static inline Stream TriggerStream(const int32_t id, const std::string &name, const std::string &sourceName)

Create a trigger stream.

Parameters:
  • id – Stream ID.

  • name – Name of the stream.

  • sourceName – Stream source, usually a camera name.

Returns:

Stream definition.

template<class Type>
static inline Stream TypedStream(const int32_t id, const std::string &name, const std::string &sourceName, const dv::io::support::TypeResolver &resolver = dv::io::support::defaultTypeResolver)

Create a stream by providing a stream packet type as a template parameter.

Template Parameters:

Type – Type of the stream.

Parameters:
  • id – Stream ID.

  • name – Name of the stream.

  • sourceName – Stream source, usually a camera name.

  • resolver – Type resolver, supports default streams, used only for custom generated type support.

Returns:

Stream definition.
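
A short sketch of defining a stream and round-tripping metadata through the helper setters and getters. The camera name is a placeholder and the VariantValueOwning construction from a std::string is an assumption.

#include <dv-processing/io/stream.hpp>

#include <string>

// Minimal sketch: define an event stream and attach metadata.
void defineEventStream() {
    dv::io::Stream stream
        = dv::io::Stream::EventStream(0, "events", "camera_name", cv::Size(640, 480));

    stream.setCompression(dv::CompressionType::LZ4);
    // Construction of VariantValueOwning from a std::string is assumed here.
    stream.addMetadata("customEntry", dv::io::support::VariantValueOwning(std::string("value")));

    // Resolution is stored as metadata and parsed back on request.
    if (const auto resolution = stream.getResolution(); resolution.has_value()) {
        // resolution->width and resolution->height are available here
    }
}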

struct StreamDescriptor

Public Functions

inline explicit StreamDescriptor(const Stream &stream)

Public Members

size_t mSeekIndex = 0
dv::io::Stream mStream
std::map<std::string, std::string> mMetadata
struct StreamDescriptor

Public Functions

inline ~StreamDescriptor()
inline StreamDescriptor(uint32_t id, const types::Type *type)

Public Members

uint32_t id
const dv::types::Type *type
int64_t lastTimestamp
void *elementBuffer
std::function<void(void*)> freeElementBufferCall = nullptr
struct StreamIdContainer

Public Members

int32_t mEventStreamId = -1
int32_t mImuStreamId = -1
int32_t mTriggerStreamId = -1
int32_t mFrameStreamId = -1
template<class PacketType>
class StreamSlicer
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/stream_slicer.hpp>

The StreamSlicer is a class that takes incoming timestamped data, stores it in a minimal way and invokes functions at individual periods.

Public Functions

StreamSlicer() = default
inline void accept(const PacketType &data)

Add a full packet to the streaming buffer and evaluate jobs. This function copies the data over.

Parameters:

data – the packet to be added to the buffer.

template<class ElementType>
inline void accept(const ElementType &element)

Adds a single element of a stream to the slicer buffer and evaluate jobs.

Parameters:

element – the element to be added to the buffer

inline void accept(PacketType &&packet)

Adds full stream packet of data to the buffer and evaluates jobs.

Parameters:

packet – the packet to be added to the buffer

inline int doEveryNumberOfElements(const size_t n, std::function<void(PacketType&)> callback)

Adds a number-of-elements triggered job to the Slicer. A job is defined by its interval and callback function. The slicer calls the callback function every time n elements are added to the stream buffer, with the corresponding data. The (cpu) time interval between individual calls to the function depends on the physical event rate as well as the bulk sizes of the incoming data.

Parameters:
  • n – the interval (in number of elements) in which the callback should be called

  • callback – the callback function that gets called on the data every interval

Returns:

A handle to uniquely identify the job.

inline int doEveryNumberOfElements(const size_t n, std::function<void(const dv::TimeWindow&, PacketType&)> callback)

Adds a number-of-elements triggered job to the Slicer. A job is defined by its interval and callback function. The slicer calls the callback function every time n elements are added to the stream buffer, with the corresponding data. The (cpu) time interval between individual calls to the function depends on the physical event rate as well as the bulk sizes of the incoming data.

Parameters:
  • n – the interval (in number of elements) in which the callback should be called

  • callback – the callback function that gets called on the data every interval; the time window containing the sliced interval is also passed

Returns:

A handle to uniquely identify the job.

inline int doEveryTimeInterval(const dv::Duration interval, std::function<void(const PacketType&)> callback)

Adds an element-timestamp-interval triggered job to the Slicer. A job is defined by its interval and callback function. The slicer calls the callback whenever the timestamp difference of an incoming event to the last time the function was called is bigger than the interval. As the timing is based on event times rather than CPU time, the actual time periods are not guaranteed, especially with a low event rate. The (cpu) time interval between individual calls to the function depends on the physical event rate as well as the bulk sizes of the incoming data.

Parameters:
  • interval – the interval in which the callback should be called

  • callback – the callback function that gets called on the data every interval

Returns:

A handle to uniquely identify the job.

inline int doEveryTimeInterval(const dv::Duration interval, std::function<void(const dv::TimeWindow&, const PacketType&)> callback)

Adds an element-timestamp-interval triggered job to the Slicer. A job is defined by its interval and callback function. The slicer calls the callback whenever the timestamp difference of an incoming event to the last time the function was called is bigger than the interval. As the timing is based on event times rather than CPU time, the actual time periods are not guaranteed, especially with a low event rate. The (cpu) time interval between individual calls to the function depends on the physical event rate as well as the bulk sizes of the incoming data.

Parameters:
  • interval – the interval in which the callback should be called

  • callback – the callback function that gets called with the time window information and the data as arguments every interval

Returns:

An id to uniquely identify the job.
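
A short sketch, using the dv::EventStore instantiation of the slicer; the interval values are placeholders. In real use the slicer would normally outlive the data feed and be fed repeatedly.

#include <dv-processing/core/stream_slicer.hpp>

// Minimal sketch: slice an event stream into 33 ms windows and 1000-element batches.
void sliceEvents(const dv::EventStore &incomingEvents) {
    dv::StreamSlicer<dv::EventStore> slicer;

    slicer.doEveryTimeInterval(dv::Duration(33'000), [](const dv::EventStore &events) {
        // Called whenever 33 ms of event time have been accumulated.
    });

    slicer.doEveryNumberOfElements(1000, [](dv::EventStore &events) {
        // Called for every 1000 incoming events.
    });

    slicer.accept(incomingEvents);  // feeding data evaluates all registered jobs
}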

inline bool hasJob(const int jobId) const

Returns true if the slicer contains the slicejob with the provided id

Parameters:

jobId – the id of the slicejob in question

Returns:

true, if the slicer contains the given slicejob

inline void removeJob(const int jobId)

Removes the given job from the list of current jobs.

Parameters:

jobId – The job id to be removed

inline void modifyTimeInterval(const int jobId, const dv::Duration timeInterval)

Modifies the time interval of the supplied job to the requested value

Parameters:
  • jobId – the job whose time interval should be changed

  • timeInterval – the new time interval value

inline void modifyNumberInterval(const int jobId, const size_t numberInterval)

Modifies the number interval of the supplied job to the requested value

Parameters:
  • jobId – the job whose number interval should be changed

  • numberInterval – the new number interval value

Private Functions

inline void evaluate()

Should get called as soon as there is fresh data available. It loops through all jobs and determines if they can run on the new data. The jobs get executed as often as possible. Afterwards, all data that has been processed by all jobs gets discarded.

Private Members

PacketType mStorePacket

Global storage packet that holds just as many data elements as minimally required for all outstanding calls.

std::map<int, SliceJob> mSliceJobs

List of all the sliceJobs.

int mHashCounter = 0
class SyncCameraInputBase : public dv::io::camera::CameraInputBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/camera/sync_camera_input_base.hpp>

Camera input base class to abstract live camera and recorded files with a common interface.

Subclassed by dv::io::camera::DAVIS, dv::io::camera::DVS128, dv::io::camera::DVXplorer

Public Functions

virtual bool isMaster() const = 0

Report if this camera is a clock synchronization master.

Returns:

true if clock master, false otherwise.

inline bool isSynchronized() const

Report if this camera is properly synchronized with others in a multi-camera scenario.

Returns:

True if the camera is synchronized with others, false otherwise.

inline void synchronizeWith(const std::unique_ptr<SyncCameraInputBase> &secondary)

Synchronize this camera with another. This camera is expected to be the clock synchronization master.

Parameters:

secondary – secondary camera.

inline void synchronizeWith(SyncCameraInputBase &secondary)

Synchronize this camera with another. This camera is expected to be the clock synchronization master.

Parameters:

secondary – secondary camera.

inline void synchronizeWith(SyncCameraInputBase *secondary)

Synchronize this camera with another. This camera is expected to be the clock synchronization master.

Parameters:

secondary – secondary camera.

inline void synchronizeWith(const std::span<SyncCameraInputBase*> secondaryCameras)

Synchronize this camera with multiple others. This camera is expected to be the clock synchronization master.

Parameters:

secondaryCameras – secondary cameras (one or more).
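
A sketch of wiring up two cameras, assuming both are already-constructed SyncCameraInputBase-derived capture objects and that the hardware synchronization cabling determines which device is the clock master:

#include <dv-processing/io/camera/sync_camera_input_base.hpp>

// Minimal sketch: let the clock master drive the timestamp reset of a secondary camera.
void synchronizeCameras(dv::io::camera::SyncCameraInputBase &master,
    dv::io::camera::SyncCameraInputBase &secondary) {
    if (master.isMaster()) {
        master.synchronizeWith(secondary);
    }
}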

Protected Types

enum class SyncState

Values:

enumerator WAIT_RESET
enumerator GOT_RESET
enumerator SYNC_OK

Protected Functions

virtual void sendTimestampReset() = 0

Send a timestamp reset command to the device.

virtual void setTimestampOffset(std::chrono::microseconds timestampOffset) = 0

Set a new timestamp offset value for the camera.

Parameters:

timestampOffset – New timestamp offset value in microseconds.

inline void waitForTimestampReset() const

Wait until a timestamp reset is observed from this camera.

inline bool gotReset() const

Report if we observed the timestamp reset from the device.

Returns:

Whether we got a timestamp reset or not.

Protected Attributes

std::atomic<SyncState> mSyncState = {SyncState::WAIT_RESET}

Protected Static Attributes

static constexpr std::chrono::microseconds TIME_SYNC_TIMEOUT = {5'000'000}

Friends

friend class dv_capture_node::CaptureNode
friend class dv_runtime::TimeSync
class TCPTLSSocket : public dv::io::network::SocketBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/network/tcp_tls_socket.hpp>

Minimal wrapper of TCP socket with optional TLS encryption.

Public Types

using socketType = asioTCP::socket

Public Functions

inline TCPTLSSocket(asioTCP::socket &&socket, const bool tlsEnabled, const asioSSL::stream_base::handshake_type tlsHandshake, asioSSL::context &tlsContext)

Create a TCP socket with optional TLS encryption.

Parameters:
  • socket – A connected TCP socket instance.

  • tlsEnabled – Whether TLS encryption is enabled, if true, TLS handshake will be immediately performed during construction.

  • tlsHandshake – Type of TLS handshake, this is ignored if TLS is disabled.

  • tlsContext – Pre-configured TLS context for encryption.

inline ~TCPTLSSocket() override
inline virtual bool isOpen() const override

Check whether socket is open and active.

Returns:

True if socket is open, false otherwise.

inline bool isSecured() const

Check whether socket has encryption enabled.

Returns:

True if socket has encryption enabled, false otherwise.

inline virtual void close() override

Close underlying TCP socket cleanly.

inline virtual void write(const asio::const_buffer &buf, SocketBase::CompletionHandler &&wrHandler) override

Write handler needs the following signature: void (const boost::system::error_code &, size_t)

inline virtual void read(const asio::mutable_buffer &buf, SocketBase::CompletionHandler &&rdHandler) override

Read handler needs the following signature: void (const boost::system::error_code &, size_t)

inline virtual void syncWrite(const asio::const_buffer &buf) override

Blocking write data to the socket.

Parameters:

buf – Data to write.

inline virtual void syncRead(const asio::mutable_buffer &buf) override

Blocking read from socket.

Parameters:

buf – Buffer for data to be read into.

inline asioTCP::endpoint local_endpoint() const

Retrieve local endpoint.

Returns:

Local endpoint.

inline asioIP::address local_address() const

Get the local address of the current endpoint.

Returns:

IP address of the local connection.

inline uint16_t local_port() const

Get local port number.

Returns:

Local port number.

inline asioTCP::endpoint remote_endpoint() const
inline asioIP::address remote_address() const

Remote endpoint IP address.

Returns:

Remote endpoint IP address.

inline uint16_t remote_port() const

Get remote endpoint port number.

Returns:

Remote endpoint port number.

Private Functions

inline asioTCP::socket &baseSocket()

Private Members

asioTCP::endpoint mLocalEndpoint
asioTCP::endpoint mRemoteEndpoint
asioSSL::stream<asioTCP::socket> mSocket
bool mSocketClosed = false
bool mSecureConnection = false
class ThreadExtra

Public Static Functions

static inline std::string getName()
static inline bool setName(const std::string &name)
static inline bool setPriorityHighest()
class ThreadNameSwitch

Public Functions

inline explicit ThreadNameSwitch(const std::string &temporaryName)
inline ~ThreadNameSwitch()

Private Members

std::string originalName
struct TimedKeyPoint : public flatbuffers::NativeTable

Public Types

typedef TimedKeyPointFlatbuffer TableType

Public Functions

inline TimedKeyPoint()
inline TimedKeyPoint(const Point2f &_pt, float _size, float _angle, float _response, int32_t _octave, int32_t _class_id, int64_t _timestamp)

Public Members

Point2f pt
float size
float angle
float response
int32_t octave
int32_t class_id
int64_t timestamp

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()
struct TimedKeyPointBuilder

Public Functions

inline void add_pt(const Point2f *pt)
inline void add_size(float size)
inline void add_angle(float angle)
inline void add_response(float response)
inline void add_octave(int32_t octave)
inline void add_class_id(int32_t class_id)
inline void add_timestamp(int64_t timestamp)
inline explicit TimedKeyPointBuilder(flatbuffers::FlatBufferBuilder &_fbb)
TimedKeyPointBuilder &operator=(const TimedKeyPointBuilder&)
inline flatbuffers::Offset<TimedKeyPointFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct TimedKeyPointFlatbuffer : private flatbuffers::Table

Public Types

typedef TimedKeyPoint NativeTableType

Public Functions

inline const Point2f *pt() const

coordinates of the keypoints.

inline float size() const

diameter of the meaningful keypoint neighborhood.

inline float angle() const

computed orientation of the keypoint (-1 if not applicable); it is in [0,360) degrees and measured relative to the image coordinate system, i.e. clockwise.

inline float response() const

the response by which the strongest keypoints have been selected. Can be used for further sorting or subsampling.

inline int32_t octave() const

octave (pyramid layer) from which the keypoint has been extracted.

inline int32_t class_id() const

object class (if the keypoints need to be clustered by an object they belong to).

inline int64_t timestamp() const

Timestamp (µs).

inline bool Verify(flatbuffers::Verifier &verifier) const
inline TimedKeyPoint *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(TimedKeyPoint *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(TimedKeyPoint *_o, const TimedKeyPointFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<TimedKeyPointFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const TimedKeyPoint *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
struct TimedKeyPointPacket : public flatbuffers::NativeTable

Public Types

typedef TimedKeyPointPacketFlatbuffer TableType

Public Functions

inline TimedKeyPointPacket()
inline TimedKeyPointPacket(const std::vector<TimedKeyPoint> &_elements)

Public Members

std::vector<TimedKeyPoint> elements

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()

Friends

inline friend std::ostream &operator<<(std::ostream &os, const TimedKeyPointPacket &packet)
struct TimedKeyPointPacketBuilder

Public Functions

inline void add_elements(flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<TimedKeyPointFlatbuffer>>> elements)
inline explicit TimedKeyPointPacketBuilder(flatbuffers::FlatBufferBuilder &_fbb)
TimedKeyPointPacketBuilder &operator=(const TimedKeyPointPacketBuilder&)
inline flatbuffers::Offset<TimedKeyPointPacketFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct TimedKeyPointPacketFlatbuffer : private flatbuffers::Table

Public Types

typedef TimedKeyPointPacket NativeTableType

Public Functions

inline const flatbuffers::Vector<flatbuffers::Offset<TimedKeyPointFlatbuffer>> *elements() const
inline bool Verify(flatbuffers::Verifier &verifier) const
inline TimedKeyPointPacket *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(TimedKeyPointPacket *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(TimedKeyPointPacket *_o, const TimedKeyPointPacketFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<TimedKeyPointPacketFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const TimedKeyPointPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)

Public Static Attributes

static constexpr const char *identifier = "TKPS"
struct TimeElementExtractor

Public Functions

inline constexpr TimeElementExtractor() noexcept
inline constexpr TimeElementExtractor(const int64_t startTimestamp_, const int64_t endTimestamp_) noexcept
~TimeElementExtractor() = default
TimeElementExtractor(const TimeElementExtractor &t) = default
TimeElementExtractor &operator=(const TimeElementExtractor &rhs) = default
TimeElementExtractor(TimeElementExtractor &&t) = default
TimeElementExtractor &operator=(TimeElementExtractor &&rhs) = default
inline constexpr bool operator==(const TimeElementExtractor &rhs) const noexcept
inline constexpr bool operator!=(const TimeElementExtractor &rhs) const noexcept

Public Members

int64_t startTimestamp
int64_t endTimestamp
int64_t numElements
template<class EventStoreType, typename ScalarType = int64_t>
class TimeSurfaceBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/core.hpp>

TimeSurface class that builds the surface of the occurrences of the last timestamps.

Subclassed by dv::SpeedInvariantTimeSurfaceBase< EventStoreType, patchDiameter, ScalarType >

Public Types

using Scalar = ScalarType

Public Functions

TimeSurfaceBase() = default

Dummy constructor. Constructs a new, empty TimeSurface without any data allocated to it.

inline explicit TimeSurfaceBase(const uint32_t rows, const uint32_t cols)

Creates a new TimeSurface with the given size. The Mat is zero initialized

Parameters:
  • rows – The number of rows of the TimeSurface

  • cols – The number of cols of the TimeSurface

inline explicit TimeSurfaceBase(const cv::Size &size)

Creates a new TimeSurface of the given size. The Mat is zero initialized.

Parameters:

size – The opencv size to be used to initialize

TimeSurfaceBase(const TimeSurfaceBase &other) = default

Copy constructor, constructs a new time surface with shared ownership of the data.

Parameters:

other – The time surface to be copied. The data is not copied; shared ownership of it is taken instead.

virtual ~TimeSurfaceBase() = default

Destructor

inline virtual TimeSurfaceBase &operator<<(const EventStoreType &store)

Inserts the event store into the time surface.

Parameters:

store – The event store to be added

Returns:

A reference to this TimeSurfaceBase.

inline virtual TimeSurfaceBase &operator<<(const typename EventStoreType::iterator::value_type &event)

Inserts the event into the time surface.

Parameters:

event – The event to be added

Returns:

A reference to this TimeSurfaceBase.

inline dv::Frame &operator>>(dv::Frame &mat) const

Generates a frame from the data contained in the event store

Parameters:

mat – The storage where the frame should be generated

Returns:

A reference to the generated frame.

inline virtual void accept(const EventStoreType &store)

Inserts the event store into the time surface.

Parameters:

store – The event store to be added

inline virtual void accept(const typename EventStoreType::iterator::value_type &event)

Inserts the event into the time surface.

Parameters:

event – The event to be added

inline const ScalarType &at(const int16_t y, const int16_t x) const

Returns a const reference to the element at the given coordinates. The element can only be read from

Parameters:
  • y – The y coordinate of the element to be accessed.

  • x – The x coordinate of the element to be accessed.

Returns:

A const reference to the element at the requested coordinates.

inline ScalarType &at(const int16_t y, const int16_t x)

Returns a reference to the element at the given coordinates. The element can both be read from as well as written to.

Parameters:
  • y – The y coordinate of the element to be accessed.

  • x – The x coordinate of the element to be accessed.

Returns:

A reference to the element at the requested coordinates.

inline const ScalarType &operator()(const int16_t y, const int16_t x) const noexcept

Returns a const reference to the element at the given coordinates. The element can only be read from

Parameters:
  • y – The y coordinate of the element to be accessed.

  • x – The x coordinate of the element to be accessed.

Returns:

A const reference to the element at the requested coordinates.

inline ScalarType &operator()(const int16_t y, const int16_t x) noexcept

Returns a reference to the element at the given coordinates. The element can both be read from as well as written to.

Parameters:
  • y – The y coordinate of the element to be accessed.

  • x – The x coordinate of the element to be accessed.

Returns:

A reference to the element at the requested coordinates.

inline auto block(const int16_t topRow, const int16_t leftCol, const int16_t height, const int16_t width) const

Returns a block of the time surface

Parameters:
  • topRow – the row coordinate at the top of the block

  • leftCol – the column coordinate at the left of the block

  • height – the height of the block

  • width – the width of the block

Returns:

the block

inline auto block(const int16_t topRow, const int16_t leftCol, const int16_t height, const int16_t width)

Returns a block of the time surface

Parameters:
  • topRow – the row coordinate at the top of the block

  • leftCol – the column coordinate at the left of the block

  • height – the height of the block

  • width – the width of the block

Returns:

the block

inline dv::Frame generateFrame() const

Generates a frame from the data contained in the event store

Returns:

The generated frame.

template<class T = uint8_t>
inline std::pair<cv::Mat, int64_t> getOCVMat() const

Creates a new OpenCV matrix of the given type and copies the time data into this OpenCV matrix. This version only subtracts an offset from the values so that they fit into the value range of the requested frame type. Therefore this method preserves the units of the timestamps that are contained in the time surface.

The data in the time surface is of signed 64bit integer type. There is no OpenCV type that can hold the full range of these values. Therefore, the returned data is a pair of an OpenCV Mat, of a type that can be chosen by the user, and an offset of signed 64bit integer, which contains the offset that can be added to each pixel value so that their values are in units of microseconds.

Template Parameters:

T – The type of the OpenCV Mat to be generated.

Returns:

An OpenCV Mat of the requested type, as well as an offset which can be added to the matrix in order for the data to be in microseconds.

template<typename T = uint8_t>
inline cv::Mat getOCVMatScaled(const std::optional<int64_t> lookBackOverride = std::nullopt) const

Creates a new OpenCV matrix of the given type and copies the time data into this OpenCV matrix. This version scales the values so that they fit into the value range of the requested frame type. Therefore the units of the timestamps are not preserved.

The data in the time surface is of signed 64bit integer type. There is no OpenCV type that can hold the full range of these values, which is why the values are scaled into the value range of the requested type.

Template Parameters:

T – The type of the OpenCV Mat to be generated.

Parameters:

lookBackOverride – override the amount of time to look back into the past. Defaults to the complete range contained in the time surface. The unit of the parameter is the unit of time contained in the TimeSurface.

Returns:

An OpenCV Mat of the requested type, with the time values scaled to fit its value range.
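
A sketch contrasting the two conversion methods; dv::TimeSurface is assumed to be the dv::EventStore instantiation of this template and the resolution is a placeholder.

#include <dv-processing/core/core.hpp>

// Minimal sketch: accumulate events and convert the surface for visualization.
void convertTimeSurface(const dv::EventStore &events) {
    dv::TimeSurface surface(cv::Size(640, 480));
    surface << events;

    // Unit-preserving conversion: pixel value + offset restores the original microsecond timestamp.
    const auto [mat, offset] = surface.getOCVMat<uint8_t>();

    // Display-friendly conversion: values are scaled into the 8-bit range, units are lost.
    const cv::Mat scaled = surface.getOCVMatScaled<uint8_t>();
}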

inline void reset()

Sets all values in the time surface to zero

template<typename T>
inline TimeSurfaceBase operator+(const T &s) const

Adds a constant to the time surface. Values are bounds checked to 0. If the new time would become negative, it is set to 0.

Template Parameters:

T – The type of the constant. Accepts any numeric type.

Parameters:

s – The constant to be added

Returns:

A new TimeSurfaceBase with the changed times

template<typename T>
inline TimeSurfaceBase &operator+=(const T &s)

Adds a constant to the TimeSurface. Values are bounds checked to 0. If the new time would become negative, it is set to 0.

Template Parameters:

T – The type of the constant. Accepts any numeric type.

Parameters:

s – The constant to be added

Returns:

A reference to the TimeSurfaceBase

template<typename T>
inline TimeSurfaceBase operator-(const T &s) const

Subtracts a constant from the TimeSurface. Values are bounds checked to 0. If the new time would become negative, it is set to 0.

Template Parameters:

T – The type of the constant. Accepts any numeric type.

Parameters:

s – The constant to be subtracted

Returns:

A reference to the TimeSurfaceBase

template<typename T>
inline TimeSurfaceBase &operator-=(const T &s)

Subtracts a constant from the TimeSurface. Values are bounds checked to 0. If the new time would become negative, it is set to 0.

Template Parameters:

T – The type of the constant. Accepts any numeric type.

Parameters:

s – The constant to be subtracted

Returns:

A reference to the TimeSurfaceBase

template<typename T>
inline TimeSurfaceBase &operator=(const T &s)

Assigns a constant to the TimeSurface. Values are bounds checked to 0. If the new time would become negative, it is set to 0.

Template Parameters:

T – The type of the constant. Accepts any numeric type.

Parameters:

s – The constant to be assigned

Returns:

A reference to the TimeSurfaceBase
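
A short sketch of the arithmetic operators above, shifting all stored times by a constant. The header path and the resolution constructor are assumptions:

#include <dv-processing/core/frame.hpp> // header path is an assumption
#include <opencv2/core.hpp>

int main() {
    dv::TimeSurface surface(cv::Size(346, 260)); // resolution constructor assumed

    // Shift every stored time forward by 10 ms (10'000 us).
    surface += 10'000;

    // Shift back by more than was added; values that would become negative are clamped to 0.
    surface -= 20'000;

    // Discard all accumulated times.
    surface.reset();
    return 0;
}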

inline cv::Size size() const noexcept

The size of the TimeSurface.

Returns:

Returns the size of this time matrix as an OpenCV size.

inline int16_t rows() const noexcept

Returns the number of rows of the TimeSurface

Returns:

the number of rows

inline int16_t cols() const noexcept

Returns the number of columns of the TimeSurface

Returns:

the number of columns

inline bool isEmpty() const noexcept

Returns true if the TimeSurface has zero size. In this case, it was not allocated with a size.

Returns:

true if the TimeSurface does not have a size > 0

Protected Functions

inline void addImpl(const ScalarType a, TimeSurfaceBase &target) const

Protected Attributes

Eigen::Matrix<ScalarType, Eigen::Dynamic, Eigen::Dynamic> mData
struct TimeWindow

Public Functions

inline TimeWindow(const int64_t timestamp, const dv::Duration duration)
inline TimeWindow(const int64_t startTime, const int64_t endTime)
inline dv::Duration duration() const

Public Members

int64_t startTime
int64_t endTime
class TrackerBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/tracker_base.hpp>

A base class for implementing feature trackers that track sets of features against streams of various inputs. This class intentionally does not define an input type, leaving it to be defined by the specific implementation.

Subclassed by dv::features::ImageFeatureLKTracker, dv::features::MeanShiftTracker

Public Types

typedef std::shared_ptr<TrackerBase> SharedPtr
typedef std::unique_ptr<TrackerBase> UniquePtr

Public Functions

inline void setMaxTracks(size_t _maxTracks)

Set the maximum number of tracks.

Parameters:

_maxTracks – Maximum number of tracks

inline size_t getMaxTracks() const

Get the maximum number of tracks.

Returns:

Maximum number of tracks

inline const Result::SharedPtr &getLastFrameResults() const

Retrieve cached last frame detection results.

Returns:

Detection result from the last processed frame.

inline Result::ConstPtr runTracking()

Performs the tracking and caches the results.

Returns:

Tracking result.

virtual ~TrackerBase() = default
inline virtual void removeTracks(const std::vector<int> &trackIds)

Remove tracks from cached results so they won't be tracked anymore. Track ids are the class_id values of the keypoint structure.

Parameters:

trackIds – Track class_id values to be removed from cached tracker results.

Protected Functions

virtual Result::SharedPtr track() = 0

Virtual function that is called after all inputs were set. This function should perform tracking against lastFrameResults.

Returns:

Tracking result.

Protected Attributes

size_t maxTracks = 200

Maximum number of tracks.

Result::SharedPtr lastFrameResults

Cached results of last tracker execution.
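
For orientation only, a hedged sketch of driving one of the concrete subclasses through this interface. The ImageFeatureLKTracker::RegularTracker factory, its accept() input, the dv::Frame constructor and the header path are assumptions based on typical dv-processing usage and are not documented in this entry:

#include <dv-processing/features/image_feature_lk_tracker.hpp> // path assumed
#include <opencv2/core.hpp>

int main() {
    const cv::Size resolution(640, 480);

    // Hypothetical concrete tracker instance; factory name assumed.
    auto tracker = dv::features::ImageFeatureLKTracker::RegularTracker(resolution);
    tracker->setMaxTracks(100);

    // Feed a (here synthetic, black) frame and run one tracking step.
    tracker->accept(dv::Frame(0, cv::Mat::zeros(resolution, CV_8UC1)));
    const auto result = tracker->runTracking();

    // Cached results remain available until the next run.
    const auto &cached = tracker->getLastFrameResults();
    return 0;
}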

template<std::floating_point Scalar>
class Transformation
#include </builds/inivation/dv/dv-processing/include/dv-processing/kinematics/transformation.hpp>

Basic transformation wrapper containing a homogeneous 3D transformation and a timestamp.

Template Parameters:

Scalar – Customizable storage type - float or double.

Public Functions

inline EIGEN_MAKE_ALIGNED_OPERATOR_NEW Transformation(int64_t timestamp, const Eigen::Matrix<Scalar, 4, 4> &T)

Construct the transformation from a timestamp and 4x4 transformation matrix

Parameters:
  • timestamp – Unix timestamp in microsecond format

  • T – Homogeneous 3D transformation matrix

inline Transformation()

Construct an identity transformation.

inline Transformation(int64_t timestamp, const Eigen::Matrix<Scalar, 3, 1> &translation, const Eigen::Quaternion<Scalar> &rotation)

Construct the transformation from timestamp, 3D translation vector and quaternion describing the rotation.

Parameters:
  • timestamp – Unix timestamp in microsecond format

  • translation – 3D translation vector

  • rotation – Quaternion describing the rotation

inline Transformation(int64_t timestamp, const Eigen::Matrix<Scalar, 3, 1> &translation, const Eigen::Matrix<Scalar, 3, 3> &rotationMatrix)

Construct the transformation from a timestamp, a 3D translation vector and a rotation matrix describing the rotation.

Parameters:
  • timestamp – Unix timestamp in microsecond format

  • translation – 3D translation vector

  • rotationMatrix – Rotation matrix describing the rotation

inline Transformation(int64_t timestamp, const cv::Mat &translation, const cv::Mat &rotation)

Construct the transformation from a timestamp, a 3D translation vector and a 3x3 rotation matrix describing the rotation.

Parameters:
  • timestamp – Unix timestamp in microsecond format

  • translation – 3D translation vector

  • rotation – 3x3 rotation matrix

inline int64_t getTimestamp() const

Get timestamp.

Returns:

Unix timestamp of the transformation in microseconds.

inline const Eigen::Matrix<Scalar, 4, 4> &getTransform() const

Get the transformation matrix.

Returns:

Transformation matrix in 4x4 format

inline Eigen::Matrix<Scalar, 3, 3> getRotationMatrix() const

Retrieve a copy of 3x3 rotation matrix.

Returns:

3x3 rotation matrix

inline Eigen::Quaternion<Scalar> getQuaternion() const

Retrieve rotation expressed as a quaternion.

Returns:

Quaternion containing rotation.

template<concepts::Coordinate3DConstructible Output = Eigen::Matrix<Scalar, 3, 1>>
inline Output getTranslation() const

Retrieve translation as 3D vector.

Returns:

Vector containing translation.

template<concepts::Coordinate3DConstructible Output = Eigen::Matrix<Scalar, 3, 1>, concepts::Coordinate3D Input>
inline Output transformPoint(const Input &point) const

Transform a point using this transformation.

Parameters:

point – Point to be transformed

Returns:

Transformed point

template<concepts::Coordinate3DConstructible Output = Eigen::Matrix<Scalar, 3, 1>, concepts::Coordinate3D Input>
inline Output rotatePoint(const Input &point) const

Apply rotation only transformation on the given point.

Parameters:

point – Point to be transformed

Returns:

Transformed point

inline Transformation<Scalar> inverse() const

Calculate the inverse homogeneous transformation of this transform.

Returns:

Inverse transformation with the current timestamp.

inline Transformation<Scalar> delta(const Transformation<Scalar> &target) const

Find the transformation from current to target. (T_target_current s.t. p_target = T_target_current*p_current).

Parameters:

target – Target transformation.

Returns:

Transformation from this to target.
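
A small self-contained sketch of the interface above: a pose is built from a translation and a quaternion, a point is transformed, and the inverse maps it back. Only standard Eigen types are used; everything else is documented in this entry:

#include <dv-processing/kinematics/transformation.hpp>
#include <numbers>

int main() {
    using Transform = dv::kinematics::Transformation<float>;

    // Pose at t = 1'000'000 us: 1 m along x, rotated 90 degrees around z.
    const Eigen::Vector3f translation(1.f, 0.f, 0.f);
    const Eigen::Quaternionf rotation(
        Eigen::AngleAxisf(std::numbers::pi_v<float> / 2.f, Eigen::Vector3f::UnitZ()));
    const Transform pose(1'000'000, translation, rotation);

    // (0, 1, 0) rotates onto (-1, 0, 0) and is then translated to roughly (0, 0, 0).
    const Eigen::Vector3f transformed = pose.transformPoint(Eigen::Vector3f(0.f, 1.f, 0.f));

    // The inverse transformation maps the point back to (0, 1, 0).
    const Eigen::Vector3f back = pose.inverse().transformPoint(transformed);

    // Relative transformation from `pose` to a later pose with the same rotation.
    const Transform later(2'000'000, Eigen::Vector3f(2.f, 0.f, 0.f), rotation);
    const Transform relative = pose.delta(later);
    return 0;
}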

inline bool operator==(const Transformation<Scalar> &rhs) const

Public Static Functions

static inline Transformation fromNonHomogenous(int64_t timestamp, const Eigen::Matrix<Scalar, 3, 4> &T)

Construct the transformation from a timestamp and a 3x4 non-homogeneous transformation matrix.

Parameters:
  • timestamp – Unix timestamp in microsecond format

  • T – 3x4 3D transformation matrix

Private Members

int64_t mTimestamp

Timestamp of the transformation, Unix timestamp in microseconds.

Eigen::Matrix<Scalar, 4, 4> mT

The transformation itself, stored in 4x4 format:

R|T

0|1

class TranslationLossFunctor : public dv::optimization::OptimizationFunctor<float>
#include </builds/inivation/dv/dv-processing/include/dv-processing/optimization/contrast_maximization_translation_and_depth.hpp>

Given a chunk of events, the idea of contrast maximization is to warp events in space and time given a predefined motion model. Contrast maximization aims at finding the optimal parameters of the given motion model. The idea is that if the motion is perfectly estimated, all events corresponding to the same point in the scene will be warped to the same image plane location at a given point in time. If this happens, the reconstructed event image will be sharp, having high contrast. This high contrast is measured as variance in the image. For this reason, contrast maximization searches for the best motion parameters which maximize the contrast of the event image reconstructed after warping events in space to a specific point in time. In order to warp events in space and time we use the “dv::kinematics::MotionCompensator” class. This contrast maximization class assumes a pure camera translation motion model. Given a set of events in a time range (init_time, end_time), assuming a constant translational speed between init_time and end_time, the translation (x, y, z) and the scene depth are optimized to maximize the contrast of the event image. Since the speed is assumed to be constant between init_time and end_time, the camera position at time t_k is computed as speed*dt, where dt = t_k - init_time. The scene depth is included in the optimization since it is strongly correlated with the camera translation. Scene depth is assumed to be constant between init_time and end_time.

Public Functions

inline TranslationLossFunctor(dv::camera::CameraGeometry::SharedPtr &camera, const dv::EventStore &events, float contribution, int inputDim, int numMeasurements)

This contrast maximization class assumes pure camera translation motion model. Given a set of events in a time range (init_time, end_time), assuming a constant translational speed between init_time and end_time, translation (x, y, z) and scene depth are optimized to maximize contrast of event image.

Parameters:
  • camera – Camera geometry used to create motion compensator

  • events – Events used to compute motion compensated image

  • contribution – Contribution value of each event to the total pixel intensity

  • inputDim – Number of parameters to optimize

  • numMeasurements – Number of function evaluation performed to compute the gradient

inline virtual int operator()(const Eigen::VectorXf &translationAndDepth, Eigen::VectorXf &stdInverse) const

Implementation of the objective function: optimize camera translation (x, y, z) and scene depth. The current cost is stored in stdInverse. Notice that since we want to maximize the contrast but the optimizer minimizes the cost function, we use 1/contrast as the cost.
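
For orientation only, a hedged fragment showing a single cost evaluation. It assumes that `camera` (a dv::camera::CameraGeometry::SharedPtr) and `events` (a dv::EventStore covering the optimization window) already exist, and that the class lives in the dv::optimization namespace; none of this is stated in this entry:

// Candidate motion parameters: translation speed (x, y, z) and scene depth.
Eigen::VectorXf parameters(4);
parameters << 0.1f, 0.f, 0.f, 2.f;

// Construct the functor: event contribution of 0.15, 4 input parameters and one
// measurement per evaluation (values chosen purely for illustration).
dv::optimization::TranslationLossFunctor loss(camera, events, 0.15f, 4, 1);

// The functor writes 1/contrast into `cost`; lower values mean a sharper event image.
Eigen::VectorXf cost(1);
loss(parameters, cost);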

Private Members

dv::camera::CameraGeometry::SharedPtr mCamera

Camera geometry data. This information is used to create motionCompensator and compensate events.

const dv::EventStore mEvents

Raw events compensated using translation along x, y, z and current scene depth.

const float mContribution

Event contribution to the total pixel intensity. This parameter is very important since it strongly influences the contrast value. It needs to be tuned based on the scene and the length of the event chunk.

struct Trigger : public flatbuffers::NativeTable

Public Types

typedef TriggerFlatbuffer TableType

Public Functions

inline Trigger()
inline Trigger(int64_t _timestamp, TriggerType _type)

Public Members

int64_t timestamp
TriggerType type

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()
struct TriggerBuilder

Public Functions

inline void add_timestamp(int64_t timestamp)
inline void add_type(TriggerType type)
inline explicit TriggerBuilder(flatbuffers::FlatBufferBuilder &_fbb)
TriggerBuilder &operator=(const TriggerBuilder&)
inline flatbuffers::Offset<TriggerFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct TriggerFlatbuffer : private flatbuffers::Table

Public Types

typedef Trigger NativeTableType

Public Functions

inline int64_t timestamp() const

Timestamp (µs).

inline TriggerType type() const

Type of trigger that occurred.

inline bool Verify(flatbuffers::Verifier &verifier) const
inline Trigger *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(Trigger *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(Trigger *_o, const TriggerFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<TriggerFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const Trigger *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
struct TriggerPacket : public flatbuffers::NativeTable

Public Types

typedef TriggerPacketFlatbuffer TableType

Public Functions

inline TriggerPacket()
inline TriggerPacket(const std::vector<Trigger> &_elements)

Public Members

std::vector<Trigger> elements

Public Static Functions

static inline constexpr const char *GetFullyQualifiedName()

Friends

inline friend std::ostream &operator<<(std::ostream &os, const TriggerPacket &packet)
struct TriggerPacketBuilder

Public Functions

inline void add_elements(flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<TriggerFlatbuffer>>> elements)
inline explicit TriggerPacketBuilder(flatbuffers::FlatBufferBuilder &_fbb)
TriggerPacketBuilder &operator=(const TriggerPacketBuilder&)
inline flatbuffers::Offset<TriggerPacketFlatbuffer> Finish()

Public Members

flatbuffers::FlatBufferBuilder &fbb_
flatbuffers::uoffset_t start_
struct TriggerPacketFlatbuffer : private flatbuffers::Table

Public Types

typedef TriggerPacket NativeTableType

Public Functions

inline const flatbuffers::Vector<flatbuffers::Offset<TriggerFlatbuffer>> *elements() const
inline bool Verify(flatbuffers::Verifier &verifier) const
inline TriggerPacket *UnPack(const flatbuffers::resolver_function_t *_resolver = nullptr) const
inline void UnPackTo(TriggerPacket *_o, const flatbuffers::resolver_function_t *_resolver = nullptr) const

Public Static Functions

static inline const flatbuffers::TypeTable *MiniReflectTypeTable()
static inline constexpr const char *GetFullyQualifiedName()
static inline void UnPackToFrom(TriggerPacket *_o, const TriggerPacketFlatbuffer *_fb, const flatbuffers::resolver_function_t *_resolver = nullptr)
static inline flatbuffers::Offset<TriggerPacketFlatbuffer> Pack(flatbuffers::FlatBufferBuilder &_fbb, const TriggerPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)

Public Static Attributes

static constexpr const char *identifier = "TRIG"
struct Type

Public Functions

inline constexpr Type() noexcept
inline constexpr Type(const std::string_view identifier_, const size_t sizeOfType_, PackFuncPtr pack_, UnpackFuncPtr unpack_, ConstructPtr construct_, DestructPtr destruct_, TimeElementExtractorPtr timeElementExtractor_, TimeRangeExtractorPtr timeRangeExtractor_)
~Type() = default
Type(const Type &t) = default
Type &operator=(const Type &rhs) = default
Type(Type &&t) = default
Type &operator=(Type &&rhs) = default
inline constexpr bool operator==(const Type &rhs) const noexcept
inline constexpr bool operator!=(const Type &rhs) const noexcept

Public Members

uint32_t id
size_t sizeOfType
PackFuncPtr pack
UnpackFuncPtr unpack
ConstructPtr construct
DestructPtr destruct
TimeElementExtractorPtr timeElementExtractor
TimeRangeExtractorPtr timeRangeExtractor
struct TypedObject

Public Functions

inline constexpr TypedObject(const Type &type_)
inline ~TypedObject() noexcept
TypedObject(const TypedObject &t) = delete
TypedObject &operator=(const TypedObject &rhs) = delete
inline TypedObject(TypedObject &&t)
inline TypedObject &operator=(TypedObject &&rhs)
inline constexpr bool operator==(const TypedObject &rhs) const noexcept
inline constexpr bool operator!=(const TypedObject &rhs) const noexcept
template<class TargetType>
inline std::shared_ptr<TargetType> moveToSharedPtr()

Cast and move the pointer to the data into a shared pointer. The underlying data is not affected, but it invalidates this instance and passes the ownership of the data to the shared pointer - it will take care of memory management from the point of this method call.

Template Parameters:

TargetType – Target type to cast the typed object into

Returns:

A shared pointer owning the underlying data, cast to the target type.

Public Members

void *obj
Type type
struct TypeError

Public Types

using Info = std::string

Public Static Functions

static inline std::string format(const Info &info)
class UNIXSocket : public dv::io::network::SocketBase
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/network/unix_socket.hpp>

Minimal wrapper of a UNIX socket. It follows the RAII principle: the socket will be closed and released when this object is destroyed.

Public Types

using socketType = asioUNIX::socket

Public Functions

inline explicit UNIXSocket(asioUNIX::socket &&s)

Initialize a socket wrapper by taking ownership of a connected socket.

Parameters:

s – Connected UNIX socket to take ownership of.

inline ~UNIXSocket() override
inline virtual bool isOpen() const override

Check whether socket is open and active.

Returns:

True if socket is open, false otherwise.

inline virtual void close() override

Close underlying UNIX socket cleanly.

inline virtual void write(const asio::const_buffer &buf, CompletionHandler &&wrHandler) override

Write handler needs following signature: void (const boost::system::error_code &, size_t)

inline virtual void read(const asio::mutable_buffer &buf, CompletionHandler &&rdHandler) override

Read handler needs following signature: void (const boost::system::error_code &, size_t)

inline virtual void syncWrite(const asio::const_buffer &buf) override

Blocking write data to the socket.

Parameters:

buf – Data to write.

inline virtual void syncRead(const asio::mutable_buffer &buf) override

Blocking read from socket.

Parameters:

buf – Buffer for data to be read into.

Private Members

asioUNIX::socket socket
bool socketClosed = false
class UpdateIntervalOrFeatureCountRedetection : public dv::features::RedetectionStrategy
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/redetection_strategy.hpp>

Redetection strategy based on interval from last detection or minimum number of tracks. This class combines redetection logic from UpdateIntervalRedetection and FeatureCountRedetection.

Public Functions

inline explicit UpdateIntervalOrFeatureCountRedetection(const dv::Duration updateInterval, const float minimumProportionOfTracks)

Redetection strategy that triggers if a specific amount of time has passed since the last detection or if the number of tracks falls below the given minimum proportion.

inline virtual bool decideRedetection(const TrackerBase &tracker) override

Check whether to perform redetection.

Private Members

UpdateIntervalRedetection updateIntervalRedetection
FeatureCountRedetection featureCountRedetection
class UpdateIntervalRedetection : public dv::features::RedetectionStrategy
#include </builds/inivation/dv/dv-processing/include/dv-processing/features/redetection_strategy.hpp>

Redetection strategy based on interval from last detection.

Public Functions

inline explicit UpdateIntervalRedetection(const dv::Duration updateInterval)

Redetection strategy that triggers if a specific amount of time has passed since the last detection.

inline virtual bool decideRedetection(const TrackerBase &tracker) override

Check whether to perform redetection.

Protected Attributes

const int64_t mUpdateIntervalTime
int64_t mLastDetectionTime = -std::numeric_limits<int64_t>::infinity()
class USBDevice

Subclassed by dv::io::camera::DAVIS, dv::io::camera::DVS128, dv::io::camera::DVXplorer, dv::io::camera::USBDeviceNextGen

Public Types

enum class LogLevel : uint8_t

Values:

enumerator LVL_NONE
enumerator LVL_ERROR
enumerator LVL_WARNING
enumerator LVL_INFO
enumerator LVL_DEBUG
using loggerCallbackType = std::function<void(LogLevel level, std::string_view deviceName, std::string_view message)>

Public Functions

inline DeviceDescriptor getDeviceDescriptor() const

Get device descriptor. Describes this device uniquely.

Returns:

this device’s descriptor structure

inline uint8_t getFirmwareVersion() const

Get camera firmware version.

Returns:

camera firmware version

inline std::string getSerialNumber() const

Get camera serial number.

Returns:

camera serial number

inline std::string getDeviceName() const

Get camera name.

Returns:

camera name

inline libusb_speed getConnectionSpeed() const

Get USB connection speed.

Returns:

USB connection speed

inline void setLogLevel(const LogLevel level)

Set device’s log-level.

Parameters:

level – device log-level

inline LogLevel getLogLevel() const

Get device’s log-level.

Returns:

device log-level

inline void setTransfersNumber(const uint32_t transfersNumber)

Set number of USB buffers used for data transfer.

Parameters:

transfersNumber – number of USB buffers used for data transfer.

inline uint32_t getTransfersNumber() const

Get number of USB buffers used for data transfer.

Returns:

number of USB buffers used for data transfer.

inline void setTransfersSize(const uint32_t transfersSize)

Set size in bytes of USB buffers used for data transfer.

Parameters:

transfersSize – size in bytes of USB buffers used for data transfer.

inline uint32_t getTransfersSize() const

Get size in bytes of USB buffers used for data transfer.

Returns:

size in bytes of USB buffers used for data transfer.

Public Static Functions

static inline void setLibUsbLogger(loggerCallbackType logger)

Set log callback for libusb messages. If no per-device logger is set, this will be used as a fallback. Any such logger must be thread-safe, as the messages can come from multiple, different threads!

Parameters:

logger – logging function, takes messages and processes them.

Protected Types

using compatibleCameraCallbackType = std::function<std::optional<CameraModel>(uint16_t vid, uint16_t pid, USBDeviceType deviceType)>
using controlOutCallbackType = std::function<void(libusb_transfer_status status)>
using controlInCallbackType = std::function<void(libusb_transfer_status status, std::span<const uint8_t> buffer)>
using usbTransferCallbackType = std::function<void(std::span<const uint8_t> data)>
using usbShutdownCallbackType = std::function<void()>

Protected Functions

inline USBDevice(const DeviceDescriptor &deviceToOpen, const LogLevel deviceLogLevel, loggerCallbackType deviceLogger, const std::string_view shortName, const std::string_view longName, const compatibleCameraCallbackType &expectedDevice, const int32_t requiredFirmwareVersion = -1)

Construct and open a USB device. Match is based on VID, PID, USB Bus Address. If present, serial number is also verified to match.

Parameters:
  • deviceToOpen – description of which device to open exactly. VID, PID, USB Bus Address must be defined.

  • deviceLogLevel – initial per-device log-level for info logging, can be changed later with setLogLevel().

  • deviceLogger – per-device logging callback. Any such logger must be thread-safe.

  • shortName – brief name for related device threads. Should be at most 6 characters for maximum compatibility.

  • longName – long name for full device name, used together with serial number in logging.

  • expectedDevice – decide if given descriptor is valid to select a compatible camera model on opening.

  • requiredFirmwareVersion – required firmware version for successful operation of this device (-1 disables the check, else the exact version given must correspond to the device one).

inline ~USBDevice()
inline void cleanupBuffers()
inline void usbControlTransferOutAsync(const uint8_t bRequest, const uint16_t wValue, const uint16_t wIndex, const std::span<const uint8_t> data, controlOutCallbackType controlOutCallback) const
inline void usbControlTransferInAsync(const uint8_t bRequest, const uint16_t wValue, const uint16_t wIndex, const size_t dataSize, controlInCallbackType controlInCallback) const
inline void usbControlTransferOut(const uint8_t bRequest, const uint16_t wValue, const uint16_t wIndex, const std::span<const uint8_t> data) const
inline void usbControlTransferIn(const uint8_t bRequest, const uint16_t wValue, const uint16_t wIndex, const std::span<uint8_t> data) const
inline void spiConfigSendMultiple(const std::span<SPIConfigurationParameters> configs) const
inline void spiConfigSendMultipleAsync(const std::span<SPIConfigurationParameters> configs, controlOutCallbackType callback) const
inline void spiConfigSend(const uint8_t moduleAddr, const uint16_t paramAddr, const uint32_t param) const
inline void spiConfigSendAsync(const uint8_t moduleAddr, const uint16_t paramAddr, const uint32_t param, controlOutCallbackType callback) const
inline uint32_t spiConfigReceive(const uint8_t moduleAddr, const uint16_t paramAddr) const
inline void spiConfigReceiveAsync(const uint8_t moduleAddr, const uint16_t paramAddr, std::function<void(libusb_transfer_status status, uint32_t param)> callback) const
inline void log(const LogLevel level, const std::string_view message) const
inline void resetCameraName(const std::string_view cameraName)
inline void setDataCallback(usbTransferCallbackType callback)

Set data handling callback, receives the buffers filled with data from the BULK endpoint. Not thread-safe, call before usbThreadStart() only.

Parameters:

callback – data handling callback

inline void setShutdownCallback(usbShutdownCallbackType callback)

Set exceptional shutdown handling callback, called when the device goes away unexpectedly. Not thread-safe, call before usbThreadStart() only. Called from a context where locks are held, so it cannot call any of the USB data transfer control functions: usbThreadStart(), usbThreadStop(), usbDataTransfersStart(), usbDataTransfersStop(), setTransfersNumber(), getTransfersNumber(), setTransfersSize(), getTransfersSize(). You should only use this to set some asynchronous notification flag to be safe!

Parameters:

callback – exceptional shutdown handling callback

inline void setDebugCallback(usbTransferCallbackType callback)

Set debug handling callback, receives the buffers filled with debug data from the INTERRUPT endpoint. Not thread-safe, call before usbThreadStart() only.

Parameters:

callback – debug data handling callback

inline void setDataEndpoint(const uint8_t dataEndPoint)

Set the data endpoint address from which BULK data transfers are read. Not thread-safe, call before usbThreadStart() only.

Parameters:

dataEndPoint – new data endpoint address

inline uint8_t getDataEndpoint() const
inline bool isUSBThreadRunning() const
inline bool isUSBDataTransferRunning() const
inline void usbThreadStart()
inline void usbThreadStop()
inline void usbDataTransfersStart()
inline void usbDataTransfersStop()
inline void usbDebugTransfersStart()
inline void usbDebugTransfersStop()

Protected Static Functions

static inline std::vector<DeviceDescriptor> findCompatibleDevices(const compatibleCameraCallbackType &compatibleDevice)
static inline std::vector<DeviceDescriptor> filterCompatibleDevices(const std::vector<DeviceDescriptor> &list, const std::string_view filterBySerialNumber)

Protected Static Attributes

static constexpr uint8_t VENDOR_REQUEST_SPI_CONFIG = {0xBF}
static constexpr uint16_t VID_INIVATION = {0x152A}

Private Types

using LibusbManagedContext = std::unique_ptr<libusb_context, decltype(libusbContextDeleter)>
using LibusbManagedDeviceHandle = std::unique_ptr<libusb_device_handle, decltype(libusbDeviceHandleDeleter)>
using LibusbManagedDeviceList = std::unique_ptr<libusb_device*[], decltype(libusbDeviceListDeleter)>
using LibusbManagedTransfer = std::unique_ptr<libusb_transfer, decltype(libusbTransferDeleter)>

Private Functions

inline std::vector<LibusbManagedTransfer> usbAllocateTransfers(const size_t transfersNumber, const size_t transfersSize, const uint8_t endpoint, const libusb_transfer_type endpointType, const libusb_transfer_cb_fn callback)
inline void usbCancelTransfers(std::vector<LibusbManagedTransfer> &transfers)
inline void usbAllocateDataTransfersNLCK()
inline void usbCancelAndDeallocateDataTransfersNLCK()
inline void usbAllocateDebugTransfers()
inline void usbCancelAndDeallocateDebugTransfers()

Private Members

std::atomic<LogLevel> mUsbLogLevel = {LogLevel::LVL_WARNING}
loggerCallbackType mUsbDeviceLogger = {}
std::string mUsbDeviceName = {}
LibusbManagedContext mDeviceContext = {nullptr, libusbContextDeleter}
LibusbManagedDeviceHandle mDeviceHandle = {nullptr, libusbDeviceHandleDeleter}
DeviceDescriptor mDescriptor = {}
std::string mUsbThreadName = {}
std::thread mUsbThread = {}
std::atomic<bool> mUsbThreadRun = {false}
mutable std::mutex mUsbOpsLock = {}
uint8_t mDataEndPoint = {DEFAULT_DATA_ENDPOINT}
std::atomic<bool> mDataTransfersRun = {false}
mutable std::mutex mDataTransfersLock = {}
std::vector<LibusbManagedTransfer> mDataTransfersNLCK = {}
uint32_t mDataTransfersActiveNLCK = {0}
uint32_t mDataTransfersFailedNLCK = {0}
uint32_t mDataTransfersNumberNLCK = {32}
uint32_t mDataTransfersSizeNLCK = {8 * 1024}
usbTransferCallbackType mUsbDataCallback = {}
usbShutdownCallbackType mUsbShutdownCallback = {}
std::vector<LibusbManagedTransfer> mDebugTransfers = {}
std::atomic<uint32_t> mDebugTransfersActive = {0}
usbTransferCallbackType mUsbDebugCallback = {}

Private Static Functions

static inline void LIBUSB_CALL usbControlOutCallback (struct libusb_transfer *transfer)
static inline void LIBUSB_CALL usbControlInCallback (struct libusb_transfer *transfer)
static inline void LIBUSB_CALL libusbLogCallback (libusb_context *ctx, const enum libusb_log_level level, const char *str)
static inline std::string errorPrint(const std::string_view msg, const int code)
static inline std::string generateRepeatableSerialNumber(const DeviceDescriptor &desc)
static inline std::optional<std::string> fetchSerialNumber(libusb_device_handle *const handle, const uint8_t serialNumberIndex)
static inline void usbThreadFunc(USBDevice *state)
static inline void LIBUSB_CALL usbDataTransferCallback (struct libusb_transfer *transfer)
static inline void LIBUSB_CALL usbDebugTransferCallback (struct libusb_transfer *transfer)

Private Static Attributes

static constexpr size_t MAX_CONTROL_TRANSFER_SIZE = {4 * 1024}
static constexpr uint8_t MAX_SERIAL_NUMBER_LENGTH = {8}
static constexpr uint8_t DEFAULT_DATA_ENDPOINT = {0x82}
static constexpr uint8_t DEBUG_ENDPOINT = {0x81}
static constexpr uint32_t DEBUG_TRANSFERS_NUMBER = {8}
static constexpr uint32_t DEBUG_TRANSFERS_SIZE = {64}
static constexpr uint8_t VENDOR_REQUEST_LOG_LEVEL = {0xB1}
static constexpr uint8_t VENDOR_REQUEST_DATA_CLEANUP = {0xC6}
static constexpr uint8_t VENDOR_REQUEST_SPI_CONFIG_MULTIPLE = {0xC2}
static loggerCallbackType LIBUSB_LOGGER = {}
static constexpr auto libusbContextDeleter = [](libusb_context *ctx) {libusb_exit(ctx);}
static constexpr auto libusbDeviceHandleDeleter = [](libusb_device_handle *handle) {libusb_close(handle);}
static constexpr auto libusbDeviceListDeleter = [](libusb_device **list) {libusb_free_device_list(list, true);}
static constexpr auto libusbTransferDeleter = [](libusb_transfer *transfer) {libusb_free_transfer(transfer);}

Friends

inline friend std::ostream &operator<<(std::ostream &os, const LogLevel &var)
class USBDeviceNextGen : public dv::io::camera::USBDevice

Subclassed by dv::io::camera::DVXplorerM

Protected Functions

inline USBDeviceNextGen(const DeviceDescriptor &deviceToOpen, const LogLevel deviceLogLevel, loggerCallbackType deviceLogger, const std::string_view shortName, const std::string_view longName, const compatibleCameraCallbackType &expectedDevice, const int32_t requiredFirmwareVersion = -1)
void spiConfigSendMultiple(std::span<SPIConfigurationParameters>) const = delete
void spiConfigSendMultipleAsync(std::span<SPIConfigurationParameters>, controlOutCallbackType) const = delete
inline void spiConfigSend(const uint16_t moduleAddr, const uint16_t paramAddr, const uint64_t param) const
inline void spiConfigSendAsync(const uint16_t moduleAddr, const uint16_t paramAddr, const uint64_t param, controlOutCallbackType callback) const
inline uint64_t spiConfigReceive(const uint16_t moduleAddr, const uint16_t paramAddr) const
inline void spiConfigReceiveAsync(const uint16_t moduleAddr, const uint16_t paramAddr, std::function<void(libusb_transfer_status status, uint64_t param)> callback) const
struct VDACBias

On-chip voltage digital-to-analog converter configuration. See ‘https://docs.inivation.com/hardware/hardware-advanced-usage/biasing.html’ for more details.

Public Functions

constexpr VDACBias() = default
inline constexpr VDACBias(const uint8_t voltage, const uint8_t current = 7)

Public Members

uint8_t voltageValue = {0}

Voltage, between 0 and 63, as a fraction of 1/64th of VDD=3.3V.

uint8_t currentValue = {0}

Current, between 0 and 7, that drives the voltage.

struct WriteJob

Public Functions

inline WriteJob(const asio::const_buffer &buffer, SocketBase::CompletionHandler handler)

Public Members

asio::const_buffer mBuffer
SocketBase::CompletionHandler mHandler
class WriteOnlyFile : private dv::io::SimpleWriteOnlyFile

Public Functions

WriteOnlyFile() = delete
inline WriteOnlyFile(const std::filesystem::path &filePath, const std::string_view outputInfo, std::unique_ptr<dv::io::compression::CompressionSupport> compression, std::unique_ptr<dv::io::support::IOStatistics> stats = nullptr)
inline WriteOnlyFile(const std::filesystem::path &filePath, const std::string_view outputInfo, const CompressionType compression = CompressionType::NONE, std::unique_ptr<dv::io::support::IOStatistics> stats = nullptr)
inline ~WriteOnlyFile()
inline void write(const dv::types::TypedObject *const packet, const int32_t streamId)
inline void write(const void *ptr, const dv::types::Type &type, const int32_t streamId)

Private Functions

inline void pushVersion(const std::shared_ptr<const dv::io::support::IODataBuffer> version)
inline void pushHeader(const std::shared_ptr<const dv::io::support::IODataBuffer> header)
inline void pushPacket(const std::shared_ptr<const dv::io::support::IODataBuffer> packet)
inline void pushFileDataTable(const std::shared_ptr<const dv::io::support::IODataBuffer> fileDataTable)
inline void writeThread()
inline void stop()
inline void emptyWriteBuffer()
inline void writeVersion(const std::shared_ptr<const dv::io::support::IODataBuffer> packet)
inline void writeHeader(const std::shared_ptr<const dv::io::support::IODataBuffer> packet)
inline void writePacket(const std::shared_ptr<const dv::io::support::IODataBuffer> packet)
inline void writeFileDataTable(const std::shared_ptr<const dv::io::support::IODataBuffer> packet)

Private Members

std::string mOutputInfo
dv::io::Writer mWriter
std::mutex mMutex
std::queue<std::function<void(void)>> mWriteBuffer
std::atomic<bool> mStopRequested = {false}
std::thread mWriteThread
class WriteOrderedSocket
#include </builds/inivation/dv/dv-processing/include/dv-processing/io/network/write_ordered_socket.hpp>

Write ordered socket. Implemented because in asio simultaneous async_writes are not allowed.

Public Functions

inline explicit WriteOrderedSocket(std::unique_ptr<SocketBase> &&socket)
inline void write(const asio::const_buffer &buf, SocketBase::CompletionHandler &&wrHandler)

Add a buffer to be written out to the socket. This call adds the buffer to an ordered queue which chains the async_write calls to the socket, guaranteeing that no simultaneous calls happen.

Parameters:
  • buf – Buffers to be written into the socket.

  • wrHandler – Write handler that is called when buffer write is completed.

inline void close()

Close the underlying socket.

inline bool isOpen() const

Check whether the underlying socket is open.

Returns:

True if the underlying socket is open, false otherwise.

inline void read(const asio::mutable_buffer &buf, SocketBase::CompletionHandler &&rdHandler)

Read data from the socket. This only wraps the read call of the underlying socket.

Parameters:
  • buf

  • rdHandler

Private Members

std::deque<WriteJob> mWriteQueue

No locking for writeQueue because all changes are posted to io_service thread.

std::unique_ptr<dv::io::network::SocketBase> mSocket

Underlying socket.

class Writer

Public Types

using WriteHandler = dv::std_function_exact<void(const std::shared_ptr<const dv::io::support::IODataBuffer>)>

Public Functions

Writer() = delete
inline explicit Writer(std::unique_ptr<dv::io::compression::CompressionSupport> compression, std::unique_ptr<dv::io::support::IOStatistics> stats = nullptr, std::unique_ptr<dv::FileDataTable> dataTable = nullptr)
inline explicit Writer(const dv::CompressionType compression, std::unique_ptr<dv::io::support::IOStatistics> stats = nullptr, std::unique_ptr<dv::FileDataTable> dataTable = nullptr)
~Writer() = default
Writer(const Writer &other) = delete
Writer &operator=(const Writer &other) = delete
Writer(Writer &&other) noexcept = default
Writer &operator=(Writer &&other) noexcept = default
inline auto getCompressionType()
inline size_t writeAedatVersion(const WriteHandler &writeHandler)
inline size_t writeHeader(const int64_t dataTablePosition, const std::string_view infoNode, const WriteHandler &writeHandler)
inline size_t writePacket(const dv::types::TypedObject *const packet, const int32_t streamId, const WriteHandler &writeHandler)
inline size_t writePacket(const void *ptr, const dv::types::Type &type, const int32_t streamId, const WriteHandler &writeHandler)
inline int64_t writeFileDataTable(const WriteHandler &writeHandler)

Public Static Functions

static inline std::shared_ptr<dv::io::support::IODataBuffer> encodeAedat4Version()
static inline std::shared_ptr<dv::io::support::IODataBuffer> encodeFileHeader(const int64_t dataTablePosition, const std::string_view infoNode, const dv::CompressionType compressionType)
static inline void encodePacketHeader(const std::shared_ptr<dv::io::support::IODataBuffer> packet, const int32_t streamId)
static inline std::shared_ptr<dv::io::support::IODataBuffer> encodePacketBody(const void *ptr, const dv::types::Type &type)
static inline std::shared_ptr<dv::io::support::IODataBuffer> encodeFileDataTable(const dv::FileDataTable &table)

Private Functions

inline void writeToDestination(const std::shared_ptr<const dv::io::support::IODataBuffer> data, const WriteHandler &writeHandler)
inline void compressData(dv::io::support::IODataBuffer &packet)
inline void updateFileDataTable(const uint64_t byteOffset, const uint64_t numElements, const int64_t timestampStart, const int64_t timestampEnd, const dv::PacketHeader &header)

Private Members

std::unique_ptr<dv::io::support::IOStatistics> mStats
std::unique_ptr<dv::io::compression::CompressionSupport> mCompressionSupport
std::unique_ptr<dv::FileDataTable> mFileDataTable
uint64_t mByteOffset = {0}
class XMLConfigReader

Public Functions

XMLConfigReader() = delete
inline XMLConfigReader(const std::string_view xmlContent)
inline XMLConfigReader(const std::string_view xmlContent, const std::string_view expectedRootName)
inline const XMLTreeNode &getRoot() const

Private Functions

inline void parseXML(const std::string_view xmlContent, const std::string_view expectedRootName)

Private Members

XMLTreeNode mRoot

Private Static Functions

static inline std::vector<std::reference_wrapper<const boost::property_tree::ptree>> xmlFilterChildNodes(const boost::property_tree::ptree &content, const std::string &name)
static inline void consumeXML(const boost::property_tree::ptree &content, XMLTreeNode &node)
static inline dv::io::support::VariantValueOwning stringToValueConverter(const std::string &typeStr, const std::string &valueStr)
class XMLConfigWriter

Public Functions

XMLConfigWriter() = delete
inline XMLConfigWriter(const XMLTreeNode &root)
inline const std::string &getXMLContent() const

Private Functions

inline void writeXML(const XMLTreeNode &root)

Private Members

std::string mXMLOutputContent

Private Static Functions

static inline boost::property_tree::ptree generateXML(const XMLTreeNode &node, const std::string &prevPath)
static inline std::pair<std::string, std::string> valueToStringConverter(const dv::io::support::VariantValueOwning &value)
struct XMLTreeAttribute : public dv::io::support::XMLTreeCommon

Public Functions

XMLTreeAttribute() = delete
inline explicit XMLTreeAttribute(const std::string_view name)

Public Members

dv::io::support::VariantValueOwning mValue
struct XMLTreeCommon

Subclassed by dv::io::support::XMLTreeAttribute, dv::io::support::XMLTreeNode

Public Functions

XMLTreeCommon() = delete
inline explicit XMLTreeCommon(const std::string_view name)
inline bool operator==(const XMLTreeCommon &rhs) const noexcept
inline auto operator<=>(const XMLTreeCommon &rhs) const noexcept
inline bool operator==(const std::string_view &rhs) const noexcept
inline auto operator<=>(const std::string_view &rhs) const noexcept

Public Members

std::string mName
struct XMLTreeNode : public dv::io::support::XMLTreeCommon

Public Functions

inline explicit XMLTreeNode()
inline explicit XMLTreeNode(const std::string_view name)

Public Members

std::vector<XMLTreeNode> mChildren
std::vector<XMLTreeAttribute> mAttributes
class ZstdCompressionSupport : public dv::io::compression::CompressionSupport

Public Functions

inline explicit ZstdCompressionSupport(const CompressionType type)
inline explicit ZstdCompressionSupport(const int compressionLevel)

Create a Zstd compression support class with a custom compression level. Internally sets the compression type to CompressionType::ZSTD.

See also

For more info on compression level values see here: https://facebook.github.io/zstd/zstd_manual.html

Parameters:

compressionLevel – Compression level, recommended range is [1, 22].

inline virtual void compress(dv::io::support::IODataBuffer &packet) override

Private Members

std::shared_ptr<ZSTD_CCtx_s> mContext
int mLevel = {3}
class ZstdDecompressionSupport : public dv::io::compression::DecompressionSupport

Public Functions

inline explicit ZstdDecompressionSupport(const CompressionType type)
inline virtual void decompress(std::vector<std::byte> &src, std::vector<std::byte> &target) override

Private Functions

inline void initDecompressionContext()

Private Members

std::shared_ptr<ZSTD_DCtx_s> mContext
template<class T>
concept MeanShiftKernel
template<class T1, class T2>
concept Accepts
template<class T>
concept AddressableEvent
template<class T>
concept BlockAccessible
template<class Type>
concept CompatibleWithSlicer
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/stream_slicer.hpp>

Concept that verifies that a given type is compatible for use with the stream slicer.

tparam Type:

Type to verify

template<class T>
concept Coordinate2D
template<class T>
concept Coordinate2DAccessors
template<class T>
concept Coordinate2DConstructible
template<class T>
concept Coordinate2DIterable
template<class T>
concept Coordinate2DMembers
template<class T>
concept Coordinate2DMutableIterable
template<class T>
concept Coordinate3D
template<class T>
concept Coordinate3DAccessors
template<class T>
concept Coordinate3DConstructible
template<class T>
concept Coordinate3DIterable
template<class T>
concept Coordinate3DMembers
template<class T>
concept Coordinate3DMutableIterable
template<class Packet>
concept DataPacket
template<class T, class Input>
concept DVFeatureDetectorAlgorithm
template<class T>
concept EigenType
template<class T>
concept Enum
template<class T, class EventStoreType>
concept EventFilter
template<class T, class EventStoreType>
concept EventOutputGenerator
template<class T>
concept EventStorage
template<class T, class EventStoreType>
concept EventToEventConverter
template<class T, class EventStoreType>
concept EventToFrameConverter
template<class T, class Input>
concept FeatureDetectorAlgorithm
template<class T>
concept FlatbufferPacket
template<class T>
concept FrameOutputGenerator
template<class T, class EventStoreType>
concept FrameToEventConverter
template<class T>
concept FrameToFrameConverter
template<class T>
concept HasElementsVector
template<class T>
concept HasTimestampedElementsVector
template<class T>
concept HasTimestampedElementsVectorByAccessor
template<class T>
concept HasTimestampedElementsVectorByMember
template<class T1, class T2>
concept InputStreamableFrom
template<class T1, class T2>
concept InputStreamableTo
template<typename FUNC, typename RETURN_TYPE, typename ...ARGUMENTS_TYPES>
concept InvocableReturnArgumentsStrong
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/concepts.hpp>

Checks if function is invocable with the given argument types exactly and its return value is the same as the given return type.

tparam FUNC:

function-like object to check.

tparam RETURN_TYPE:

required return type.

tparam ARGUMENTS_TYPES:

required argument types.

template<typename FUNC, typename RETURN_TYPE, typename ...ARGUMENTS_TYPES>
concept InvocableReturnArgumentsWeak
#include </builds/inivation/dv/dv-processing/include/dv-processing/core/concepts.hpp>

Checks if function is invocable with the given argument types and its return value is convertible to the given return type.

tparam FUNC:

function-like object to check.

tparam RETURN_TYPE:

required return type.

tparam ARGUMENTS_TYPES:

required argument types.

template<class T1, class T2>
concept IOStreamableFrom
template<class T1, class T2>
concept IOStreamableTo
template<typename T>
concept Iterable
template<class T>
concept KeyPoint
template<class T>
concept KeyPointConstructible
template<class T>
concept KeyPointIterable
template<class T>
concept KeyPointMutableIterable
template<typename T>
concept MutableIterable
template<typename T>
concept number
template<class T>
concept OpenCVFeatureDetectorAlgorithm
template<class T1, class T2>
concept OutputStreamableFrom
template<class T1, class T2>
concept OutputStreamableTo
template<class T>
concept SupportsConstantDepth
template<class T>
concept TimedImageContainer
template<class T>
concept Timestamped
template<class T>
concept TimestampedByAccessor
template<class T>
concept TimestampedByMember
template<class T>
concept TimestampedIterable
template<class T>
concept TimestampMatrixContainer
template<class T, class EventStoreType>
concept TimeSurface
template<typename T>
concept HasCustomExceptionFormatter
template<typename T>
concept HasExtraExceptionInfo
template<typename T>
concept NoCustomExceptionFormatter
namespace dv

Typedefs

using EventStore = dv::AddressableEventStorage<dv::Event, dv::EventPacket>
using DepthEventStore = dv::AddressableEventStorage<dv::DepthEvent, dv::DepthEventPacket>
using EventStreamSlicer = dv::StreamSlicer<dv::EventStore>
using FrameStreamSlicer = dv::StreamSlicer<std::vector<dv::Frame>>
using IMUStreamSlicer = dv::StreamSlicer<std::vector<dv::IMU>>
using TriggerStreamSlicer = dv::StreamSlicer<std::vector<dv::Trigger>>
using TimeSurface = TimeSurfaceBase<EventStore>
using SpeedInvariantTimeSurface = dv::SpeedInvariantTimeSurfaceBase<dv::EventStore>
using StereoEventStreamSlicer = AddressableStereoEventStreamSlicer<dv::EventStore>
using TimestampClock = std::chrono::system_clock
using TimestampResolution = std::chrono::microseconds
using Duration = TimestampResolution

Duration type that stores microsecond time period.

using TimePoint = std::chrono::time_point<TimestampClock, TimestampResolution>

Timepoint type that stores microsecond time point related to system clock.
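
A hedged sketch of the EventStreamSlicer and Duration aliases in use. The doEveryTimeInterval()/accept() calls, the emplace_back() signature and the header path are assumptions based on typical dv-processing usage:

#include <dv-processing/core/core.hpp> // header path is an assumption
#include <cstdint>
#include <iostream>

int main() {
    // Slice the event stream into fixed 33 ms windows.
    dv::EventStreamSlicer slicer;
    slicer.doEveryTimeInterval(dv::Duration(33'000), [](const dv::EventStore &slice) {
        std::cout << "Slice with " << slice.size() << " events" << std::endl;
    });

    // Feed synthetic events; the callback fires for every completed window.
    dv::EventStore events;
    for (int64_t timestamp = 0; timestamp < 100'000; timestamp += 1'000) {
        events.emplace_back(timestamp, 10, 10, true);
    }
    slicer.accept(events);
    return 0;
}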

Enums

enum class EventColor : uint8_t

The EventColor enum contains the color of the Bayer color filter for a specific event address. WHITE means White/No Filter. Please take into account that there are usually twice as many green pixels as there are red or blue ones.

Values:

enumerator WHITE
enumerator RED
enumerator GREEN1
enumerator BLUE
enumerator GREEN2
enum class PixelArrangement : uint8_t

Color pixel block arrangement on the sensor. The sensor's color block usually contains one red, one blue, and two green pixels. They can be arranged in different orders, so for exact color extraction the pixel arrangement needs to be known.

Values:

enumerator MONO

No color filter present, all light passes.

enumerator RGBG

Standard Bayer color filter, 1 red 2 green 1 blue. Variation 1.

enumerator GRGB

Standard Bayer color filter, 1 red 2 green 1 blue. Variation 2.

enumerator GBGR

Standard Bayer color filter, 1 red 2 green 1 blue. Variation 3.

enumerator BGRG

Standard Bayer color filter, 1 red 2 green 1 blue. Variation 4.

enum class TimeSlicingApproach

Time handling approaches for number based slicing.

Values:

enumerator BACKWARD

Assign gap elements between previous numeric slice and current one.

enumerator FORWARD

Assign gap elements between current numeric slice and next one.

enum class FrameFormat : int8_t

Format values are compatible with OpenCV. Pixel layout follows OpenCV standard.

Values:

enumerator OPENCV_8U_C1
enumerator OPENCV_8S_C1
enumerator OPENCV_16U_C1
enumerator OPENCV_16S_C1
enumerator OPENCV_32S_C1
enumerator OPENCV_32F_C1
enumerator OPENCV_64F_C1
enumerator OPENCV_16F_C1
enumerator OPENCV_8U_C2
enumerator OPENCV_8S_C2
enumerator OPENCV_16U_C2
enumerator OPENCV_16S_C2
enumerator OPENCV_32S_C2
enumerator OPENCV_32F_C2
enumerator OPENCV_64F_C2
enumerator OPENCV_16F_C2
enumerator OPENCV_8U_C3
enumerator OPENCV_8S_C3
enumerator OPENCV_16U_C3
enumerator OPENCV_16S_C3
enumerator OPENCV_32S_C3
enumerator OPENCV_32F_C3
enumerator OPENCV_64F_C3
enumerator OPENCV_16F_C3
enumerator OPENCV_8U_C4
enumerator OPENCV_8S_C4
enumerator OPENCV_16U_C4
enumerator OPENCV_16S_C4
enumerator OPENCV_32S_C4
enumerator OPENCV_32F_C4
enumerator OPENCV_64F_C4
enumerator OPENCV_16F_C4
enumerator MIN
enumerator MAX
enum class FrameSource : int8_t

Image data source.

Values:

enumerator UNDEFINED

Undefined source, this value indicates that source field shouldn’t be considered at all.

enumerator SENSOR

Image comes from a frame sensor.

enumerator ACCUMULATION

Image was accumulated.

enumerator MOTION_COMPENSATION

Image was accumulated using motion compensation.

enumerator SYNTHETIC

Image is synthetic, it does not represent any real data.

enumerator RECONSTRUCTION

Reconstructed image; it may come from a neural network that converts events to images.

enumerator VISUALIZATION

The image is designated for visualization only.

enumerator OTHER

Other sources, can be used to indicate a custom algorithm for image generation.

enumerator MIN
enumerator MAX
enum class TriggerType : int8_t

Values:

enumerator TIMESTAMP_RESET

A timestamp reset occurred.

enumerator EXTERNAL_SIGNAL_RISING_EDGE

A rising edge was detected (External Input module on device).

enumerator EXTERNAL_SIGNAL_FALLING_EDGE

A falling edge was detected (External Input module on device).

enumerator EXTERNAL_SIGNAL_PULSE

A pulse was detected (External Input module on device).

enumerator EXTERNAL_GENERATOR_RISING_EDGE

A rising edge was generated (External Generator module on device).

enumerator EXTERNAL_GENERATOR_FALLING_EDGE

A falling edge was generated (External Generator module on device).

enumerator APS_FRAME_START

An APS frame capture has started (Frame Event will follow).

enumerator APS_FRAME_END

An APS frame capture has completed (Frame Event is contemporary).

enumerator APS_EXPOSURE_START

An APS frame exposure has started (Frame Event will follow).

enumerator APS_EXPOSURE_END

An APS frame exposure has completed (Frame Event will follow).

enumerator MIN
enumerator MAX
enum class Constants : int32_t

Values:

enumerator AEDAT_VERSION_LENGTH
enumerator MIN
enumerator MAX
enum class CompressionType : int32_t

Values:

enumerator NONE
enumerator LZ4
enumerator LZ4_HIGH
enumerator ZSTD
enumerator ZSTD_HIGH
enumerator MIN
enumerator MAX

Functions

template<typename EXPR, typename MSG>
void runtime_assert(EXPR &&expression, MSG &&message, const std::source_location &location = std::source_location::current())
inline uint32_t coordinateHash(const int16_t x, const int16_t y)

Function that creates a perfect hash for 2D coordinates.

Parameters:
  • x – x coordinate

  • y – y coordinate

Returns:

a 32-bit hash that uniquely identifies the coordinates

template<class EventStoreType>
inline void roiFilter(const EventStoreType &in, EventStoreType &out, const cv::Rect &roi)

Extracts only the events that are within the defined region of interest. This function copies the events from the in EventStore into the given out EventStore, if they intersect with the given region of interest rectangle.

Parameters:
  • in – The EventStore to operate on. Won’t be modified.

  • out – The EventStore to put the ROI events into. Will get modified.

  • roi – The rectangle with the region of interest.

template<class EventStoreType>
inline void polarityFilter(const EventStoreType &in, EventStoreType &out, bool polarity)

Filters events by polarity. Only events that exhibit the same polarity as given in polarity are kept.

Parameters:
  • in – Incoming EventStore to operate on. Won’t get modified.

  • out – The outgoing EventStore to store the kept events on

  • polarity – The polarity of the events that should be kept

template<class EventStoreType>
inline void maskFilter(const EventStoreType &in, EventStoreType &out, const cv::Mat &mask)

Filter event with a coordinate mask. Discards any events that happen on coordinates where mask has a zero value and retains all events with coordinates where mask has a non-zero value.

Template Parameters:

EventStoreType – Class for the event store container.

Parameters:
  • in – Incoming EventStore to operate on. Won’t get modified.

  • out – The outgoing EventStore to store the kept events on

  • mask – The mask to be applied (requires CV_8UC1 type).

template<class EventStoreType>
inline void scale(const EventStoreType &in, EventStoreType &out, double xDivision, double yDivision)

Projects the event coordinates onto a smaller range. The x- and y-coordinates are divided by xDivision and yDivision respectively and floored to the next integer. This forms the new coordinates of the event. Due to the nature of this, it can happen that multiple events end up happening simultaneously at the same location. This is still a valid event stream, as time keeps monotonically increasing, but is something that is unlikely to be generated by an event camera.

Parameters:
  • in – The EventStore to operate on. Won’t get modified

  • out – The outgoing EventStore to store the projected events on

  • xDivision – Division factor for the x-coordinate for the events

  • yDivision – Division factor for the y-coordinate of the events

template<class EventStoreType>
inline cv::Rect boundingRect(const EventStoreType &packet)

Computes and returns a rectangle with dimensions such that all the events in the given EventStore fall into the bounding box.

Parameters:

packet – The EventStore to work on

Returns:

The smallest possible rectangle that contains all the events in packet.
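
A small sketch chaining the filter helpers above. Only the header path and the EventStore emplace_back(timestamp, x, y, polarity) signature are assumptions:

#include <dv-processing/core/core.hpp> // header path is an assumption
#include <opencv2/core.hpp>

int main() {
    // Three hand-made events (emplace_back signature assumed).
    dv::EventStore events;
    events.emplace_back(1'000, 10, 10, true);
    events.emplace_back(2'000, 200, 150, false);
    events.emplace_back(3'000, 50, 40, true);

    // Keep only events inside a 100x100 region at the origin.
    dv::EventStore inRoi;
    dv::roiFilter(events, inRoi, cv::Rect(0, 0, 100, 100));

    // Of those, keep only positive-polarity events.
    dv::EventStore positive;
    dv::polarityFilter(inRoi, positive, true);

    // Smallest rectangle containing whatever survived both filters.
    const cv::Rect bounds = dv::boundingRect(positive);
    return bounds.area() > 0 ? 0 : 1;
}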

inline std::ostream &operator<<(std::ostream &os, const EventColor &var)
inline std::ostream &operator<<(std::ostream &os, const PixelArrangement &var)
inline EventColor colorForEvent(const dv::Event &event, const PixelArrangement pixelArrangement = PixelArrangement::MONO)

Determine the color of the Bayer color filter for a specific event, based on its address. Please take into account that there are usually twice as many green pixels as there are red or blue ones.

Parameters:
  • event – event to determine filter color for.

  • pixelArrangement – color pixel arrangement for a sensor.

Returns:

filter color.

inline EventColor colorForPoint(const cv::Point pixelCoordinates, const PixelArrangement pixelArrangement = PixelArrangement::MONO)

Determine the color of the Bayer color filter for specific pixel coordinates. Please take into account that there are usually twice as many green pixels as there are red or blue ones.

Parameters:
  • pixelCoordinates – position to determine filter color for.

  • pixelArrangement – color pixel arrangement for a sensor.

Returns:

filter color.
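
A minimal usage sketch for colorForPoint(); only the header path is an assumption:

#include <dv-processing/core/core.hpp> // header path is an assumption
#include <opencv2/core.hpp>

int main() {
    // Bayer filter color at pixel (10, 21) for a sensor with RGBG arrangement.
    const dv::EventColor color = dv::colorForPoint(cv::Point(10, 21), dv::PixelArrangement::RGBG);

    // GREEN1 and GREEN2 both denote green pixels; green appears twice per color block.
    const bool isGreen = (color == dv::EventColor::GREEN1) || (color == dv::EventColor::GREEN2);
    return isGreen ? 0 : 1;
}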

inline std::ostream &operator<<(std::ostream &os, const TimeSlicingApproach &var)
inline TimePoint toTimePoint(const int64_t timestamp)

Convert a 64-bit integer microsecond timestamp into a chrono time-point.

Parameters:

timestamp – 64-bit integer microsecond timestamp

Returns:

Chrono time point (microseconds, system clock).

inline int64_t fromTimePoint(const TimePoint timepoint)

Convert a chrono time-point into a 64-bit integer microsecond timestamp.

Parameters:

timepoint – Chrono time point (microseconds, system clock).

Returns:

64-bit integer microsecond timestamp

inline int64_t now()
Returns:

Current system clock timestamp in microseconds as 64-bit integer.
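
A short sketch of round-tripping between the two timestamp representations; only the header path is an assumption:

#include <dv-processing/core/core.hpp> // header path is an assumption
#include <cassert>
#include <cstdint>

int main() {
    // Current system time as a 64-bit microsecond timestamp.
    const int64_t timestamp = dv::now();

    // Round-trip through the chrono-based representation; no precision is lost
    // because both sides use microsecond resolution.
    const dv::TimePoint timePoint = dv::toTimePoint(timestamp);
    assert(dv::fromTimePoint(timePoint) == timestamp);
    return 0;
}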

template<dv::concepts::Enum Enumeration>
constexpr std::underlying_type_t<Enumeration> EnumAsInteger(const Enumeration value) noexcept

Functions to help handle enumerations and their values.

template<dv::concepts::Enum Enumeration, std::integral T>
constexpr Enumeration IntegerAsEnum(const T value) noexcept
template<typename T, typename U>
inline bool vectorContains(const std::vector<T> &vec, const U &item)

Functions to help deal with common vector operations: vectorContains(vec, item), vectorContainsIf(vec, predicate), vectorRemove(vec, item), vectorRemoveIf(vec, predicate), vectorSortUnique(vec) and vectorSortUnique(vec, comparator).

template<typename T, typename Pred>
inline bool vectorContainsIf(const std::vector<T> &vec, Pred predicate)
template<typename T, typename U>
inline size_t vectorRemove(std::vector<T> &vec, const U &item)
template<typename T, typename Pred>
inline size_t vectorRemoveIf(std::vector<T> &vec, Pred predicate)
template<typename T>
inline void vectorSortUnique(std::vector<T> &vec)
template<typename T, typename Compare>
inline void vectorSortUnique(std::vector<T> &vec, Compare comp)
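
A usage sketch for the vector helpers (assuming they live in the dv namespace; headers omitted):

    std::vector<int> values{3, 1, 3, 2};

    dv::vectorSortUnique(values);                                  // values == {1, 2, 3}
    const bool hasTwo = dv::vectorContains(values, 2);             // true
    dv::vectorRemoveIf(values, [](const int v) { return v > 2; }); // removes 3
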
inline std::filesystem::path pathResolveNonExisting(const std::filesystem::path &path)

Path cleanup functions for existing paths (canonical) and possibly non-existing ones (absolute).

inline std::filesystem::path pathResolveExisting(const std::filesystem::path &path)
template<typename ObjectT, typename ...Args>
inline void *mallocConstructorSize(const size_t sizeOfObject, Args&&... args)
template<typename ObjectT, typename ...Args>
inline void *mallocConstructor(Args&&... args)
template<typename ObjectT>
inline void mallocDestructor(void *object) noexcept
inline std::string errnoToString(int errorNumber)
template<concepts::Coordinate2D Input>
inline bool isWithinDimensions(const Input &point, const cv::Size &resolution)

Check whether the given point is non-negative and within the dimensions of the given resolution. The following check is performed: X ∈ [0; (width - 1)] and Y ∈ [0; (height - 1)]. For floating-point coordinates the fractional part is also checked: the function returns false if even the fractional part lies beyond the valid range.

Parameters:
  • point – Coordinates to check.

  • resolution – Pixel space resolution.

Returns:

True if coordinates are within valid range, false otherwise.
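
For example (a sketch; cv::Point and cv::Point2f are assumed to satisfy the Coordinate2D concept; headers omitted):

    const cv::Size resolution(640, 480);

    dv::isWithinDimensions(cv::Point(639, 479), resolution);        // true
    dv::isWithinDimensions(cv::Point(640, 100), resolution);        // false, x out of range
    dv::isWithinDimensions(cv::Point2f(639.5f, 100.f), resolution); // false, fractional part overflows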

inline bool operator==(const BoundingBox &lhs, const BoundingBox &rhs)
inline bool operator==(const BoundingBoxPacket &lhs, const BoundingBoxPacket &rhs)
inline const flatbuffers::TypeTable *BoundingBoxTypeTable()
inline const flatbuffers::TypeTable *BoundingBoxPacketTypeTable()
inline flatbuffers::Offset<BoundingBoxFlatbuffer> CreateBoundingBox(flatbuffers::FlatBufferBuilder &_fbb, int64_t timestamp = 0, float topLeftX = 0.0f, float topLeftY = 0.0f, float bottomRightX = 0.0f, float bottomRightY = 0.0f, float confidence = 0.0f, flatbuffers::Offset<flatbuffers::String> label = 0)
inline flatbuffers::Offset<BoundingBoxFlatbuffer> CreateBoundingBoxDirect(flatbuffers::FlatBufferBuilder &_fbb, int64_t timestamp = 0, float topLeftX = 0.0f, float topLeftY = 0.0f, float bottomRightX = 0.0f, float bottomRightY = 0.0f, float confidence = 0.0f, const char *label = nullptr)
inline flatbuffers::Offset<BoundingBoxFlatbuffer> CreateBoundingBox(flatbuffers::FlatBufferBuilder &_fbb, const BoundingBox *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline flatbuffers::Offset<BoundingBoxPacketFlatbuffer> CreateBoundingBoxPacket(flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<BoundingBoxFlatbuffer>>> elements = 0)
inline flatbuffers::Offset<BoundingBoxPacketFlatbuffer> CreateBoundingBoxPacketDirect(flatbuffers::FlatBufferBuilder &_fbb, const std::vector<flatbuffers::Offset<BoundingBoxFlatbuffer>> *elements = nullptr)
inline flatbuffers::Offset<BoundingBoxPacketFlatbuffer> CreateBoundingBoxPacket(flatbuffers::FlatBufferBuilder &_fbb, const BoundingBoxPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline const dv::BoundingBoxPacketFlatbuffer *GetBoundingBoxPacket(const void *buf)
inline const dv::BoundingBoxPacketFlatbuffer *GetSizePrefixedBoundingBoxPacket(const void *buf)
inline const char *BoundingBoxPacketIdentifier()
inline bool BoundingBoxPacketBufferHasIdentifier(const void *buf)
inline bool VerifyBoundingBoxPacketBuffer(flatbuffers::Verifier &verifier)
inline bool VerifySizePrefixedBoundingBoxPacketBuffer(flatbuffers::Verifier &verifier)
inline void FinishBoundingBoxPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::BoundingBoxPacketFlatbuffer> root)
inline void FinishSizePrefixedBoundingBoxPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::BoundingBoxPacketFlatbuffer> root)
inline std::unique_ptr<BoundingBoxPacket> UnPackBoundingBoxPacket(const void *buf, const flatbuffers::resolver_function_t *res = nullptr)
inline bool operator==(const DepthEvent &lhs, const DepthEvent &rhs)
inline bool operator==(const DepthEventPacket &lhs, const DepthEventPacket &rhs)
inline const flatbuffers::TypeTable *DepthEventTypeTable()
inline const flatbuffers::TypeTable *DepthEventPacketTypeTable()
FLATBUFFERS_MANUALLY_ALIGNED_STRUCT (8) DepthEvent final
FLATBUFFERS_STRUCT_END (DepthEvent, 16)
inline flatbuffers::Offset<DepthEventPacketFlatbuffer> CreateDepthEventPacket(flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset<flatbuffers::Vector<const DepthEvent*>> elements = 0)
inline flatbuffers::Offset<DepthEventPacketFlatbuffer> CreateDepthEventPacketDirect(flatbuffers::FlatBufferBuilder &_fbb, const std::vector<DepthEvent> *elements = nullptr)
inline flatbuffers::Offset<DepthEventPacketFlatbuffer> CreateDepthEventPacket(flatbuffers::FlatBufferBuilder &_fbb, const DepthEventPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline const dv::DepthEventPacketFlatbuffer *GetDepthEventPacket(const void *buf)
inline const dv::DepthEventPacketFlatbuffer *GetSizePrefixedDepthEventPacket(const void *buf)
inline const char *DepthEventPacketIdentifier()
inline bool DepthEventPacketBufferHasIdentifier(const void *buf)
inline bool VerifyDepthEventPacketBuffer(flatbuffers::Verifier &verifier)
inline bool VerifySizePrefixedDepthEventPacketBuffer(flatbuffers::Verifier &verifier)
inline void FinishDepthEventPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::DepthEventPacketFlatbuffer> root)
inline void FinishSizePrefixedDepthEventPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::DepthEventPacketFlatbuffer> root)
inline std::unique_ptr<DepthEventPacket> UnPackDepthEventPacket(const void *buf, const flatbuffers::resolver_function_t *res = nullptr)
inline bool operator==(const DepthFrame &lhs, const DepthFrame &rhs)
inline const flatbuffers::TypeTable *DepthFrameTypeTable()
inline flatbuffers::Offset<DepthFrameFlatbuffer> CreateDepthFrame(flatbuffers::FlatBufferBuilder &_fbb, int64_t timestamp = 0, int16_t sizeX = 0, int16_t sizeY = 0, uint16_t minDepth = 0, uint16_t maxDepth = 65535, uint16_t step = 1, flatbuffers::Offset<flatbuffers::Vector<uint16_t>> depth = 0)
inline flatbuffers::Offset<DepthFrameFlatbuffer> CreateDepthFrameDirect(flatbuffers::FlatBufferBuilder &_fbb, int64_t timestamp = 0, int16_t sizeX = 0, int16_t sizeY = 0, uint16_t minDepth = 0, uint16_t maxDepth = 65535, uint16_t step = 1, const std::vector<uint16_t> *depth = nullptr)
inline flatbuffers::Offset<DepthFrameFlatbuffer> CreateDepthFrame(flatbuffers::FlatBufferBuilder &_fbb, const DepthFrame *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline const dv::DepthFrameFlatbuffer *GetDepthFrame(const void *buf)
inline const dv::DepthFrameFlatbuffer *GetSizePrefixedDepthFrame(const void *buf)
inline const char *DepthFrameIdentifier()
inline bool DepthFrameBufferHasIdentifier(const void *buf)
inline bool VerifyDepthFrameBuffer(flatbuffers::Verifier &verifier)
inline bool VerifySizePrefixedDepthFrameBuffer(flatbuffers::Verifier &verifier)
inline void FinishDepthFrameBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::DepthFrameFlatbuffer> root)
inline void FinishSizePrefixedDepthFrameBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::DepthFrameFlatbuffer> root)
inline std::unique_ptr<DepthFrame> UnPackDepthFrame(const void *buf, const flatbuffers::resolver_function_t *res = nullptr)
inline bool operator==(const Event &lhs, const Event &rhs)
inline bool operator==(const EventPacket &lhs, const EventPacket &rhs)
inline const flatbuffers::TypeTable *EventTypeTable()
inline const flatbuffers::TypeTable *EventPacketTypeTable()
FLATBUFFERS_STRUCT_END (Event, 16)
inline flatbuffers::Offset<EventPacketFlatbuffer> CreateEventPacket(flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset<flatbuffers::Vector<const Event*>> elements = 0)
inline flatbuffers::Offset<EventPacketFlatbuffer> CreateEventPacketDirect(flatbuffers::FlatBufferBuilder &_fbb, const std::vector<Event> *elements = nullptr)
inline flatbuffers::Offset<EventPacketFlatbuffer> CreateEventPacket(flatbuffers::FlatBufferBuilder &_fbb, const EventPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline const dv::EventPacketFlatbuffer *GetEventPacket(const void *buf)
inline const dv::EventPacketFlatbuffer *GetSizePrefixedEventPacket(const void *buf)
inline const char *EventPacketIdentifier()
inline bool EventPacketBufferHasIdentifier(const void *buf)
inline bool VerifyEventPacketBuffer(flatbuffers::Verifier &verifier)
inline bool VerifySizePrefixedEventPacketBuffer(flatbuffers::Verifier &verifier)
inline void FinishEventPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::EventPacketFlatbuffer> root)
inline void FinishSizePrefixedEventPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::EventPacketFlatbuffer> root)
inline std::unique_ptr<EventPacket> UnPackEventPacket(const void *buf, const flatbuffers::resolver_function_t *res = nullptr)
inline bool operator==(const Frame &lhs, const Frame &rhs)
inline const flatbuffers::TypeTable *FrameTypeTable()
inline const FrameFormat (&EnumValuesFrameFormat())[32]
inline const char *const *EnumNamesFrameFormat()
inline const char *EnumNameFrameFormat(FrameFormat e)
inline const FrameSource (&EnumValuesFrameSource())[8]
inline const char *const *EnumNamesFrameSource()
inline const char *EnumNameFrameSource(FrameSource e)
inline flatbuffers::Offset<FrameFlatbuffer> CreateFrame(flatbuffers::FlatBufferBuilder &_fbb, int64_t timestamp = 0, int64_t timestampStartOfFrame = 0, int64_t timestampEndOfFrame = 0, int64_t timestampStartOfExposure = 0, int64_t timestampEndOfExposure = 0, FrameFormat format = FrameFormat::OPENCV_8U_C1, int16_t sizeX = 0, int16_t sizeY = 0, int16_t positionX = 0, int16_t positionY = 0, flatbuffers::Offset<flatbuffers::Vector<uint8_t>> pixels = 0, int64_t exposure = 0, FrameSource source = FrameSource::UNDEFINED)
inline flatbuffers::Offset<FrameFlatbuffer> CreateFrameDirect(flatbuffers::FlatBufferBuilder &_fbb, int64_t timestamp = 0, int64_t timestampStartOfFrame = 0, int64_t timestampEndOfFrame = 0, int64_t timestampStartOfExposure = 0, int64_t timestampEndOfExposure = 0, FrameFormat format = FrameFormat::OPENCV_8U_C1, int16_t sizeX = 0, int16_t sizeY = 0, int16_t positionX = 0, int16_t positionY = 0, const std::vector<uint8_t> *pixels = nullptr, int64_t exposure = 0, FrameSource source = FrameSource::UNDEFINED)
inline flatbuffers::Offset<FrameFlatbuffer> CreateFrame(flatbuffers::FlatBufferBuilder &_fbb, const Frame *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline const flatbuffers::TypeTable *FrameFormatTypeTable()
inline const flatbuffers::TypeTable *FrameSourceTypeTable()
inline const dv::FrameFlatbuffer *GetFrame(const void *buf)
inline const dv::FrameFlatbuffer *GetSizePrefixedFrame(const void *buf)
inline const char *FrameIdentifier()
inline bool FrameBufferHasIdentifier(const void *buf)
inline bool VerifyFrameBuffer(flatbuffers::Verifier &verifier)
inline bool VerifySizePrefixedFrameBuffer(flatbuffers::Verifier &verifier)
inline void FinishFrameBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::FrameFlatbuffer> root)
inline void FinishSizePrefixedFrameBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::FrameFlatbuffer> root)
inline std::unique_ptr<Frame> UnPackFrame(const void *buf, const flatbuffers::resolver_function_t *res = nullptr)
inline bool operator==(const Point3f &lhs, const Point3f &rhs)
inline bool operator==(const Point2f &lhs, const Point2f &rhs)
inline bool operator==(const Vec3f &lhs, const Vec3f &rhs)
inline bool operator==(const Vec2f &lhs, const Vec2f &rhs)
inline bool operator==(const Quaternion &lhs, const Quaternion &rhs)
inline const flatbuffers::TypeTable *Point3fTypeTable()
inline const flatbuffers::TypeTable *Point2fTypeTable()
inline const flatbuffers::TypeTable *Vec3fTypeTable()
inline const flatbuffers::TypeTable *Vec2fTypeTable()
inline const flatbuffers::TypeTable *QuaternionTypeTable()
FLATBUFFERS_MANUALLY_ALIGNED_STRUCT (4) Point3f final

Structure representing absolute position of a 3D point.

Quaternion with Eigen compatible memory layout, should follow the Hamilton convention.

Structure representing a 2D vector.

Structure representing a 3D vector.

Structure representing absolute position of a 2D point.

FLATBUFFERS_STRUCT_END (Point3f, 12)
FLATBUFFERS_STRUCT_END (Point2f, 8)
FLATBUFFERS_STRUCT_END (Vec3f, 12)
FLATBUFFERS_STRUCT_END (Vec2f, 8)
FLATBUFFERS_STRUCT_END (Quaternion, 16)
inline bool operator==(const IMU &lhs, const IMU &rhs)
inline bool operator==(const IMUPacket &lhs, const IMUPacket &rhs)
inline const flatbuffers::TypeTable *IMUTypeTable()
inline const flatbuffers::TypeTable *IMUPacketTypeTable()
inline flatbuffers::Offset<IMUFlatbuffer> CreateIMU(flatbuffers::FlatBufferBuilder &_fbb, int64_t timestamp = 0, float temperature = 0.0f, float accelerometerX = 0.0f, float accelerometerY = 0.0f, float accelerometerZ = 0.0f, float gyroscopeX = 0.0f, float gyroscopeY = 0.0f, float gyroscopeZ = 0.0f, float magnetometerX = 0.0f, float magnetometerY = 0.0f, float magnetometerZ = 0.0f)
inline flatbuffers::Offset<IMUFlatbuffer> CreateIMU(flatbuffers::FlatBufferBuilder &_fbb, const IMU *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline flatbuffers::Offset<IMUPacketFlatbuffer> CreateIMUPacket(flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<IMUFlatbuffer>>> elements = 0)
inline flatbuffers::Offset<IMUPacketFlatbuffer> CreateIMUPacketDirect(flatbuffers::FlatBufferBuilder &_fbb, const std::vector<flatbuffers::Offset<IMUFlatbuffer>> *elements = nullptr)
inline flatbuffers::Offset<IMUPacketFlatbuffer> CreateIMUPacket(flatbuffers::FlatBufferBuilder &_fbb, const IMUPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline const dv::IMUPacketFlatbuffer *GetIMUPacket(const void *buf)
inline const dv::IMUPacketFlatbuffer *GetSizePrefixedIMUPacket(const void *buf)
inline const char *IMUPacketIdentifier()
inline bool IMUPacketBufferHasIdentifier(const void *buf)
inline bool VerifyIMUPacketBuffer(flatbuffers::Verifier &verifier)
inline bool VerifySizePrefixedIMUPacketBuffer(flatbuffers::Verifier &verifier)
inline void FinishIMUPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::IMUPacketFlatbuffer> root)
inline void FinishSizePrefixedIMUPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::IMUPacketFlatbuffer> root)
inline std::unique_ptr<IMUPacket> UnPackIMUPacket(const void *buf, const flatbuffers::resolver_function_t *res = nullptr)
inline bool operator==(const Observation &lhs, const Observation &rhs)
inline bool operator==(const Landmark &lhs, const Landmark &rhs)
inline bool operator==(const LandmarksPacket &lhs, const LandmarksPacket &rhs)
inline const flatbuffers::TypeTable *ObservationTypeTable()
inline const flatbuffers::TypeTable *LandmarkTypeTable()
inline const flatbuffers::TypeTable *LandmarksPacketTypeTable()
inline flatbuffers::Offset<ObservationFlatbuffer> CreateObservation(flatbuffers::FlatBufferBuilder &_fbb, int32_t trackId = 0, int32_t cameraId = 0, flatbuffers::Offset<flatbuffers::String> cameraName = 0, int64_t timestamp = 0)
inline flatbuffers::Offset<ObservationFlatbuffer> CreateObservationDirect(flatbuffers::FlatBufferBuilder &_fbb, int32_t trackId = 0, int32_t cameraId = 0, const char *cameraName = nullptr, int64_t timestamp = 0)
inline flatbuffers::Offset<ObservationFlatbuffer> CreateObservation(flatbuffers::FlatBufferBuilder &_fbb, const Observation *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline flatbuffers::Offset<LandmarkFlatbuffer> CreateLandmark(flatbuffers::FlatBufferBuilder &_fbb, const Point3f *pt = 0, int64_t id = 0, int64_t timestamp = 0, flatbuffers::Offset<flatbuffers::Vector<int8_t>> descriptor = 0, flatbuffers::Offset<flatbuffers::String> descriptorType = 0, flatbuffers::Offset<flatbuffers::Vector<float>> covariance = 0, flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<ObservationFlatbuffer>>> observations = 0)
inline flatbuffers::Offset<LandmarkFlatbuffer> CreateLandmarkDirect(flatbuffers::FlatBufferBuilder &_fbb, const Point3f *pt = 0, int64_t id = 0, int64_t timestamp = 0, const std::vector<int8_t> *descriptor = nullptr, const char *descriptorType = nullptr, const std::vector<float> *covariance = nullptr, const std::vector<flatbuffers::Offset<ObservationFlatbuffer>> *observations = nullptr)
inline flatbuffers::Offset<LandmarkFlatbuffer> CreateLandmark(flatbuffers::FlatBufferBuilder &_fbb, const Landmark *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline flatbuffers::Offset<LandmarksPacketFlatbuffer> CreateLandmarksPacket(flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<LandmarkFlatbuffer>>> elements = 0, flatbuffers::Offset<flatbuffers::String> referenceFrame = 0)
inline flatbuffers::Offset<LandmarksPacketFlatbuffer> CreateLandmarksPacketDirect(flatbuffers::FlatBufferBuilder &_fbb, const std::vector<flatbuffers::Offset<LandmarkFlatbuffer>> *elements = nullptr, const char *referenceFrame = nullptr)
inline flatbuffers::Offset<LandmarksPacketFlatbuffer> CreateLandmarksPacket(flatbuffers::FlatBufferBuilder &_fbb, const LandmarksPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline const dv::LandmarksPacketFlatbuffer *GetLandmarksPacket(const void *buf)
inline const dv::LandmarksPacketFlatbuffer *GetSizePrefixedLandmarksPacket(const void *buf)
inline const char *LandmarksPacketIdentifier()
inline bool LandmarksPacketBufferHasIdentifier(const void *buf)
inline bool VerifyLandmarksPacketBuffer(flatbuffers::Verifier &verifier)
inline bool VerifySizePrefixedLandmarksPacketBuffer(flatbuffers::Verifier &verifier)
inline void FinishLandmarksPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::LandmarksPacketFlatbuffer> root)
inline void FinishSizePrefixedLandmarksPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::LandmarksPacketFlatbuffer> root)
inline std::unique_ptr<LandmarksPacket> UnPackLandmarksPacket(const void *buf, const flatbuffers::resolver_function_t *res = nullptr)
inline bool operator==(const Pose &lhs, const Pose &rhs)
inline const flatbuffers::TypeTable *PoseTypeTable()
inline flatbuffers::Offset<PoseFlatbuffer> CreatePose(flatbuffers::FlatBufferBuilder &_fbb, int64_t timestamp = 0, const Vec3f *translation = 0, const Quaternion *rotation = 0, flatbuffers::Offset<flatbuffers::String> referenceFrame = 0, flatbuffers::Offset<flatbuffers::String> targetFrame = 0)
inline flatbuffers::Offset<PoseFlatbuffer> CreatePoseDirect(flatbuffers::FlatBufferBuilder &_fbb, int64_t timestamp = 0, const Vec3f *translation = 0, const Quaternion *rotation = 0, const char *referenceFrame = nullptr, const char *targetFrame = nullptr)
inline flatbuffers::Offset<PoseFlatbuffer> CreatePose(flatbuffers::FlatBufferBuilder &_fbb, const Pose *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline const dv::PoseFlatbuffer *GetPose(const void *buf)
inline const dv::PoseFlatbuffer *GetSizePrefixedPose(const void *buf)
inline const char *PoseIdentifier()
inline bool PoseBufferHasIdentifier(const void *buf)
inline bool VerifyPoseBuffer(flatbuffers::Verifier &verifier)
inline bool VerifySizePrefixedPoseBuffer(flatbuffers::Verifier &verifier)
inline void FinishPoseBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::PoseFlatbuffer> root)
inline void FinishSizePrefixedPoseBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::PoseFlatbuffer> root)
inline std::unique_ptr<Pose> UnPackPose(const void *buf, const flatbuffers::resolver_function_t *res = nullptr)
inline bool operator==(const TimedKeyPoint &lhs, const TimedKeyPoint &rhs)
inline bool operator==(const TimedKeyPointPacket &lhs, const TimedKeyPointPacket &rhs)
inline const flatbuffers::TypeTable *TimedKeyPointTypeTable()
inline const flatbuffers::TypeTable *TimedKeyPointPacketTypeTable()
inline flatbuffers::Offset<TimedKeyPointFlatbuffer> CreateTimedKeyPoint(flatbuffers::FlatBufferBuilder &_fbb, const Point2f *pt = 0, float size = 0.0f, float angle = 0.0f, float response = 0.0f, int32_t octave = 0, int32_t class_id = 0, int64_t timestamp = 0)
inline flatbuffers::Offset<TimedKeyPointFlatbuffer> CreateTimedKeyPoint(flatbuffers::FlatBufferBuilder &_fbb, const TimedKeyPoint *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline flatbuffers::Offset<TimedKeyPointPacketFlatbuffer> CreateTimedKeyPointPacket(flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<TimedKeyPointFlatbuffer>>> elements = 0)
inline flatbuffers::Offset<TimedKeyPointPacketFlatbuffer> CreateTimedKeyPointPacketDirect(flatbuffers::FlatBufferBuilder &_fbb, const std::vector<flatbuffers::Offset<TimedKeyPointFlatbuffer>> *elements = nullptr)
inline flatbuffers::Offset<TimedKeyPointPacketFlatbuffer> CreateTimedKeyPointPacket(flatbuffers::FlatBufferBuilder &_fbb, const TimedKeyPointPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline const dv::TimedKeyPointPacketFlatbuffer *GetTimedKeyPointPacket(const void *buf)
inline const dv::TimedKeyPointPacketFlatbuffer *GetSizePrefixedTimedKeyPointPacket(const void *buf)
inline const char *TimedKeyPointPacketIdentifier()
inline bool TimedKeyPointPacketBufferHasIdentifier(const void *buf)
inline bool VerifyTimedKeyPointPacketBuffer(flatbuffers::Verifier &verifier)
inline bool VerifySizePrefixedTimedKeyPointPacketBuffer(flatbuffers::Verifier &verifier)
inline void FinishTimedKeyPointPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::TimedKeyPointPacketFlatbuffer> root)
inline void FinishSizePrefixedTimedKeyPointPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::TimedKeyPointPacketFlatbuffer> root)
inline std::unique_ptr<TimedKeyPointPacket> UnPackTimedKeyPointPacket(const void *buf, const flatbuffers::resolver_function_t *res = nullptr)
inline bool operator==(const Trigger &lhs, const Trigger &rhs)
inline bool operator==(const TriggerPacket &lhs, const TriggerPacket &rhs)
inline const flatbuffers::TypeTable *TriggerTypeTable()
inline const flatbuffers::TypeTable *TriggerPacketTypeTable()
inline const TriggerType (&EnumValuesTriggerType())[10]
inline const char *const *EnumNamesTriggerType()
inline const char *EnumNameTriggerType(TriggerType e)
inline flatbuffers::Offset<TriggerFlatbuffer> CreateTrigger(flatbuffers::FlatBufferBuilder &_fbb, int64_t timestamp = 0, TriggerType type = TriggerType::TIMESTAMP_RESET)
inline flatbuffers::Offset<TriggerFlatbuffer> CreateTrigger(flatbuffers::FlatBufferBuilder &_fbb, const Trigger *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline flatbuffers::Offset<TriggerPacketFlatbuffer> CreateTriggerPacket(flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<TriggerFlatbuffer>>> elements = 0)
inline flatbuffers::Offset<TriggerPacketFlatbuffer> CreateTriggerPacketDirect(flatbuffers::FlatBufferBuilder &_fbb, const std::vector<flatbuffers::Offset<TriggerFlatbuffer>> *elements = nullptr)
inline flatbuffers::Offset<TriggerPacketFlatbuffer> CreateTriggerPacket(flatbuffers::FlatBufferBuilder &_fbb, const TriggerPacket *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline const flatbuffers::TypeTable *TriggerTypeTypeTable()
inline const dv::TriggerPacketFlatbuffer *GetTriggerPacket(const void *buf)
inline const dv::TriggerPacketFlatbuffer *GetSizePrefixedTriggerPacket(const void *buf)
inline const char *TriggerPacketIdentifier()
inline bool TriggerPacketBufferHasIdentifier(const void *buf)
inline bool VerifyTriggerPacketBuffer(flatbuffers::Verifier &verifier)
inline bool VerifySizePrefixedTriggerPacketBuffer(flatbuffers::Verifier &verifier)
inline void FinishTriggerPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::TriggerPacketFlatbuffer> root)
inline void FinishSizePrefixedTriggerPacketBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::TriggerPacketFlatbuffer> root)
inline std::unique_ptr<TriggerPacket> UnPackTriggerPacket(const void *buf, const flatbuffers::resolver_function_t *res = nullptr)
inline bool operator==(const PacketHeader &lhs, const PacketHeader &rhs)
inline bool operator==(const FileDataDefinition &lhs, const FileDataDefinition &rhs)
inline bool operator==(const FileDataTable &lhs, const FileDataTable &rhs)
inline const flatbuffers::TypeTable *PacketHeaderTypeTable()
inline const flatbuffers::TypeTable *FileDataDefinitionTypeTable()
inline const flatbuffers::TypeTable *FileDataTableTypeTable()
FLATBUFFERS_STRUCT_END (PacketHeader, 8)
inline flatbuffers::Offset<FileDataDefinitionFlatbuffer> CreateFileDataDefinition(flatbuffers::FlatBufferBuilder &_fbb, int64_t ByteOffset = 0, const PacketHeader *PacketInfo = 0, int64_t NumElements = 0, int64_t TimestampStart = 0, int64_t TimestampEnd = 0)
inline flatbuffers::Offset<FileDataDefinitionFlatbuffer> CreateFileDataDefinition(flatbuffers::FlatBufferBuilder &_fbb, const FileDataDefinition *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline flatbuffers::Offset<FileDataTableFlatbuffer> CreateFileDataTable(flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<FileDataDefinitionFlatbuffer>>> Table = 0)
inline flatbuffers::Offset<FileDataTableFlatbuffer> CreateFileDataTableDirect(flatbuffers::FlatBufferBuilder &_fbb, const std::vector<flatbuffers::Offset<FileDataDefinitionFlatbuffer>> *Table = nullptr)
inline flatbuffers::Offset<FileDataTableFlatbuffer> CreateFileDataTable(flatbuffers::FlatBufferBuilder &_fbb, const FileDataTable *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline const dv::FileDataTableFlatbuffer *GetFileDataTable(const void *buf)
inline const dv::FileDataTableFlatbuffer *GetSizePrefixedFileDataTable(const void *buf)
inline const char *FileDataTableIdentifier()
inline bool FileDataTableBufferHasIdentifier(const void *buf)
inline bool VerifyFileDataTableBuffer(flatbuffers::Verifier &verifier)
inline bool VerifySizePrefixedFileDataTableBuffer(flatbuffers::Verifier &verifier)
inline void FinishFileDataTableBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::FileDataTableFlatbuffer> root)
inline void FinishSizePrefixedFileDataTableBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::FileDataTableFlatbuffer> root)
inline std::unique_ptr<FileDataTable> UnPackFileDataTable(const void *buf, const flatbuffers::resolver_function_t *res = nullptr)
inline bool operator==(const IOHeader &lhs, const IOHeader &rhs)
inline const flatbuffers::TypeTable *IOHeaderTypeTable()
inline const Constants (&EnumValuesConstants())[1]
inline const char *const *EnumNamesConstants()
inline const char *EnumNameConstants(Constants e)
inline const CompressionType (&EnumValuesCompressionType())[5]
inline const char *const *EnumNamesCompressionType()
inline const char *EnumNameCompressionType(CompressionType e)
inline flatbuffers::Offset<IOHeaderFlatbuffer> CreateIOHeader(flatbuffers::FlatBufferBuilder &_fbb, CompressionType compression = CompressionType::NONE, int64_t dataTablePosition = -1, flatbuffers::Offset<flatbuffers::String> infoNode = 0)
inline flatbuffers::Offset<IOHeaderFlatbuffer> CreateIOHeaderDirect(flatbuffers::FlatBufferBuilder &_fbb, CompressionType compression = CompressionType::NONE, int64_t dataTablePosition = -1, const char *infoNode = nullptr)
inline flatbuffers::Offset<IOHeaderFlatbuffer> CreateIOHeader(flatbuffers::FlatBufferBuilder &_fbb, const IOHeader *_o, const flatbuffers::rehasher_function_t *_rehasher = nullptr)
inline const flatbuffers::TypeTable *ConstantsTypeTable()
inline const flatbuffers::TypeTable *CompressionTypeTypeTable()
inline const dv::IOHeaderFlatbuffer *GetIOHeader(const void *buf)
inline const dv::IOHeaderFlatbuffer *GetSizePrefixedIOHeader(const void *buf)
inline const char *IOHeaderIdentifier()
inline bool IOHeaderBufferHasIdentifier(const void *buf)
inline bool VerifyIOHeaderBuffer(flatbuffers::Verifier &verifier)
inline bool VerifySizePrefixedIOHeaderBuffer(flatbuffers::Verifier &verifier)
inline void FinishIOHeaderBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::IOHeaderFlatbuffer> root)
inline void FinishSizePrefixedIOHeaderBuffer(flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset<dv::IOHeaderFlatbuffer> root)
inline std::unique_ptr<IOHeader> UnPackIOHeader(const void *buf, const flatbuffers::resolver_function_t *res = nullptr)

Variables

static constexpr bool DEBUG_ENABLED = {true}
static constexpr std::array<std::array<EventColor, 4>, 5> PIXEL_COLOR_KEYS{{{EventColor::WHITE, EventColor::WHITE, EventColor::WHITE, EventColor::WHITE}, {EventColor::RED, EventColor::GREEN1, EventColor::GREEN2, EventColor::BLUE}, {EventColor::GREEN1, EventColor::RED, EventColor::BLUE, EventColor::GREEN2}, {EventColor::GREEN2, EventColor::BLUE, EventColor::RED, EventColor::GREEN1}, {EventColor::BLUE, EventColor::GREEN2, EventColor::GREEN1, EventColor::RED},}}

Address to Color mapping for events based on Bayer filter.

static constexpr int VERSION_MAJOR = {2}
static constexpr int VERSION_MINOR = {0}
static constexpr int VERSION_PATCH = {1}
static constexpr int VERSION = {((2 * 10000) + (0 * 100) + 1)}
static constexpr std::string_view NAME_STRING = {"dv-processing"}
static constexpr std::string_view VERSION_STRING = {"2.0.1"}
namespace dv
namespace camera

Enums

enum class DistortionModel

Values:

enumerator NONE
enumerator RADIAL_TANGENTIAL
enumerator EQUIDISTANT

Functions

inline std::ostream &operator<<(std::ostream &os, const DistortionModel &var)
inline DistortionModel stringToDistortionModel(const std::string_view model)

Convert a string into the corresponding DistortionModel enum value.

Parameters:

model – String name of the distortion model ("none", "radialTangential" or "equidistant").

Returns:

the enum corresponding to the string

inline std::string distortionModelToString(const DistortionModel &model)

Convert a DistortionModel Enum into a string

Parameters:

model – Distortion model to convert.

Returns:

the string that represents the distortion model
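
A round-trip sketch using the model name strings listed further below (headers omitted):

    const dv::camera::DistortionModel model = dv::camera::stringToDistortionModel("radialTangential");
    const std::string name = dv::camera::distortionModelToString(model); // "radialTangential"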

namespace calibrations
namespace internal

Variables

static constexpr std::string_view NONE_MODEL_STRING = {"none"}
static constexpr std::string_view RADIAL_TANGENTIAL_MODEL_STRING = {"radialTangential"}
static constexpr std::string_view EQUIDISTANT_MODEL_STRING = {"equidistant"}
namespace cluster
namespace mean_shift

Typedefs

template<typename TYPE, int32_t ROWS = Eigen::Dynamic, int32_t COLUMNS = Eigen::Dynamic>
using MeanShiftRowMajorMatrixXX = MeanShiftEigenMatrixAdaptor<TYPE, ROWS, COLUMNS, Eigen::RowMajor>

Convenience alias for n-dimensional data in row-major sample order of arbitrary number of samples

template<typename TYPE, int32_t ROWS = Eigen::Dynamic, int32_t COLUMNS = Eigen::Dynamic>
using MeanShiftColMajorMatrixXX = MeanShiftEigenMatrixAdaptor<TYPE, ROWS, COLUMNS, Eigen::ColMajor>

Convenience alias for n-dimensional data in column-major sample order of arbitrary dimensions and number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using MeanShiftRowMajorMatrixX1 = MeanShiftRowMajorMatrixXX<TYPE, SAMPLES, 1>

Convenience alias for 1-dimensional data in row-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using MeanShiftRowMajorMatrixX2 = MeanShiftRowMajorMatrixXX<TYPE, SAMPLES, 2>

Convenience alias for 2-dimensional data in row-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using MeanShiftRowMajorMatrixX3 = MeanShiftRowMajorMatrixXX<TYPE, SAMPLES, 3>

Convenience alias for 3-dimensional data in row-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using MeanShiftRowMajorMatrixX4 = MeanShiftRowMajorMatrixXX<TYPE, SAMPLES, 4>

Convenience alias for 4-dimensional data in row-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using MeanShiftColMajorMatrix1X = MeanShiftColMajorMatrixXX<TYPE, 1, SAMPLES>

Convenience alias for 1-dimensional data in column-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using MeanShiftColMajorMatrix2X = MeanShiftColMajorMatrixXX<TYPE, 2, SAMPLES>

Convenience alias for 2-dimensional data in column-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using MeanShiftColMajorMatrix3X = MeanShiftColMajorMatrixXX<TYPE, 3, SAMPLES>

Convenience alias for 3-dimensional data in column-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using MeanShiftColMajorMatrix4X = MeanShiftColMajorMatrixXX<TYPE, 4, SAMPLES>

Convenience alias for 4-dimensional data in column-major sample order of arbitrary number of samples

namespace kernel
namespace concepts

Typedefs

template<class T>
using iterable_element_type = typename std::remove_reference_t<decltype(*(std::declval<T>().begin()))>

Variables

template<typename T>
constexpr bool is_eigen_type = internal::is_eigen_impl<T>::value
template<typename Needle, typename ...Haystack>
constexpr bool is_type_one_of = std::disjunction_v<std::is_same<Needle, Haystack>...>
namespace internal
namespace containers
namespace kd_tree

Typedefs

template<typename TYPE, int32_t ROWS = Eigen::Dynamic, int32_t COLUMNS = Eigen::Dynamic>
using KDTreeRowMajorXX = KDTreeMatrixAdaptor<TYPE, ROWS, COLUMNS, Eigen::RowMajor>

Convenience alias for n-dimensional data in row-major sample order of arbitrary number of samples

template<typename TYPE, int32_t ROWS = Eigen::Dynamic, int32_t COLUMNS = Eigen::Dynamic>
using KDTreeColMajorXX = KDTreeMatrixAdaptor<TYPE, ROWS, COLUMNS, Eigen::ColMajor>

Convenience alias for n-dimensional data in column-major sample order of arbitrary dimensions and number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using KDTreeRowMajorX1 = KDTreeRowMajorXX<TYPE, SAMPLES, 1>

Convenience alias for 1-dimensional data in row-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using KDTreeRowMajorX2 = KDTreeRowMajorXX<TYPE, SAMPLES, 2>

Convenience alias for 2-dimensional data in row-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using KDTreeRowMajorX3 = KDTreeRowMajorXX<TYPE, SAMPLES, 3>

Convenience alias for 3-dimensional data in row-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using KDTreeRowMajorX4 = KDTreeRowMajorXX<TYPE, SAMPLES, 4>

Convenience alias for 4-dimensional data in row-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using KDTreeColMajor1X = KDTreeColMajorXX<TYPE, 1, SAMPLES>

Convenience alias for 1-dimensional data in column-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using KDTreeColMajor2X = KDTreeColMajorXX<TYPE, 2, SAMPLES>

Convenience alias for 2-dimensional data in column-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using KDTreeColMajor3X = KDTreeColMajorXX<TYPE, 3, SAMPLES>

Convenience alias for 3-dimensional data in column-major sample order of arbitrary number of samples

template<typename TYPE, int32_t SAMPLES = Eigen::Dynamic>
using KDTreeColMajor4X = KDTreeColMajorXX<TYPE, 4, SAMPLES>

Convenience alias for 4-dimensional data in column-major sample order of arbitrary number of samples

namespace data

Functions

template<dv::concepts::KeyPointIterable KPI>
inline std::vector<cv::KeyPoint> fromTimedKeyPoints(const KPI &points)

Convert TimedKeyPoint vector into cv::KeyPoint vector.

Parameters:

points – KeyPoints to be converted.

Returns:

A vector of cv::KeyPoint.

template<dv::concepts::KeyPointIterable KPI>
inline std::vector<cv::Point2f> convertToCvPoints(const KPI &points)

Convert TimedKeyPoint vector into cv::Point2f vector.

Parameters:

points – KeyPoints to be converted.

Returns:

A vector of cv::Point2f.

inline std::vector<dv::TimedKeyPoint> fromCvKeypoints(const std::vector<cv::KeyPoint> &points, const int64_t defaultTime = 0)

Create a vector of dv::TimedKeyPoint from a given vector of cv::KeyPoint.

Parameters:
  • points – cv::KeyPoint vector to be converted.

  • defaultTime – Timestamp in microseconds to be assigned to all new TimedKeyPoints.

Returns:

A vector of TimedKeyPoints.
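
A conversion round-trip sketch (namespace qualification is taken from this listing; headers omitted):

    std::vector<cv::KeyPoint> cvPoints;
    cvPoints.emplace_back(cv::Point2f(10.f, 20.f), 3.f);

    // cv::KeyPoint -> dv::TimedKeyPoint, stamping all points with the given time.
    const std::vector<dv::TimedKeyPoint> timed = dv::data::fromCvKeypoints(cvPoints, dv::now());

    // ... and back to cv::KeyPoint.
    const std::vector<cv::KeyPoint> roundTrip = dv::data::fromTimedKeyPoints(timed);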

inline cv::Mat depthFrameMap(dv::DepthFrame &frame)

Map a depth frame into an OpenCV Mat, no data copies are performed. The resulting cv::Mat will point to the same underlying data.

This function does not modify the underlying data; the const qualifier is not used since a cv::Mat cannot be const.

Parameters:

frame – Frame to be mapped.

Returns:

Mapped depth frame in cv::Mat with data type of CV_16UC1.

inline cv::Mat depthFrameInMeters(dv::DepthFrame &frame)

Converts the given depth frame into an OpenCV matrix containing depth values in meters.

Resulting cv::Mat will be of floating type and will apply conversion from millimeters to meters. Depth value of 0.0f should be considered invalid.

This function will copy and scale all values into meters.

Parameters:

frame – Depth frame to be converted.

Returns:

A cv::Mat containing scaled depth values in meters.

inline dv::DepthFrame depthFrameFromCvMat(const cv::Mat &depthImage)

Converts the given OpenCV matrix with depth values to DepthFrame.

cv::Mat can contain single-channel floating point containing depth values in meters or single-channel 16-bit unsigned integer values in millimeters. Zero should be used for invalid values.

This function will copy and scale all values into millimeter 16-bit integer representation.

Parameters:

depthImage – cv::Mat containing the depth values.

Returns:

Depth frame containing depth values in 16-bit unsigned integer values representing distance in millimeters.
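
A conversion sketch between the two representations (headers omitted):

    // 16-bit depth image in millimeters; zero marks invalid pixels.
    const cv::Mat depthMillimeters(480, 640, CV_16UC1, cv::Scalar(1500));

    dv::DepthFrame frame      = dv::data::depthFrameFromCvMat(depthMillimeters);
    const cv::Mat depthMeters = dv::data::depthFrameInMeters(frame); // floating point, values around 1.5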

template<std::floating_point Scalar = float>
inline dv::kinematics::Transformation<Scalar> transformFromPose(const dv::Pose &pose)

Convert a pose message into a transformation.

Parameters:

pose – Input pose to be converted.

Returns:

Transformation representing the pose.

template<std::floating_point Scalar = float>
inline dv::Pose poseFromTransformation(const dv::kinematics::Transformation<Scalar> &transform)

Convert a transformation into a pose message.

Parameters:

transform – Input transform.

Returns:

Pose message representing the transform.

namespace generate

Functions

inline cv::Mat sampleImage(const cv::Size &resolution)

Generate a sample image (single channel 8-bit unsigned integer) containing a few gray rectangles on a black background.

Parameters:

resolution – Resolution of the output image.

Returns:

Generated image.

inline dv::EventStore eventLine(const int64_t timestamp, const cv::Point &a, const cv::Point &b, size_t steps = 0)

Generate events along a line between two given end-points.

Parameters:
  • timestamp – Fixed timestamp assigned for all events.

  • a – Starting point.

  • b – Ending point.

  • steps – Number of events generated for the line. If zero is provided, the function uses the Euclidean distance between the points.

Returns:

A batch of events along the line.
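
For example (a sketch; headers omitted):

    // Events along a diagonal line, all stamped with the same timestamp.
    const dv::EventStore line = dv::data::generate::eventLine(dv::now(), cv::Point(0, 0), cv::Point(100, 100));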

inline dv::EventStore eventRectangle(const int64_t timestamp, const cv::Point &tl, const cv::Point &br)

Generate events along the edges of a rectangle defined by the given top-left and bottom-right points.

Parameters:
  • timestamp – Fixed timestamp assigned for all events.

  • tl – Top left coordinate of the rectangle.

  • br – Bottom right coordinate of the rectangle.

Returns:

Event batch containing events at the edges of a given rectangle.

inline dv::EventStore eventTestSet(const int64_t timestamp, const cv::Size &resolution)

Generate an event test set that contains events for a few intersecting rectangle edges.

Parameters:
  • timestamp – Fixed timestamp assigned for all events.

  • resolution – Expected resolution limits for the events.

Returns:

Generated event batch.

inline dv::EventStore uniformlyDistributedEvents(const int64_t timestamp, const cv::Size &resolution, const size_t count, const uint64_t seed = 0)

Generate a batch of uniformly distributed events within the given resolution.

Parameters:
  • timestamp – Fixed timestamp assigned for all events.

  • resolution – Resolution limits.

  • count – Number of events.

  • seed – Seed for the RNG.

Returns:

Generated event batch.

inline dv::EventStore normallyDistributedEvents(const int64_t timestamp, const dv::Point2f &center, const dv::Point2f &stddev, const size_t count, const uint64_t seed = 0)

Generate events normally distributed around a given center coordinates with given standard deviation.

Parameters:
  • timestamp – Timestamp to be assigned to the generated events

  • center – Center coordinates

  • stddev – Standard deviation for each of the axes

  • count – Number of events to generate

  • seed – Seed for the RNG

Returns:

Set of normally distributed events

inline dv::EventStore uniformEventsWithinTimeRange(const int64_t startTime, const dv::Duration duration, const cv::Size &resolution, const int64_t count, const uint64_t seed = 0)

Generate a batch of uniformly distributed (in pixel-space) randomly generated events. The timestamps are generated by monotonically increasing the timestamp within the time duration.

Parameters:
  • startTime – Start timestamp in microseconds.

  • duration – Duration of the generated data.

  • resolution – Pixel space resolution.

  • count – Number of output events.

  • seed – Seed for the RNG.

Returns:

Generated event batch.

inline cv::Mat dvLogo(const cv::Size &size, const bool colored = true, const cv::Scalar &bgColor = dv::visualization::colors::white, const cv::Scalar &pColor = dv::visualization::colors::iniBlue, const cv::Scalar &nColor = dv::visualization::colors::darkGray)

Generate a DV logo using simple drawing methods. Generates in color or grayscale.

Parameters:
  • size – Output dimensions of the drawing

  • colored – Colored output (CV_8UC3) if true, or grayscale (CV_8UC1) otherwise.

Returns:

Image containing DV logo.

inline dv::EventStore imageToEvents(const int64_t timestamp, const cv::Mat &image, const uint8_t positive, const uint8_t negative)

Convert an image into events by matching pixel intensities. The algorithm checks every pixel value in the image against the given positive and negative intensity values and adds corresponding events to the output event store. All other pixel intensity values are ignored.

Parameters:
  • image – Input image for conversion

  • positive – Pixel brightness intensity value to consider the pixel to generate a positive polarity event.

  • negative – Pixel brightness intensity value to consider the pixel to generate a negative polarity event.

Returns:

Generated events.

inline dv::EventStore dvLogoAsEvents(const int64_t timestamp, const cv::Size &resolution)

Generate a DV logo using simple drawing methods. Generates negative polarity events on pixels where the logo is dark and positive polarity events on pixels where the logo is brighter.

Parameters:
  • timestamp – Timestamp assigned to each generated event.

  • resolution – Resolution of the events.

Returns:

Events that can be accumulated / visualized to generate a logo of DV.

inline dv::IMU levelImuMeasurement(const int64_t timestamp)

Generate an IMU measurement corresponding to a camera resting on a stable, level surface. All measurement values are zero, except for the Y axis of the accelerometer, which measures -1.0 G.

Parameters:

timestamp – Timestamp to be assigned to the measurement.

Returns:

Generated IMU measurement.

inline dv::IMU addNoiseToImu(const dv::IMU &measurement, const float accelerometerStddev, const float gyroscopeStddev, const uint64_t seed = 0)

Apply noise to IMU measurements (accelerometer and gyroscope). The noise is modelled as a normal distribution with zero mean and the given standard deviation. The modelled noise is added to the given measurement and a new dv::IMU structure with the added noise is returned.

Parameters:
  • measurement – IMU measurement to add noise to.

  • accelerometerStddev – Accelerometer noise standard deviation.

  • gyroscopeStddev – Gyroscope noise standard deviation.

  • seed – Seed for the RNG.

Returns:

Generated measurement with added noise.

inline dv::IMU levelImuWithNoise(const int64_t timestamp, const float accelerometerStddev = 0.1f, const float gyroscopeStddev = 0.01f, const uint64_t seed = 0)

Generate an IMU measurement that measures a camera being on a stable and level surface with additional measurement noise. The noise is modelled as a normal distribution with 0 mean and given standard deviation.

Parameters:
  • timestamp – Timestamp to be assigned to the measurement.

  • accelerometerStddev – Accelerometer noise standard deviation.

  • gyroscopeStddev – Gyroscope noise standard deviation.

  • seed – Seed for the RNG.

Returns:

Generated IMU measurement.
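
A short sketch generating an ideal and a noisy level measurement (headers omitted):

    // Ideal measurement of a camera on a level surface.
    const dv::IMU ideal = dv::data::generate::levelImuMeasurement(dv::now());

    // The same measurement with default accelerometer and gyroscope noise added.
    const dv::IMU noisy = dv::data::generate::levelImuWithNoise(dv::now());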

namespace depth

Functions

inline std::shared_ptr<cv::StereoMatcher> defaultStereoMatcher()

Create a reasonable default stereo matcher, tailored for low texture images (that are generated by accumulating events) and for faster execution.

The method creates an instance of cv::StereoSGBM with following parameter values:

  • minDisparity = 0

  • numDisparities = 48

  • blockSize = 11 : highest recommended block size; small block sizes generate noise in low-texture regions

  • P1 = 8 * (blockSize ^ 2)

  • P2 = 32 * (blockSize ^ 2) : P1 and P2 are calculated using recommended equations

  • disp12MaxDiff = 0 : disparity is also calculated on the right-left image pair, and any disparities that do not agree are filtered out. This enables strong noise filtering (there can be a lot of noise due to low texture)

  • preFilterCap = cv::StereoBM::PREFILTER_NORMALIZED_RESPONSE : disable Sobel filter preprocessing

  • uniquenessRatio = 15 : this is also an aggressive value for a noise filter

  • speckleWindowSize = 240 : this is also an aggressive value for a speckle noise filter

  • speckleRange = 1 : this is also an aggressive value for a speckle noise filter

  • mode = cv::StereoSGBM::MODE_SGBM_3WAY : Fastest disparity calculation mode

Returns:

Stereo semi global block matching algorithm with reasonable defaults for low texture images.
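
A usage sketch; the dv::depth qualification is an assumption based on this listing and the input frames are placeholders (headers omitted):

    // Accumulated left/right frames (8-bit, same resolution) from a stereo event camera.
    const cv::Mat left(480, 640, CV_8UC1, cv::Scalar(0));
    const cv::Mat right(480, 640, CV_8UC1, cv::Scalar(0));

    const std::shared_ptr<cv::StereoMatcher> matcher = dv::depth::defaultStereoMatcher();

    cv::Mat disparity;
    matcher->compute(left, right, disparity); // CV_16S disparities, scaled by 16 as usual for cv::StereoSGBM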

namespace exceptions

Typedefs

using DirectoryError = internal::Exception_<info::DirectoryError>
using DirectoryNotFound = internal::Exception_<info::DirectoryNotFound, DirectoryError>
using FileError = internal::Exception_<info::FileError>
using FileOpenError = internal::Exception_<info::FileOpenError, FileError>
using FileReadError = internal::Exception_<info::FileReadError, FileError>
using FileWriteError = internal::Exception_<info::FileWriteError, FileError>
using FileNotFound = internal::Exception_<info::FileNotFound, FileError>
using AedatFileError = internal::Exception_<info::AedatFileError, FileError>
using AedatVersionError = internal::Exception_<info::AedatVersionError, AedatFileError>
using AedatFileParseError = internal::Exception_<info::AedatFileParseError, AedatFileError>
using EndOfFile = internal::Exception_<info::EndOfFile>
using RuntimeError = internal::Exception_<info::RuntimeError>
using BadAlloc = internal::Exception_<info::BadAlloc>
using OutOfRange = internal::Exception_<info::OutOfRange>
using LengthError = internal::Exception_<info::LengthError>
template<class TYPE>
using InvalidArgument = internal::Exception_<info::InvalidArgument<TYPE>>
using NullPointer = internal::Exception_<info::NullPointer>
using IOError = internal::Exception_<info::IOError>
using InputError = internal::Exception_<info::InputError, IOError>
using OutputError = internal::Exception_<info::OutputError, IOError>
using TypeError = internal::Exception_<info::TypeError>
namespace info
namespace internal

Functions

template<HasCustomExceptionFormatter T>
std::string format(const typename T::Info &info)
namespace features

Typedefs

using ImagePyrFeatureDetector = FeatureDetector<dv::features::ImagePyramid, cv::Feature2D>
using ImageFeatureDetector = FeatureDetector<dv::Frame, cv::Feature2D>
using EventFeatureBlobDetector = FeatureDetector<dv::EventStore, EventBlobDetector>
namespace internal

This class implements the Arc* corner detector presented in the following paper: https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/277131/RAL2018-camera-ready.pdf

Template Parameters:
  • radius1 – radius of the first circle on which the timestamps are checked for corner-ness

  • radius2 – radius of the second circle on which the timestamps are checked for corner-ness

namespace imgproc

Functions

template<typename T>
inline auto cvMatStepToEigenStride(const cv::MatStep &step)

Conversion from cv::MatStep to Eigen::Stride

cv::MatStep stores steps in units of bytes, as the underlying matrix is always stored in uint8_t arrays, which are then interpreted at run-time based on the type (e.g. CV_8U). Contrary to this, Eigen stores matrices in arrays of a type that is determined at compile-time based on a template argument, and therefore stores its strides in units of pointer increments. The conversion between the two can be computed by dividing by or multiplying with sizeof(T).

Template Parameters:

T – the type of the scalars stored in the matrices

Parameters:

step – the step (stride) in the matrix in units of bytes

Returns:

the corresponding Eigen::Stride for the cv::MatStep value provided

template<typename T>
inline auto cvMatToEigenMap(const cv::Mat &mat)

Maps an Eigen::Map onto a cv::Mat object. This provides a view into the internal storage of the cv::Mat; no data is copied.

Template Parameters:

T – the type of the scalars stored in the matrices

Parameters:

mat – the cv::Mat onto which an Eigen::Map should be mapped

Returns:

the view into the cv::Mat via an Eigen::Map object

template<typename T>
inline auto cvMatToEigenMap(cv::Mat &mat)

Maps an Eigen::Map onto a cv::Mat object. This provides a view into the internal storage of the cv::Mat; no data is copied.

Template Parameters:

T – the type of the scalars stored in the matrices

Parameters:

mat – the cv::Mat onto which an Eigen::Map should be mapped

Returns:

the view into the cv::Mat via an Eigen::Map object
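
A zero-copy mapping sketch; the dv::imgproc qualification is an assumption based on this listing (headers omitted):

    cv::Mat image(480, 640, CV_8UC1, cv::Scalar(128));

    // Eigen view onto the cv::Mat storage; no data is copied.
    auto map = dv::imgproc::cvMatToEigenMap<uint8_t>(image);
    map(0, 0) = 255; // writes through to image.at<uint8_t>(0, 0)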

template<typename T>
inline auto L1Distance(const Eigen::Block<T, Eigen::Dynamic, Eigen::Dynamic> &patch1, const Eigen::Block<T, Eigen::Dynamic, Eigen::Dynamic> &patch2)

Computes the L1 distance between two blocks (patches) of Eigen matrices.

Template Parameters:

T – The type of the underlying matrix

Parameters:
  • patch1 – the first patch

  • patch2 – the second patch

Returns:

the L1 distance between the two patches

template<typename T, int32_t MAP_OPTIONS, typename STRIDE>
inline auto L1Distance(const Eigen::Map<T, MAP_OPTIONS, STRIDE> &m1, const Eigen::Map<T, MAP_OPTIONS, STRIDE> &m2)

Computes the L1 distance between two matrices

See also

Eigen::Map::MapOptions

Template Parameters:
  • T – The type of the underlying matrix

  • MAP_OPTIONS – The options for the underlying matrix.

  • STRIDE – The stride of the underlying matrix

Parameters:
  • m1 – the first matrix

  • m2 – the second matrix

Returns:

the L1 distance between the two matrices

template<typename T>
inline auto L1Distance(const Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic> &m1, const Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic> &m2)

Computes the L1 distance between two matrices

Template Parameters:

T – The type of the underlying matrix

Parameters:
  • m1 – the first matrix

  • m2 – the second matrix

Returns:

the L1 distance between the two matrices

inline auto L1Distance(const cv::Mat &m1, const cv::Mat &m2)

Computes the L1 distance between two matrices

Parameters:
  • m1 – the first matrix

  • m2 – the second matrix

Returns:

the L1 distance between the two matrices
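
For example (a sketch; the dv::imgproc qualification is an assumption and the result assumes an unnormalized sum of absolute differences; headers omitted):

    const cv::Mat a(8, 8, CV_8UC1, cv::Scalar(10));
    const cv::Mat b(8, 8, CV_8UC1, cv::Scalar(12));

    const auto distance = dv::imgproc::L1Distance(a, b); // 8 * 8 * |10 - 12| = 128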

template<typename T>
inline auto pearsonCorrelation(const Eigen::Block<T, Eigen::Dynamic, Eigen::Dynamic> &patch1, const Eigen::Block<T, Eigen::Dynamic, Eigen::Dynamic> &patch2)

Computes the Pearson Correlation between two blocks (patches) of Eigen matrices.

Template Parameters:

T – The type of the underlying matrix

Parameters:
  • patch1 – the first patch

  • patch2 – the second patch

Returns:

the Pearson Correlation between the two patches

template<typename T, int32_t MAP_OPTIONS, typename STRIDE>
inline auto pearsonCorrelation(const Eigen::Map<T, MAP_OPTIONS, STRIDE> &m1, const Eigen::Map<T, MAP_OPTIONS, STRIDE> &m2)

Computes the Pearson Correlation between two matrices

See also

Eigen::Map::MapOptions

Template Parameters:
  • T – The type of the underlying matrix

  • MAP_OPTIONS – The options for the underlying matrix.

  • STRIDE – The stride of the underlying matrix

Parameters:
  • m1 – the first matrix

  • m2 – the second matrix

Returns:

the Pearson Correlation between the two matrices

template<typename T>
inline auto pearsonCorrelation(const Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic> &m1, const Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic> &m2)

Computes the Pearson Correlation between two matrices

Template Parameters:

T – The type of the underlying matrix

Parameters:
  • m1 – the first matrix

  • m2 – the second matrix

Returns:

the Pearson Correlation between the two matrices

inline auto pearsonCorrelation(const cv::Mat &m1, const cv::Mat &m2)

Computes the Pearson Correlation between two matrices

Parameters:
  • m1 – the first matrix

  • m2 – the second matrix

Returns:

the Pearson Correlation between the two matrices

template<typename T>
inline auto cosineDistance(const Eigen::Block<T, Eigen::Dynamic, Eigen::Dynamic> &patch1, const Eigen::Block<T, Eigen::Dynamic, Eigen::Dynamic> &patch2)

Computes the Cosine Distance between two blocks (patches) of Eigen matrices.

Template Parameters:

T – The type of the underlying matrix

Parameters:
  • patch1 – the first patch

  • patch2 – the second patch

Returns:

the Cosine Distance between the two patches

template<typename T>
inline auto cosineDistance(const Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic> &m1, const Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic> &m2)

Computes the Cosine Distance between two matrices

Template Parameters:

T – The type of the underlying matrix

Parameters:
  • m1 – the first matrix

  • m2 – the second matrix

Returns:

the Cosine Distance between the two matrices

template<typename T, int32_t MAP_OPTIONS, typename STRIDE>
inline auto cosineDistance(const Eigen::Map<T, MAP_OPTIONS, STRIDE> &m1, const Eigen::Map<T, MAP_OPTIONS, STRIDE> &m2)

Computes the Cosine Distance between two matrices

See also

Eigen::Map::MapOptions

Template Parameters:
  • T – The type of the underlying matrix

  • MAP_OPTIONS – The options for the underlying matrix.

  • STRIDE – The stride of the underlying matrix

Parameters:
  • m1 – the first matrix

  • m2 – the second matrix

Returns:

the Cosine Distance between the two matrices

inline auto cosineDistance(const cv::Mat &m1, const cv::Mat &m2)

Computes the Cosine Distance between two matrices

Parameters:
  • m1 – the first matrix

  • m2 – the second matrix

Returns:

the Cosine Distance between the two matrices

namespace imu
namespace io

Typedefs

using DataReadVariant = std::variant<dv::EventStore, dv::Frame, std::vector<dv::IMU>, std::vector<dv::Trigger>, DataReadHandler::OutputFlag>

Enums

enum class ModeFlags : uint8_t

Values:

enumerator READ
enumerator WRITE
enum class WriteFlags : uint8_t

Values:

enumerator NONE
enumerator TRUNCATE
enumerator APPEND
enum class SeekFlags : int

Values:

enumerator START
enumerator CURRENT
enumerator END

Functions

inline ModeFlags operator|(const ModeFlags lhs, const ModeFlags rhs)
inline ModeFlags &operator|=(ModeFlags &lhs, const ModeFlags rhs)
inline bool operator&(const ModeFlags lhs, const ModeFlags rhs)
inline WriteFlags operator|(const WriteFlags lhs, const WriteFlags rhs)
inline WriteFlags &operator|=(WriteFlags &lhs, const WriteFlags rhs)
inline bool operator&(const WriteFlags lhs, const WriteFlags rhs)
namespace camera

Typedefs

using CameraPtr = std::unique_ptr<CameraInputBase>
using SyncCameraPtr = std::unique_ptr<SyncCameraInputBase>

Enums

enum class CameraModel : uint8_t

Values:

enumerator DVS128
enumerator DAVIS
enumerator DVXPLORER
enumerator DVXPLORER_M
enumerator DVXPLORER_S
enumerator DAVIS_GEN2
enumerator DVXPLORER_GEN2
enum class USBDeviceType : uint8_t

Values:

enumerator FX2
enumerator FX3_MB
enumerator FX3_BLUE
enumerator FX3_RED
enumerator CX3_MIPI
enumerator FX3_GEN2

Functions

inline std::ostream &operator<<(std::ostream &os, const CameraModel &var)
inline std::ostream &operator<<(std::ostream &os, const USBDeviceType &var)
static std::vector<USBDevice::DeviceDescriptor> discover()

Discover all compatible cameras connected to this system.

Returns:

list of device descriptor structures

static CameraPtr open(const USBDevice::DeviceDescriptor &descriptor, const USBDevice::LogLevel deviceLogLevel = USBDevice::LogLevel::LVL_WARNING)

Open the device with the specified descriptor structure. Throws if the described device cannot be opened. The returned generic pointer only supports the functionality common to all cameras. For more specific functionality, open the camera directly via its class constructor, or down-cast the pointer with dynamic_cast.

Parameters:
  • descriptor – device descriptor to try opening

  • deviceLogLevel – initial log-level

Returns:

generic pointer to camera

static CameraPtr open(const std::string_view serialNumber, const USBDevice::LogLevel deviceLogLevel = USBDevice::LogLevel::LVL_WARNING)

Open the device with the specified serial number. Throws if the described device cannot be opened. The returned generic pointer only supports the functionality common to all cameras. For more specific functionality, open the camera directly via its class constructor, or down-cast the pointer with dynamic_cast.

Parameters:
  • serialNumber – device serial number to try opening

  • deviceLogLevel – initial log-level

Returns:

generic pointer to camera

static CameraPtr open(const USBDevice::LogLevel deviceLogLevel = USBDevice::LogLevel::LVL_WARNING)

Open the first device that can be found. Throws if the device cannot be opened. The returned generic pointer only supports the functionality common to all cameras. For more specific functionality, open the camera directly via its class constructor, or down-cast the pointer with dynamic_cast.

Parameters:

deviceLogLevel – initial log-level

Returns:

generic pointer to camera
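
Putting discover() and open() together; a sketch that lists the attached devices and opens the first one. It assumes the functions are reachable as dv::io::camera::... and that <dv-processing/io/camera/discovery.hpp> (see the file index below) provides them:

    #include <dv-processing/io/camera/discovery.hpp> // assumed include path

    #include <cstdlib>
    #include <iostream>

    int main() {
        // Enumerate all compatible cameras currently connected to this system.
        const auto devices = dv::io::camera::discover();
        if (devices.empty()) {
            std::cout << "No compatible camera found" << std::endl;
            return EXIT_FAILURE;
        }

        // Open the first discovered device through the generic interface.
        // The returned CameraPtr only exposes functionality common to all cameras.
        const auto camera = dv::io::camera::open(devices.front());
        return EXIT_SUCCESS;
    }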

static SyncCameraPtr openSync(const USBDevice::DeviceDescriptor &descriptor, const USBDevice::LogLevel deviceLogLevel = USBDevice::LogLevel::LVL_WARNING)

Open the device with the specified descriptor structure. The device must support clock synchronization. Throws if the described device cannot be opened. The returned generic pointer only supports the functionality common to all cameras. For more specific functionality, open the camera directly via its class constructor, or down-cast the pointer with dynamic_cast.

Parameters:
  • descriptor – device descriptor to try opening

  • deviceLogLevel – initial log-level

Returns:

generic pointer to camera

static SyncCameraPtr openSync(const std::string_view serialNumber, const USBDevice::LogLevel deviceLogLevel = USBDevice::LogLevel::LVL_WARNING)

Open the device with the specified serial number. The device must support clock synchronization. Throws if the described device cannot be opened. The returned generic pointer only supports the functionality common to all cameras. For more specific functionality, open the camera directly via its class constructor, or down-cast the pointer with dynamic_cast.

Parameters:
  • serialNumber – device serial number to try opening

  • deviceLogLevel – initial log-level

Returns:

generic pointer to camera

inline void synchronizeAny(const std::span<SyncCameraInputBase*> cameras)

Synchronize any number of cameras with each other. Only one of them can be the master clock camera.

Parameters:

cameras – cameras to synchronize.

inline void synchronizeAnyTwo(SyncCameraInputBase *first, SyncCameraInputBase *second)

Synchronize two cameras with each other. Only one of them can be the master clock camera.

Parameters:
  • first – camera to synchronize.

  • second – camera to synchronize.

inline void synchronizeAnyTwo(SyncCameraInputBase &first, SyncCameraInputBase &second)

Synchronize two cameras with each other. Only one of them can be the master clock camera.

Parameters:
  • first – camera to synchronize.

  • second – camera to synchronize.

inline void synchronizeAnyTwo(const SyncCameraPtr &first, const SyncCameraPtr &second)

Synchronize two cameras with each other. Only one of them can be the master clock camera.

Parameters:
  • first – camera to synchronize.

  • second – camera to synchronize.
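
A sketch of synchronizing two cameras opened through the synchronization-aware interface, under the same namespace and header assumptions as the previous example; the serial numbers are placeholders:

    #include <dv-processing/io/camera/discovery.hpp> // assumed include path

    int main() {
        // Open two cameras that support hardware clock synchronization.
        const auto first  = dv::io::camera::openSync("00000001"); // placeholder serial
        const auto second = dv::io::camera::openSync("00000002"); // placeholder serial

        // Synchronize their clocks; only one of them acts as the master clock camera.
        dv::io::camera::synchronizeAnyTwo(first, second);
        return 0;
    }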

namespace imu

Enums

enum class ImuModel : uint8_t

List of supported IMU models.

Values:

enumerator IMU_NONE
enumerator IMU_INVENSENSE_6050_6150
enumerator IMU_INVENSENSE_6500_9250
enumerator IMU_BOSCH_BMI_160
enumerator IMU_BOSCH_BMI_270
enum class InvensenseAccelRange : uint8_t

List of accelerometer scale settings for InvenSense IMUs.

Values:

enumerator RANGE_2G
enumerator RANGE_4G
enumerator RANGE_8G
enumerator RANGE_16G
enum class InvensenseGyroRange : uint8_t

List of gyroscope scale settings for InvenSense IMUs.

Values:

enumerator RANGE_250DPS
enumerator RANGE_500DPS
enumerator RANGE_1000DPS
enumerator RANGE_2000DPS
enum class BoschBMI160AccelRange : uint8_t

List of accelerometer scale settings for Bosch IMU.

Values:

enumerator RANGE_2G
enumerator RANGE_4G
enumerator RANGE_8G
enumerator RANGE_16G
enum class BoschBMI160AccelDataRate : uint8_t

List of accelerometer data rate settings for Bosch IMU.

Values:

enumerator RATE_12_5HZ
enumerator RATE_25HZ
enumerator RATE_50HZ
enumerator RATE_100HZ
enumerator RATE_200HZ
enumerator RATE_400HZ
enumerator RATE_800HZ
enumerator RATE_1600HZ
enum class BoschBMI160AccelFilter : uint8_t

List of accelerometer filter settings for Bosch IMU.

Values:

enumerator FILTER_OSR4
enumerator FILTER_OSR2
enumerator FILTER_NORMAL
enum class BoschBMI160GyroRange : uint8_t

List of gyroscope scale settings for Bosch IMU.

Values:

enumerator RANGE_2000DPS
enumerator RANGE_1000DPS
enumerator RANGE_500DPS
enumerator RANGE_250DPS
enumerator RANGE_125DPS
enum class BoschBMI160GyroDataRate : uint8_t

List of gyroscope data rate settings for Bosch IMU.

Values:

enumerator RATE_25HZ
enumerator RATE_50HZ
enumerator RATE_100HZ
enumerator RATE_200HZ
enumerator RATE_400HZ
enumerator RATE_800HZ
enumerator RATE_1600HZ
enumerator RATE_3200HZ
enum class BoschBMI160GyroFilter : uint8_t

List of gyroscope filter settings for Bosch IMU.

Values:

enumerator FILTER_OSR4
enumerator FILTER_OSR2
enumerator FILTER_NORMAL

Functions

inline std::ostream &operator<<(std::ostream &os, const ImuModel &var)
static float boschBMI160CalculateIMUAccelScale(const uint8_t imuAccelScale)
static float boschBMI160CalculateIMUGyroScale(const uint8_t imuGyroScale)
static int16_t safeFlip16(const int16_t value)
namespace parser

Typedefs

using ParserLoggerCallback = std::function<void(bool debug, std::string_view message)>
using ParserTimeInitCallback = std::function<void()>
using ParserDataCommitCallback = std::function<void(ParsedData buffers)>
namespace DAVIS

Enums

enum class SensorModel

Values:

enumerator DAVIS240A
enumerator DAVIS240B
enumerator DAVIS240C
enumerator DAVIS346
enumerator DAVIS640
enumerator CDAVIS
enum class ColorMode

Values:

enumerator DEFAULT
enumerator GRAYSCALE
enumerator ORIGINAL
enumerator ORIGINAL_SPLIT

Functions

inline std::ostream &operator<<(std::ostream &os, const SensorModel &var)
inline std::ostream &operator<<(std::ostream &os, const ColorMode &var)
namespace DVS128

Variables

static constexpr int16_t WIDTH = {128}
static constexpr int16_t HEIGHT = {128}
namespace DVXplorer
namespace S5K231Y

Variables

static constexpr int16_t WIDTH = {640}
static constexpr int16_t HEIGHT = {480}
namespace S5KRC1S

Variables

static constexpr int16_t WIDTH = {960}
static constexpr int16_t HEIGHT = {720}
namespace compression

Functions

static std::unique_ptr<CompressionSupport> createCompressionSupport(const CompressionType type)
static std::unique_ptr<DecompressionSupport> createDecompressionSupport(const CompressionType type)
namespace network

Typedefs

using asioUNIX = asioLocal::stream_protocol
using asioTCP = asioIP::tcp
namespace encrypt

Functions

inline asioSSL::context createEncryptionContext(asioSSL::context::method method, const std::filesystem::path &certificateChain, const std::filesystem::path &privateKey, const std::optional<std::filesystem::path> &CAFile = std::nullopt)

Create an encryption context.

Parameters:
  • method – Encryption mode.

  • certificateChain – Path to certificate chain.

  • privateKey – Path to a private key.

  • CAFile – Path to the CA file. If std::nullopt is provided, peer verification is disabled. If an empty path is given, the context uses CA certificates from the default locations and peers are verified.

Returns:

Encryption context.

inline asioSSL::context defaultEncryptionServer(const std::filesystem::path &certificateChain, const std::filesystem::path &privateKey, const std::filesystem::path &CAFile)

Create an encryption server context with a default configuration: the TLSv1.2 protocol, the provided certificate chain, the server private key, and a certificate authority (CA) file that is used to verify the client certificate.

Parameters:
  • certificateChain – Server certificate chain.

  • privateKey – Server private key.

  • CAFile – CAFile for client verification.

Returns:

SSL context that can be used for encrypted network connections.

inline asioSSL::context defaultEncryptionClient(const std::filesystem::path &certificateChain, const std::filesystem::path &privateKey)

Create an encrypted client context with a default configuration: the TLSv1.2 protocol, the provided client certificate chain, and the client private key. The server is always considered trusted and its certificate is not verified; the server, however, verifies the client and can reject the connection during the handshake if certificate verification fails.

Parameters:
  • certificateChain – Client certificate chain.

  • privateKey – Client private key.

Returns:

SSL context that can be used with encrypted network connections.
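
A sketch of building the default server and client contexts. It assumes the helpers are reachable under dv::io::network::encrypt and that <dv-processing/io/network/encrypt.hpp> is the providing header (both assumptions); all file paths are placeholders:

    #include <dv-processing/io/network/encrypt.hpp> // assumed include path

    int main() {
        // Server side: certificate chain, private key and a CA file used to verify clients.
        auto serverContext = dv::io::network::encrypt::defaultEncryptionServer(
            "/path/to/server-chain.pem", "/path/to/server-key.pem", "/path/to/ca.pem");

        // Client side: the client presents its certificate chain and key; the server
        // is trusted implicitly and its certificate is not verified.
        auto clientContext = dv::io::network::encrypt::defaultEncryptionClient(
            "/path/to/client-chain.pem", "/path/to/client-key.pem");

        // Both contexts can then be handed to the encrypted network reader/writer classes.
        return 0;
    }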

namespace support

Typedefs

using TypeResolver = dv::std_function_exact<const dv::types::Type*(const uint32_t)>
using VariantValueOwning = std::variant<bool, int32_t, int64_t, float, double, std::string>

Functions

static inline const dv::types::Type *defaultTypeResolver(const uint32_t typeId)
template<class PacketType>
inline std::shared_ptr<dv::types::TypedObject> packetToObject(PacketType &&packet, const TypeResolver &resolver = defaultTypeResolver)

Variables

static constexpr std::string_view AEDAT4_FILE_EXTENSION = {".aedat4"}
static constexpr std::string_view AEDAT4_HEADER_VERSION = {"#!AER-DAT4.0\r\n"}
namespace kinematics

Typedefs

typedef LinearTransformer<float> LinearTransformerf

LinearTransformer using single precision float operations

typedef LinearTransformer<double> LinearTransformerd

LinearTransformer using double precision float operations

typedef Transformation<float> Transformationf

Transformation using single precision float operations

typedef Transformation<double> Transformationd

Transformation using double precision float operations

namespace measurements
namespace noise

Enums

enum class FrequencyFilterType

Values:

enumerator PASS
enumerator CUT

Functions

inline std::ostream &operator<<(std::ostream &os, const FrequencyFilterType &var)
namespace optimization
namespace packets

Enums

enum class Timestamp

Values:

enumerator START
enumerator END

Functions

template<class ElementType>
inline int64_t getTimestamp(const ElementType &element)

Template method that retrieves the timestamp from a Timestamped structure.

Template Parameters:

ElementType – Type of the element

Parameters:

element – Instance of the element

Returns:

Timestamp of this element

template<class PacketType>
inline bool isPacketEmpty(const PacketType &packet)

Check if a packet is empty.

Template Parameters:

PacketType

Parameters:

packet

Returns:

True if the given packet is empty, false otherwise.

template<class PacketType>
inline size_t getPacketSize(const PacketType &packet)

Get packet size. This utility template method can be used to generically get the size of an EventStore, a data packet, or any container satisfying the iterable concept.

Template Parameters:

PacketType

Parameters:

packet

Returns:

Size of the given packet

template<class PacketType>
inline auto getPacketBegin(const PacketType &packet)

Generic getter of a begin iterator of a packet.

Template Parameters:

PacketType

Parameters:

packet

Returns:

Begin iterator of the given packet.

template<class PacketType>
inline auto getPacketEnd(const PacketType &packet)

Generic getter of an end iterator of a packet.

Template Parameters:

PacketType

Parameters:

packet

Returns:

End iterator of the given packet.

template<Timestamp startTime, class PacketType>
inline int64_t getPacketTimestamp(const PacketType &packet)

Retrieve the start or end timestamp of a packet; the choice is made at compile time via the template parameter.

Template Parameters:
  • startTime – Use enum to select whether you want start or end timestamp.

  • PacketType – Packet type, inferred from argument type.

Parameters:

packet – Non-empty data packet.

Throws:

InvalidArgument – exception is thrown if the packet is empty.

Returns:

Timestamp of the first or last element in the packet.

template<class PacketType>
inline dv::TimeWindow getPacketTimeWindow(const PacketType &packet)

Get time window for a given packet.

Template Parameters:

PacketType

Parameters:

packet – Non-empty data packet.

Throws:

InvalidArgument – exception is thrown if the packet is empty.

Returns:

Time window with start and end timestamps of this packet.
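
The packet helpers are templates that accept any supported packet type. A generic sketch, assuming the utilities live under a dv::packets namespace and are provided by <dv-processing/core/utils.hpp> (both assumptions); the function is written as a template so no concrete packet needs to be constructed here:

    #include <dv-processing/core/utils.hpp> // assumed include path

    #include <cstddef>
    #include <cstdint>
    #include <iostream>

    // Print a short summary of any packet type supported by the utilities above.
    template<class PacketType>
    void printPacketSummary(const PacketType &packet) {
        if (dv::packets::isPacketEmpty(packet)) {
            std::cout << "empty packet" << std::endl;
            return;
        }

        // Number of elements in the packet (events, triggers, IMU samples, ...).
        const std::size_t size = dv::packets::getPacketSize(packet);

        // Start and end timestamps, selected at compile time via the Timestamp enum.
        const int64_t start = dv::packets::getPacketTimestamp<dv::packets::Timestamp::START>(packet);
        const int64_t end   = dv::packets::getPacketTimestamp<dv::packets::Timestamp::END>(packet);

        std::cout << size << " elements between " << start << " and " << end << std::endl;
    }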

namespace types

Typedefs

using PackFuncPtr = std::add_pointer_t<uint32_t(void *toFlatBufferBuilder, const void *fromObject)>
using UnpackFuncPtr = std::add_pointer_t<void(void *toObject, const void *fromFlatBuffer)>
using ConstructPtr = std::add_pointer_t<void*(const size_t sizeOfObject)>
using DestructPtr = std::add_pointer_t<void(void *object)>
using TimeElementExtractorPtr = std::add_pointer_t<void(const void *object, TimeElementExtractor *rangeOut)>
using TimeRangeExtractorPtr = std::add_pointer_t<void(void *toObject, const void *fromObject, const TimeElementExtractor *rangeIn, uint32_t *commitNowOut, uint32_t *exceedsTimeRangeOut)>

Functions

constexpr uint32_t IdentifierStringToId(const std::string_view id) noexcept
constexpr std::array<char, 5> IdToIdentifierString(const uint32_t id) noexcept
template<typename ObjectAPIType>
inline uint32_t Packer(void *toFlatBufferBuilder, const void *fromObject)
template<typename ObjectAPIType>
inline void Unpacker(void *toObject, const void *fromFlatBuffer)
template<typename ObjectAPIType, typename SubObjectAPIType>
inline void TimeElementExtractorDefault(const void *object, TimeElementExtractor *rangeOut) noexcept
template<typename ObjectAPIType, typename SubObjectAPIType>
inline void TimeRangeExtractorDefault(void *toObject, const void *fromObject, const TimeElementExtractor *rangeIn, uint32_t *commitNowOut, uint32_t *exceedsTimeRangeAndKeepPacketOut)
template<typename ObjectAPIType, typename SubObjectAPIType>
constexpr Type makeTypeDefinition()
namespace visualization
namespace colors

Functions

inline cv::Scalar someNeonColor(const int32_t someNumber)

Variables

static const cv::Scalar black = cv::Scalar(0, 0, 0)
static const cv::Scalar darkGray = cv::Scalar(43, 43, 43)
static const cv::Scalar gray = cv::Scalar(128, 128, 128)
static const cv::Scalar silver = cv::Scalar(192, 192, 192)
static const cv::Scalar white = cv::Scalar(255, 255, 255)
static const cv::Scalar red = cv::Scalar(0, 0, 255)
static const cv::Scalar lime = cv::Scalar(0, 255, 0)
static const cv::Scalar blue = cv::Scalar(255, 0, 0)
static const cv::Scalar yellow = cv::Scalar(0, 255, 255)
static const cv::Scalar cyan = cv::Scalar(255, 255, 0)
static const cv::Scalar magenta = cv::Scalar(255, 0, 255)
static const cv::Scalar maroon = cv::Scalar(0, 0, 128)
static const cv::Scalar green = cv::Scalar(0, 128, 0)
static const cv::Scalar navy = cv::Scalar(128, 0, 0)
static const cv::Scalar iniBlue = cv::Scalar(183, 93, 0)
static const std::vector<cv::Scalar> neonPalette = {cv::Scalar(255, 111, 0), cv::Scalar(239, 244, 19), cv::Scalar(0, 255, 104), cv::Scalar(0, 255, 250), cv::Scalar(0, 191, 255), cv::Scalar(0, 191, 255), cv::Scalar(92, 0, 255)}
namespace dv_capture_node
namespace dv_runtime
namespace flatbuffers
namespace fmt
namespace std
file calibration_set.hpp
#include “../core/utils.hpp
#include <boost/algorithm/string.hpp>
#include <boost/property_tree/json_parser.hpp>
#include <boost/property_tree/ptree.hpp>
#include <opencv2/core.hpp>
#include <iostream>
#include <map>
#include <regex>
#include <vector>
file camera_calibration.hpp
#include “../camera_geometry.hpp
#include <Eigen/Core>
#include <boost/property_tree/ptree.hpp>
#include <opencv2/core.hpp>
#include <optional>
#include <span>
file imu_calibration.hpp
#include “camera_calibration.hpp
file stereo_calibration.hpp
#include “camera_calibration.hpp
file camera_geometry.hpp
#include “../core/core.hpp
#include <Eigen/Core>
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <opencv2/core/eigen.hpp>
#include <cmath>
#include <span>
#include <vector>
file stereo_geometry.hpp
#include “../core/utils.hpp
#include “camera_geometry.hpp
#include <opencv2/imgproc.hpp>
file mean_shift.hpp
file kernel.hpp
#include <cmath>
#include <concepts>
file kd_tree.hpp
file eigen_matrix_adaptor.hpp
#include “kernel.hpp
#include <Eigen/Dense>
#include <optional>
#include <random>
#include <vector>
file eigen_matrix_adaptor.hpp
#include “../../external/nanoflann/nanoflann.hpp”
#include <Eigen/Dense>
#include <memory>
file event_store_adaptor.hpp
#include “kernel.hpp
#include <optional>
#include <random>
#include <vector>
file event_store_adaptor.hpp
#include “../../external/nanoflann/nanoflann.hpp”
#include “../../core/core.hpp
#include <opencv2/core.hpp>
#include <memory>
file concepts.hpp
#include “../data/event_base.hpp
#include “../data/frame_base.hpp
#include “../data/imu_base.hpp
#include “../data/pose_base.hpp
#include “../data/trigger_base.hpp
#include <Eigen/Core>
#include <opencv2/core.hpp>
#include <concepts>
#include <iterator>
#include <type_traits>
file core.hpp
#include “../data/event_base.hpp
#include “../data/frame_base.hpp
#include “stream_slicer.hpp
#include “utils.hpp
#include <Eigen/Dense>
#include <opencv2/core.hpp>
#include <opencv2/core/eigen.hpp>
#include <iostream>
#include <map>
#include <numeric>
#include <optional>

Functions

inline std::ostream &operator<<(std::ostream &os, const cv::Size &var)
inline std::ostream &operator<<(std::ostream &os, const cv::Point &var)
inline std::ostream &operator<<(std::ostream &os, const cv::Rect &var)
file dvassert.hpp
#include <boost/stacktrace.hpp>
#include “concepts.hpp
#include <fmt/ranges.h>
#include <fmt/std.h>
#include <cstdlib>
#include <filesystem>
#include <source_location>
#include <string_view>
file event.hpp
#include “core.hpp
#include “filters.hpp
file event_color.hpp
#include “../core/utils.hpp
#include “../data/event_base.hpp
file filters.hpp
#include “../core/frame.hpp
file frame.hpp
#include “frame/accumulator.hpp
file accumulator.hpp
#include “accumulator_base.hpp
file accumulator_base.hpp
#include “../core.hpp
file edge_map_accumulator.hpp
#include “accumulator_base.hpp
file multi_stream_slicer.hpp
#include “../data/frame_base.hpp
#include “../data/imu_base.hpp
#include “../data/trigger_base.hpp
#include “core.hpp
#include “stream_slicer.hpp
#include <unordered_map>
#include <variant>
file stereo_event_stream_slicer.hpp
#include “core.hpp
file stream_slicer.hpp
#include “concepts.hpp
#include “time_window.hpp
#include “utils.hpp
#include <functional>
#include <map>
file time.hpp
#include <chrono>
file time_window.hpp
#include “time.hpp
file boost_geometry_interop.hpp
#include “bounding_box_base.hpp
#include “event_base.hpp
#include “timed_keypoint_base.hpp
#include <boost/geometry/core/cs.hpp>
#include <boost/geometry/geometries/register/box.hpp>
#include <boost/geometry/geometries/register/point.hpp>
#include <boost/geometry/geometry.hpp>
#include <opencv2/core.hpp>
file bounding_box_base.hpp
#include “../external/flatbuffers/flatbuffers.h”
#include <fmt/chrono.h>
#include <fmt/format.h>
#include <fmt/ostream.h>
#include <fmt/ranges.h>

Variables

VT_TIMESTAMP   = 4
VT_TOPLEFTX   = 6
VT_TOPLEFTY   = 8
VT_BOTTOMRIGHTX   = 10
VT_BOTTOMRIGHTY   = 12
VT_CONFIDENCE   = 14
file depth_event_base.hpp
#include “../external/flatbuffers/flatbuffers.h”
#include <fmt/chrono.h>
#include <fmt/format.h>
#include <fmt/ostream.h>
#include <fmt/ranges.h>
file depth_frame_base.hpp
#include “../external/flatbuffers/flatbuffers.h”
#include <fmt/chrono.h>
#include <fmt/format.h>
#include <fmt/ostream.h>
#include <fmt/ranges.h>

Variables

VT_TIMESTAMP   = 4
VT_SIZEX   = 6
VT_SIZEY   = 8
VT_MINDEPTH   = 10
VT_MAXDEPTH   = 12
VT_STEP   = 14
file event_base.hpp
#include “../external/flatbuffers/flatbuffers.h”
#include <fmt/chrono.h>
#include <fmt/format.h>
#include <fmt/ostream.h>
#include <fmt/ranges.h>
file frame_base.hpp
#include “../external/flatbuffers/flatbuffers.h”
#include “../core/time.hpp
#include <fmt/chrono.h>
#include <fmt/format.h>
#include <fmt/ostream.h>
#include <fmt/ranges.h>
#include <opencv2/core/mat.hpp>
#include <ostream>

Variables

VT_TIMESTAMP   = 4
VT_TIMESTAMPSTARTOFFRAME   = 6
VT_TIMESTAMPENDOFFRAME   = 8
VT_TIMESTAMPSTARTOFEXPOSURE   = 10
VT_TIMESTAMPENDOFEXPOSURE   = 12
VT_FORMAT   = 14
VT_SIZEX   = 16
VT_SIZEY   = 18
VT_POSITIONX   = 20
VT_POSITIONY   = 22
VT_PIXELS   = 24
VT_EXPOSURE   = 26
file generate.hpp
#include “../core/core.hpp
#include <opencv2/imgproc.hpp>
#include <random>
file geometry_types_base.hpp
#include “../external/flatbuffers/flatbuffers.h”
file imu_base.hpp
#include “../external/flatbuffers/flatbuffers.h”
#include <Eigen/Core>
#include <fmt/chrono.h>
#include <fmt/format.h>
#include <fmt/ostream.h>
#include <fmt/ranges.h>
#include <numbers>
#include <ostream>

Variables

VT_TIMESTAMP   = 4
VT_TEMPERATURE   = 6
VT_ACCELEROMETERX   = 8
VT_ACCELEROMETERY   = 10
VT_ACCELEROMETERZ   = 12
VT_GYROSCOPEX   = 14
VT_GYROSCOPEY   = 16
VT_GYROSCOPEZ   = 18
VT_MAGNETOMETERX   = 20
VT_MAGNETOMETERY   = 22
file landmark_base.hpp
#include “../external/flatbuffers/flatbuffers.h”
#include “geometry_types_base.hpp
#include <fmt/chrono.h>
#include <fmt/format.h>
#include <fmt/ostream.h>
#include <fmt/ranges.h>

Variables

VT_TRACKID   = 4
VT_CAMERAID   = 6
VT_CAMERANAME   = 8
VT_PT   = 4
VT_ID   = 6
VT_TIMESTAMP   = 8
VT_DESCRIPTOR   = 10
VT_DESCRIPTORTYPE   = 12
VT_COVARIANCE   = 14
VT_ELEMENTS   = 4
file pose_base.hpp
#include “../external/flatbuffers/flatbuffers.h”
#include “geometry_types_base.hpp
#include <fmt/chrono.h>
#include <fmt/format.h>
#include <fmt/ostream.h>
#include <fmt/ranges.h>

Variables

VT_TIMESTAMP   = 4
VT_TRANSLATION   = 6
VT_ROTATION   = 8
VT_REFERENCEFRAME   = 10
file timed_keypoint_base.hpp
#include “../external/flatbuffers/flatbuffers.h”
#include “geometry_types_base.hpp
#include <fmt/chrono.h>
#include <fmt/format.h>
#include <fmt/ostream.h>
#include <fmt/ranges.h>

Variables

VT_PT   = 4
VT_SIZE   = 6
VT_ANGLE   = 8
VT_RESPONSE   = 10
VT_OCTAVE   = 12
VT_CLASS_ID   = 14
file trigger_base.hpp
#include “../external/flatbuffers/flatbuffers.h”
#include <fmt/chrono.h>
#include <fmt/format.h>
#include <fmt/ostream.h>
#include <fmt/ranges.h>

Variables

VT_TIMESTAMP   = 4
file types.hpp
#include “../external/flatbuffers/flatbuffers.h”
#include “../core/utils.hpp
file utilities.hpp
#include “../core/core.hpp
#include “depth_event_base.hpp
#include “depth_frame_base.hpp
#include “event_base.hpp
#include “pose_base.hpp
#include “timed_keypoint_base.hpp
#include <opencv2/core.hpp>
file semi_dense_stereo_matcher.hpp
#include “../core/concepts.hpp
#include “../core/frame.hpp
#include “utils.hpp
file sparse_event_block_matcher.hpp
#include “../core/filters.hpp
#include “../core/frame.hpp
#include <opencv2/imgproc.hpp>
file exception.hpp
file exception_base.hpp
#include <boost/stacktrace.hpp>
#include <boost/core/demangle.hpp>
#include <fmt/chrono.h>
#include <fmt/format.h>
#include <fmt/ostream.h>
#include <fmt/ranges.h>
#include <fmt/std.h>
#include <concepts>
#include <filesystem>
#include <source_location>
#include <stdexcept>
#include <string>
file directory_exceptions.hpp
#include “../exception_base.hpp
file file_exceptions.hpp
#include “../exception_base.hpp
file generic_exceptions.hpp
#include “../exception_base.hpp
file io_exceptions.hpp
#include “../exception_base.hpp
file type_exceptions.hpp
#include “../exception_base.hpp
file arc_corner_detector.hpp
#include “../core/concepts.hpp
#include “../core/core.hpp
#include <Eigen/Dense>
#include <opencv2/core.hpp>
file event_blob_detector.hpp
#include “../core/event.hpp
#include “../data/utilities.hpp
#include <opencv2/opencv.hpp>
#include <atomic>
#include <utility>
file event_combined_lk_tracker.hpp
#include “../core/core.hpp
#include “../core/frame.hpp
#include “../data/utilities.hpp
file event_feature_lk_tracker.hpp
#include “../core/frame.hpp
file feature_detector.hpp
#include “../core/concepts.hpp
#include “../core/core.hpp
#include “../data/utilities.hpp
#include “event_blob_detector.hpp
#include “image_pyramid.hpp
#include “keypoint_resampler.hpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
file feature_tracks.hpp
#include “../core/utils.hpp
#include “tracker_base.hpp
file image_feature_lk_tracker.hpp
#include “../data/utilities.hpp
#include “image_pyramid.hpp
#include “redetection_strategy.hpp
#include “tracker_base.hpp
#include <utility>
file image_pyramid.hpp
#include “../data/frame_base.hpp
#include <opencv2/core.hpp>
#include <opencv2/video.hpp>
#include <memory>
file keypoint_resampler.hpp
#include “../core/concepts.hpp
#include <boost/geometry/geometry.hpp>
#include <boost/geometry/index/rtree.hpp>
file mean_shift_tracker.hpp
#include “../core/core.hpp
#include “feature_detector.hpp
#include “redetection_strategy.hpp
#include “tracker_base.hpp
file redetection_strategy.hpp
#include “tracker_base.hpp
file tracker_base.hpp
#include “feature_detector.hpp
file imgproc.hpp
#include “../core/core.hpp
#include <Eigen/Dense>
#include <opencv2/core.hpp>
#include <opencv2/opencv.hpp>
#include <optional>
file rotation-integrator.hpp
#include “../core/concepts.hpp
#include “../data/imu_base.hpp
#include <Eigen/Geometry>
#include <numbers>
file camera_input_base.hpp
#include “../data_read_handler.hpp
#include “../input_base.hpp
#include “imu_support.hpp
#include “parsers/parser.hpp
#include <boost/circular_buffer.hpp>
file camera_model.hpp
#include “../../core/utils.hpp
file davis.hpp
#include “imu_support.hpp
#include “parsers/davis_parser.hpp
#include “usb_device.hpp
file discovery.hpp
#include “davis.hpp
#include “dvs128.hpp
#include “dvxplorer.hpp
#include “dvxplorer_m.hpp
file dvs128.hpp
#include “usb_device.hpp
file dvxplorer.hpp
#include “imu_support.hpp
#include “usb_device.hpp
file dvxplorer_m.hpp
#include “camera_input_base.hpp
#include “dvxplorer.hpp
#include “imu_support.hpp
#include “usb_device_nextgen.hpp
file imu_support.hpp
#include “../../core/utils.hpp
file davis_parser.hpp
#include “../imu_support.hpp
#include “parser.hpp
#include <boost/endian.hpp>
#include <opencv2/imgproc.hpp>
file dvs128_parser.hpp
#include “parser.hpp
#include <boost/endian.hpp>
file dvxplorer_parser.hpp
#include “../imu_support.hpp
#include “parser.hpp
#include <boost/endian.hpp>
file parser.hpp
#include “../../../core/utils.hpp
#include <functional>
#include <span>
file s5k231y_parser.hpp
#include “parser.hpp
#include <boost/endian.hpp>
file s5krc1s_parser.hpp
#include “parser.hpp
#include <boost/endian.hpp>
file sync_camera_input_base.hpp
#include “camera_input_base.hpp
file usb_device.hpp
#include “../../core/utils.hpp
#include “camera_model.hpp
#include <libusb.h>
#include <boost/endian.hpp>
#include <atomic>
#include <thread>

Defines

DV_LIBUSB_VERSION_1_0_23
DV_LIBUSB_VERSION_1_0_24
DV_LIBUSB_VERSION_1_0_25_26
DV_LIBUSB_VERSION_1_0_27_28
DV_LIBUSB_VERSION_1_0_29
file usb_device_nextgen.hpp
#include “usb_device.hpp
file compression_support.hpp
#include “../../core/utils.hpp
#include “../data/IOHeader.hpp
#include <lz4.h>
#include <lz4frame.h>
#include <lz4hc.h>
#include <memory>
#include <vector>
#include <zstd.h>

Defines

LZ4F_HEADER_SIZE_MAX
ZSTD_CLEVEL_DEFAULT
file decompression_support.hpp
#include “../../core/utils.hpp
#include “../data/IOHeader.hpp
#include <lz4.h>
#include <lz4frame.h>
#include <lz4hc.h>
#include <memory>
#include <vector>
#include <zstd.h>

Defines

LZ4F_HEADER_SIZE_MAX
ZSTD_CLEVEL_DEFAULT
file FileDataTable.hpp
#include “../../external/flatbuffers/flatbuffers.h”

Variables

VT_BYTEOFFSET   = 4
VT_PACKETINFO   = 6
VT_NUMELEMENTS   = 8
VT_TIMESTAMPSTART   = 10
file IOHeader.hpp
#include “../../external/flatbuffers/flatbuffers.h”

Variables

VT_COMPRESSION   = 4
VT_DATATABLEPOSITION   = 6
file data_read_handler.hpp
#include “../core/core.hpp
#include “../core/frame.hpp
#include “../data/imu_base.hpp
#include “../data/trigger_base.hpp
#include <functional>
#include <optional>
#include <variant>
file input_base.hpp
#include “../core/core.hpp
#include “../data/event_base.hpp
#include “../data/frame_base.hpp
#include “../data/imu_base.hpp
#include “../data/trigger_base.hpp
#include <optional>
#include <string>
file mono_camera_recording.hpp
#include “../core/frame.hpp
#include “data_read_handler.hpp
#include “input_base.hpp
#include “read_only_file.hpp
#include <functional>
#include <optional>
file mono_camera_writer.hpp
#include “../core/core.hpp
#include “../core/frame.hpp
#include “output_base.hpp
#include “reader.hpp
#include “support/utils.hpp
#include “write_only_file.hpp
file encrypt.hpp
#include <boost/asio/ssl.hpp>
#include <filesystem>
#include <optional>
file socket_base.hpp
#include <boost/asio.hpp>
file tcp_tls_socket.hpp
#include “encrypt.hpp
#include “socket_base.hpp
#include <deque>
#include <mutex>
#include <utility>
file unix_socket.hpp
#include “socket_base.hpp
#include <deque>
#include <mutex>
#include <utility>
file write_ordered_socket.hpp
#include “socket_base.hpp
#include <deque>
#include <functional>
#include <utility>
file network_reader.hpp
#include “input_base.hpp
#include “network/encrypt.hpp
#include “network/unix_socket.hpp
#include “reader.hpp
#include <boost/lockfree/spsc_queue.hpp>
file network_writer.hpp
#include “network/socket_base.hpp
#include “network/unix_socket.hpp
#include “output_base.hpp
#include “stream.hpp
#include “support/utils.hpp
#include “writer.hpp
#include <boost/lockfree/spsc_queue.hpp>
#include <utility>
file output_base.hpp
#include “../core/core.hpp
file read_only_file.hpp
#include “reader.hpp
#include “simplefile.hpp
file reader.hpp
#include “stream.hpp
#include <boost/endian.hpp>
#include <optional>
#include <unordered_map>
#include <utility>
file simplefile.hpp
#include “../core/utils.hpp
#include <boost/nowide/cstdio.hpp>
#include <algorithm>
#include <cstdio>
#include <filesystem>
#include <limits>
#include <span>
file stereo_camera_recording.hpp
file stereo_camera_writer.hpp
#include “mono_camera_writer.hpp
file stream.hpp
#include “support/utils.hpp
#include <opencv2/core.hpp>
#include <optional>
file io_data_buffer.hpp
#include <vector>
file io_statistics.hpp
#include <cstdint>
file thread_extra.hpp
#include <cstring>
#include <string>

Defines

PACKED_STRUCT(STRUCT_DECLARATION)
file utils.hpp
#include “concepts.hpp
#include “dvassert.hpp
#include “time.hpp
#include “time_window.hpp
#include <fmt/chrono.h>
#include <fmt/format.h>
#include <fmt/ostream.h>
#include <fmt/ranges.h>
#include <fmt/std.h>
#include <algorithm>
#include <array>
#include <cerrno>
#include <cinttypes>
#include <compare>
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <filesystem>
#include <functional>
#include <memory>
#include <string>
#include <string_view>
#include <utility>
#include <vector>
file utils.hpp
#include <opencv2/calib3d.hpp>
file utils.hpp
#include “../../core/utils.hpp
#include “../../data/imu_base.hpp
#include “../../data/pose_base.hpp
#include “../../data/types.hpp
#include “../data/IOHeader.hpp
#include “io_data_buffer.hpp
#include “io_statistics.hpp
#include <string_view>
file xml_config_io.hpp
#include “../../core/utils.hpp
#include <boost/property_tree/ptree.hpp>
#include <boost/property_tree/xml_parser.hpp>
#include <map>
#include <sstream>
#include <variant>
file write_only_file.hpp
#include “simplefile.hpp
#include “writer.hpp
#include <atomic>
#include <mutex>
#include <queue>
#include <thread>
file writer.hpp
#include “support/utils.hpp
#include <iostream>
#include <memory>
file linear_transformer.hpp
#include “../core/dvassert.hpp
#include “transformation.hpp
#include <Eigen/Dense>
#include <Eigen/StdVector>
#include <boost/circular_buffer.hpp>
#include <optional>
file motion_compensator.hpp
#include “../core/concepts.hpp
#include “../core/frame.hpp
#include “linear_transformer.hpp
file pixel_motion_predictor.hpp
#include <utility>
file transformation.hpp
#include “../core/concepts.hpp
#include <Eigen/Core>
#include <opencv2/core/eigen.hpp>
file depth.hpp
#include <cstdint>
file background_activity_noise_filter.hpp
#include “../core/filters.hpp
file fast_decay_noise_filter.hpp
#include “../core/filters.hpp
file frequency_filters.hpp
#include “../core/filters.hpp
#include <optional>
file k_noise_filter.hpp
#include “../core/filters.hpp
#include <optional>
file contrast_maximization_rotation.hpp
#include “../core/core.hpp
file contrast_maximization_translation_and_depth.hpp
#include “../core/core.hpp
file contrast_maximization_wrapper.hpp
#include “../core/concepts.hpp
#include <memory>
#include <unsupported/Eigen/NonLinearOptimization>
#include <unsupported/Eigen/NumericalDiff>
file optimization_functor.hpp
#include <Eigen/Dense>
file processing.hpp
#include “cluster/mean_shift.hpp
#include “containers/kd_tree.hpp
#include “core/core.hpp
#include “core/event.hpp
#include “core/event_color.hpp
#include “core/filters.hpp
#include “core/frame.hpp
#include “core/stream_slicer.hpp
#include “core/time.hpp
#include “core/utils.hpp
#include “data/event_base.hpp
#include “data/frame_base.hpp
#include “data/generate.hpp
#include “data/imu_base.hpp
#include “data/landmark_base.hpp
#include “data/pose_base.hpp
#include “data/trigger_base.hpp
#include “data/types.hpp
#include “data/utilities.hpp
#include “exception/exception.hpp
#include “imgproc/imgproc.hpp
#include “io/camera/davis.hpp
#include “io/camera/discovery.hpp
#include “io/camera/dvs128.hpp
#include “io/camera/dvxplorer.hpp
#include “io/data_read_handler.hpp
#include “io/network_reader.hpp
#include “io/network_writer.hpp
#include “io/read_only_file.hpp
#include “io/reader.hpp
#include “io/simplefile.hpp
#include “io/write_only_file.hpp
#include “io/writer.hpp
#include “measurements/depth.hpp
#include “noise/k_noise_filter.hpp
#include “version.hpp
#include “visualization/colors.hpp
file version.hpp
#include <string_view>

Defines

DV_PROCESSING_VERSION_MAJOR

dv-processing version (MAJOR * 10000 + MINOR * 100 + PATCH).

DV_PROCESSING_VERSION_MINOR
DV_PROCESSING_VERSION_PATCH
DV_PROCESSING_VERSION
DV_PROCESSING_NAME_STRING

dv-processing name string.

DV_PROCESSING_VERSION_STRING

dv-processing version string.
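
Since DV_PROCESSING_VERSION packs the components as MAJOR * 10000 + MINOR * 100 + PATCH, they can be recovered with integer division and modulo. A small sketch (the include path follows the directory listing below; printing assumes the string macros are streamable):

    #include <dv-processing/version.hpp>

    #include <iostream>

    int main() {
        // Decode the packed version number back into its components.
        constexpr int major = DV_PROCESSING_VERSION / 10000;
        constexpr int minor = (DV_PROCESSING_VERSION / 100) % 100;
        constexpr int patch = DV_PROCESSING_VERSION % 100;

        std::cout << DV_PROCESSING_NAME_STRING << " " << DV_PROCESSING_VERSION_STRING << " ("
                  << major << "." << minor << "." << patch << ")" << std::endl;
        return 0;
    }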

file colors.hpp
#include <opencv2/core.hpp>
file event_visualizer.hpp
#include “../core/core.hpp
#include “../core/utils.hpp
#include “colors.hpp
file pose_visualizer.hpp
#include “../data/frame_base.hpp
#include “../data/utilities.hpp
#include <opencv2/imgproc.hpp>
#include <iostream>
#include <map>
dir /builds/inivation/dv/dv-processing/include/dv-processing/camera/calibrations
dir /builds/inivation/dv/dv-processing/include/dv-processing/camera
dir /builds/inivation/dv/dv-processing/include/dv-processing/io/camera
dir /builds/inivation/dv/dv-processing/include/dv-processing/cluster
dir /builds/inivation/dv/dv-processing/include/dv-processing/io/compression
dir /builds/inivation/dv/dv-processing/include/dv-processing/containers
dir /builds/inivation/dv/dv-processing/include/dv-processing/core
dir /builds/inivation/dv/dv-processing/include/dv-processing/data
dir /builds/inivation/dv/dv-processing/include/dv-processing/io/data
dir /builds/inivation/dv/dv-processing/include/dv-processing/depth
dir /builds/inivation/dv/dv-processing/include/dv-processing
dir /builds/inivation/dv/dv-processing/include/dv-processing/exception
dir /builds/inivation/dv/dv-processing/include/dv-processing/exception/exceptions
dir /builds/inivation/dv/dv-processing/include/dv-processing/features
dir /builds/inivation/dv/dv-processing/include/dv-processing/core/frame
dir /builds/inivation/dv/dv-processing/include/dv-processing/imgproc
dir /builds/inivation/dv/dv-processing/include/dv-processing/imu
dir /builds/inivation/dv/dv-processing/include
dir /builds/inivation/dv/dv-processing/include/dv-processing/io
dir /builds/inivation/dv/dv-processing/include/dv-processing/containers/kd_tree
dir /builds/inivation/dv/dv-processing/include/dv-processing/kinematics
dir /builds/inivation/dv/dv-processing/include/dv-processing/cluster/mean_shift
dir /builds/inivation/dv/dv-processing/include/dv-processing/measurements
dir /builds/inivation/dv/dv-processing/include/dv-processing/io/network
dir /builds/inivation/dv/dv-processing/include/dv-processing/noise
dir /builds/inivation/dv/dv-processing/include/dv-processing/optimization
dir /builds/inivation/dv/dv-processing/include/dv-processing/io/camera/parsers
dir /builds/inivation/dv/dv-processing/include/dv-processing/io/support
dir /builds/inivation/dv/dv-processing/include/dv-processing/visualization