lib.sedna.core.multi_edge_inference.components.reid

Module Contents

Classes

ReID

In MultiEdgeInference, the ReID component is deployed in the cloud

class lib.sedna.core.multi_edge_inference.components.reid.ReID(consumer_topics=[], producer_topics=[], plugins: List[sedna.core.multi_edge_inference.plugins.PluggableNetworkService] = [], models: List[sedna.core.multi_edge_inference.plugins.PluggableModel] = [], timeout=10, asynchronous=True)[source]

Bases: sedna.core.multi_edge_inference.components.BaseService, sedna.core.multi_edge_inference.components.FileOperations

In MultiEdgeInference, the ReID component is deployed in the cloud and is used to identify a target by comparing its features with the ones generated by the Feature Extraction component.

Parameters:
  • consumer_topics (List) – Leave empty.

  • producer_topics (List) – Leave empty.

  • plugins (List) – A list of PluggableNetworkService. It can be left empty as the ReID component is already preconfigured to connect to the correct network services.

  • models (List) – A list of PluggableModel. In this case we abuse the term model, as the ReID component doesn’t really use an AI model but rather a wrapper for the ReID functions.

  • timeout (int) – Sets a timeout condition to terminate the main fetch loop after the specified number of seconds has passed since the last frame was received.

  • asynchronous (bool) – If True, the AI processing will be decoupled from the data acquisition step. If False, the processing will be sequential. In general, set it to True when ingesting a stream (e.g., RTSP) and to False when reading from disk (e.g., a video file).

Examples

model = ReIDWorker() # A class implementing the PluggableModel abstract class (example in pedestrian_tracking/reid/worker.py)

self.job = ReID(models=[model], asynchronous=False)
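
The same construction can be written standalone with explicit imports (a minimal sketch; the ReIDWorker import path is assumed from the pedestrian_tracking example and may differ in your setup):

from sedna.core.multi_edge_inference.components.reid import ReID
from reid.worker import ReIDWorker  # assumed location: pedestrian_tracking/reid/worker.py

model = ReIDWorker()
# asynchronous=False because we read from disk here; use True when ingesting a live stream (e.g., RTSP).
job = ReID(models=[model], asynchronous=False, timeout=10)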

Notes

Of the parameters described above, only ‘models’ has to be defined; for the others, the default values will work in most cases.

process_data(ai, data, **kwargs)[source]

The user needs to implement this function to call the main processing function of the AI model and decide what to do with the result.
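
A typical override forwards the incoming data to the model wrapper and then decides how to handle the output (a minimal sketch; the inference call is a hypothetical entry point of your PluggableModel wrapper, not a documented Sedna API):

class MyReID(ReID):
    def process_data(self, ai, data, **kwargs):
        # Call the wrapper's main processing function on the received data ...
        result = ai.inference(data)  # hypothetical method on the PluggableModel wrapper
        # ... then decide what to do with the result, e.g. persist it to disk
        # or forward it to a downstream consumer.
        return result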

update_operational_mode(status)[source]

The user needs to trigger updates to the AI model, if necessary.
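
An override might, for example, reload or reconfigure the wrapped model when the reported status changes (a minimal sketch; the status value, the self.models attribute, and the load() hook are assumptions, not documented Sedna APIs):

class MyReID(ReID):
    def update_operational_mode(self, status):
        # React to a status change only if it affects the model.
        if status == "model_updated":    # hypothetical status value
            for model in self.models:    # assumes the models passed at construction are stored here
                model.load()             # hypothetical reload hook on the wrapper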

get_target_features(ldata)[source]