lib.sedna.core.multi_edge_inference.components.detector
Module Contents¶
Classes¶
ObjectDetector | In MultiEdgeInference, the Object Detection/Tracking component
- class lib.sedna.core.multi_edge_inference.components.detector.ObjectDetector(consumer_topics=['enriched_object'], producer_topics=['object_detection'], plugins: List[sedna.core.multi_edge_inference.plugins.PluggableNetworkService] = [], models: List[sedna.core.multi_edge_inference.plugins.PluggableModel] = [], timeout=10, asynchronous=False)[source]¶
Bases: sedna.core.multi_edge_inference.components.BaseService, sedna.core.multi_edge_inference.components.FileOperations
In MultiEdgeInference, the Object Detection/Tracking component is deployed as a service at the edge and is used to detect or track objects (for example, pedestrians) and send the result to the cloud for further processing using Kafka or a REST API.
- Parameters:
consumer_topics (List) – A list of Kafka topics used to communicate with the Feature Extraction service (to receive data from it). This is accessed only if the Kafka backend is in use.
producer_topics (List) – A list of Kafka topics used to communicate with the Feature Extraction service (to send data to it). This is accessed only if the Kafka backend is in use.
plugins (List) – A list of PluggableNetworkService. It can be left empty as the ObjectDetector service is already preconfigured to connect to the correct network services.
models (List) – A list of PluggableModel. By passing a specific instance of the model, it is possible to customize the ObjectDetector to, for example, track different objects as long as the PluggableModel interface is respected.
timeout (int) – Sets a timeout condition that terminates the main fetch loop after the specified number of seconds has elapsed since the last frame was received.
asynchronous (bool) – If True, the AI processing will be decoupled from the data acquisition step. If False, the processing will be sequential. In general, set it to True when ingesting a stream (e.g., RTSP) and to False when reading from disk (e.g., a video file).
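The asynchronous flag above can be illustrated with a minimal, self-contained sketch. This is not Sedna's internal implementation; it only shows the difference between decoupling the model call from acquisition (via a queue and a worker thread, as when ingesting a live RTSP stream) and processing each frame sequentially (as when reading a file from disk). The `process` lambda is a hypothetical stand-in for the AI model call.

```python
import queue
import threading

def run(frames, asynchronous):
    """Conceptual sketch of sequential vs. decoupled frame processing."""
    results = []
    process = lambda f: f * 2  # hypothetical stand-in for the AI model call

    if asynchronous:
        # Decoupled: acquisition pushes frames to a queue; a worker
        # thread runs the model, so ingestion never blocks on inference.
        q = queue.Queue()

        def worker():
            while True:
                item = q.get()
                if item is None:  # sentinel: no more frames
                    break
                results.append(process(item))

        t = threading.Thread(target=worker)
        t.start()
        for f in frames:
            q.put(f)
        q.put(None)
        t.join()
    else:
        # Sequential: each frame is fully processed before the next
        # one is fetched.
        for f in frames:
            results.append(process(f))
    return results
```

With a live stream the decoupled variant keeps the acquisition loop responsive even when inference is slow; with a finite file the sequential variant is simpler and preserves ordering trivially.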
Examples
model = ByteTracker() # A class implementing the PluggableModel abstract class (example in pedestrian_tracking/detector/model/bytetracker.py)
objecttracking_service = ObjectDetector(models=[model], asynchronous=True)
Notes
For the parameters described above, only ‘models’ has to be defined; the default values of the others will work in most cases.
- process_data(ai, data, **kwargs)[source]¶
The user needs to implement this function to call the AI model's main processing function and decide what to do with the result.
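A hedged sketch of what such an implementation might look like. The `ObjectDetector`, `put`, and `DummyModel` names below are simplified stand-ins defined locally so the example runs without Sedna installed; only the shape of the `process_data(ai, data, **kwargs)` contract mirrors the documented interface.

```python
from typing import Any, List

class ObjectDetector:
    """Local stand-in for the Sedna ObjectDetector base class."""
    def __init__(self, models: List[Any]):
        self.models = models
        self.sent = None

    def put(self, data):
        # Stand-in for forwarding results downstream (e.g., to the
        # Feature Extraction service via Kafka or REST).
        self.sent = data

class MyTracker(ObjectDetector):
    def process_data(self, ai, data, **kwargs):
        # Call the model's main processing function on the incoming data.
        result = ai.predict(data)
        # Decide what to do with the result: here, forward only
        # non-empty detections downstream.
        if result:
            self.put(result)
        return result

class DummyModel:
    """Hypothetical model exposing a predict() entry point."""
    def predict(self, data):
        return [d for d in data if d.get("score", 0) > 0.5]

model = DummyModel()
service = MyTracker(models=[model])
detections = service.process_data(model, [{"score": 0.9}, {"score": 0.1}])
```

In a real deployment the subclass would extend the actual Sedna class and the model would implement the PluggableModel interface, but the decision logic inside `process_data` follows the same pattern.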