People Fighting Recognition
Last updated
Nodeflux People Fighting Detection performs action recognition on instances of people fighting and monitors for emerging riot events, improving public security and surveillance within monitored environments. The solution integrates advanced Large Vision Models and Vision Transformers directly into the Nodeflux Visionaire platform. The analytics system automatically detects individuals fighting each other by running inference over multiple sequential frames in real-time scenarios.
Nodeflux People Fighting Detection is primarily intended for law enforcement and surveillance applications. It monitors occurrences of individuals fighting each other and helps mitigate or even prevent the risks and threats associated with riot events, which can result in casualties.
Disclaimer: As with any large vision model and vision transformer technology, the performance of this analytic may differ slightly in your environment, depending on variables such as camera specifications, camera height, camera angle, and weather conditions. We highly recommend testing our analytics and running benchmarks on your own images, with ground truth or your quality expectations prepared beforehand. Please contact us for more information.
Visionaire People Fighting Detection utilizes a combination of other services:
Postgres - For the database,
Docker Snapshot - For action recognition,
Visionaire Docker Stream (must be v4.57.11 or above) - For video stream processing,
Visionaire Dashboard (optional) - Built-in dashboard for visualization.
Since Nodeflux People Fighting Detection utilizes the snapshot platform, an additional docker-compose.yml file needs to be deployed alongside the other aforementioned services. The following link outlines the installation process for deploying the docker-compose.yml file specific to the Large Visual Language Model (VLM) pipeline of the snapshot platform:
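Since the exact contents of that docker-compose.yml depend on your deployment, the fragment below is only an illustrative sketch: the service name, image reference, and port are placeholders, not the official Nodeflux release artifacts.

```yaml
# Illustrative sketch only — image name, tag, and port are placeholders.
version: "3.8"
services:
  vlm-snapshot:
    image: registry.example.com/nodeflux/vlm-snapshot:latest  # placeholder image
    restart: unless-stopped
    ports:
      - "4004:4004"  # placeholder port; should match the `address` parameter configured on the stream
```

Deploy it with `docker compose up -d` alongside the other services listed above.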
When assigning this analytic to a stream through the dashboard, you should configure the following parameters:
| Parameter | Explanation |
|---|---|
| address | The IP address and port where the VLM snapshot service is deployed |
| images_num | Number of sequential frames/snapshots generated to be analyzed in the inference process. Valid range: 2 to 6 images |
| dump_interval | How frequently the stream submits a set of sequential frames/snapshots for inference. Unit: seconds |
| interval_capture | Interval between two consecutive frames/snapshots within the same sequence. Unit: seconds |
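To make the relationship between these parameters concrete, here is a minimal sketch (not Visionaire code; the function name and checks are our own) of how they interact: `images_num` snapshots taken `interval_capture` seconds apart span a capture window of `(images_num - 1) * interval_capture` seconds, and a new batch is submitted every `dump_interval` seconds, so the window should fit inside the dump interval.

```python
# Hypothetical helper illustrating the parameter constraints described above.
def validate_params(images_num: int, interval_capture: float, dump_interval: float) -> float:
    """Return the capture window length for one inference batch, in seconds."""
    if not 2 <= images_num <= 6:
        raise ValueError("images_num must be between 2 and 6")
    window = (images_num - 1) * interval_capture
    if window > dump_interval:
        raise ValueError("capture window exceeds dump_interval; sequences would overlap")
    return window

# e.g. 4 snapshots taken 0.5 s apart, inferred every 5 s -> 1.5 s capture window
print(validate_params(4, 0.5, 5.0))
```

For example, with `images_num=4` and `interval_capture=0.5`, each inference batch covers 1.5 seconds of video, comfortably inside a `dump_interval` of 5 seconds.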