Visionaire
Changelogs

[4.21.2]

Added

    Add test driver for analytics and clustering.

[4.21.1]

Changed

    Change the LPR neural network from plate-only detection to plate + vehicle detection

[4.21.0]

Fixed

    Fix GPU memory issue when loading the model

Changed

    Update Vortex Runtime to version 0.3.0 (loads *.so at runtime)

[4.20.3]

Added

    New pipeline for License Plate Recognition, replacing the counting line with global sampling and clustering.

[4.20.2]

Changed

    Zoom out the face image and set the dump size to 200 x 200 px in the Face Recognition pipeline

[4.20.1]

Fixed

    Fix app crash when using a public CCTV URL

[4.20.0]

    Released the clustering functionality MVP

[4.14.1]

Added

    New LPR pipeline prototype

[4.14.0]

Changed

    Update the highway vehicle detection model with an active-learning dataset for the VC-HW pipeline

[4.13.1]

Fixed
    Fix issue where Docker Stream could not be installed on GPUs below the GTX 1060

[4.13.0]

Changed
    Update the highway vehicle detection model for the VC-HW pipeline

[4.12.0]

Added
    Provide CPU and RAM monitoring in the resource_stats endpoint (example below)
    Add visionaire4 version field in the analytic_list endpoint
    Add credential bundle using .csv format
    Add message and log for face candidates (face_recognition pipeline)
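
A minimal sketch of polling the new resource_stats endpoint, assuming a local Visionaire4 HTTP API; the host, port, and response field names below are illustrative assumptions, not confirmed by this changelog.

```python
import requests

# Hypothetical base URL; adjust to your Visionaire4 deployment.
BASE_URL = "http://localhost:4004"

# Poll the resource_stats endpoint added in 4.12.0.
resp = requests.get(f"{BASE_URL}/resource_stats", timeout=5)
resp.raise_for_status()
stats = resp.json()

# The field names below are assumptions for illustration only.
print("CPU:", stats.get("cpu"))
print("RAM:", stats.get("ram"))
```
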
Changed
    Update composer.sh (CLI to upload the grafana.json dashboard)

[4.11.2]

Added
    Add LPR test driver script

[4.11.1]

Added
    Add script for benchmarking with ground truth

[4.11.0]

Changed

    Updates for the crowd estimation service

Added

    Add crowd estimation pipeline
    Add semantic segmentation handler

[4.10.0]

Added

    Add a VMS client to the stream module to handle vendor VMS URLs
    Expose the license serial number and deployment_key per analytic via the analytic_list and /streams/<node_id>/<stream_id> API endpoints (sketch below)
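
A minimal sketch of reading the per-stream endpoint; the path shape /streams/<node_id>/<stream_id> and the serial number / deployment_key values come from the entry above, while the host, port, IDs, and response layout are illustrative assumptions.

```python
import requests

# Hypothetical base URL and IDs; adjust to your deployment.
BASE_URL = "http://localhost:4004"
NODE_ID = 0
STREAM_ID = "example-stream"

# Per-stream view; per this changelog it now carries the license
# serial number and deployment_key for each analytic.
resp = requests.get(f"{BASE_URL}/streams/{NODE_ID}/{STREAM_ID}", timeout=5)
resp.raise_for_status()
stream = resp.json()

# The response layout and field names are assumptions for illustration.
for analytic in stream.get("analytics", []):
    print(analytic.get("serial_number"), analytic.get("deployment_key"))
```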

Changed

    Change license credentials to use access-key and secret-key; license-username and license-password are no longer supported

[4.9.5]

Fixed

    Double dump on vehicle counting (highway)

Added

    Add a sampling mechanism over a set time window to prevent double dumps
    Add the sampling time parameter as the dumping_sampling_time field in the JSON POST body when creating a new pipeline (example below)
    Add a default sampling_time_threshold value for each analytic (not tuned yet)
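
A minimal sketch of a create-pipeline request carrying the new dumping_sampling_time field; only that field name comes from the entry above, while the endpoint path, host, and other fields are illustrative assumptions.

```python
import requests

# Hypothetical base URL and endpoint path; adjust to your deployment.
BASE_URL = "http://localhost:4004"

# Only the dumping_sampling_time field name comes from the changelog;
# the analytic id, stream URL, and path are illustrative assumptions.
payload = {
    "analytic_id": "NFV4-VC-HW",
    "stream_url": "rtsp://example.com/highway-cam",
    "dumping_sampling_time": 2.0,  # seconds
}

resp = requests.post(f"{BASE_URL}/pipeline", json=payload, timeout=5)
print(resp.status_code, resp.text)
```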

[4.9.4]

Changed

    Update the dump event mechanism to use the highest similarity from the most frequently occurring face_id, with a 1.5 s timeout, and to dump if the track_id is lost (see the sketch below)
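
A rough sketch of that selection rule, assuming observations arrive as (face_id, similarity) pairs per track; the data structures and method names are illustrative, not the Visionaire API.

```python
import time
from collections import defaultdict

DUMP_TIMEOUT = 1.5  # seconds, per the changelog entry above


class TrackDumper:
    """Illustrative sketch of the 4.9.4 dump rule: pick the most
    frequently seen face_id for a track, report its highest similarity,
    and dump after a 1.5 s timeout or when the track is lost."""

    def __init__(self):
        self.observations = defaultdict(list)  # track_id -> [(face_id, similarity)]
        self.first_seen = {}                   # track_id -> first observation time

    def observe(self, track_id, face_id, similarity):
        """Record one detection; return a dump event once the timeout elapses."""
        self.first_seen.setdefault(track_id, time.monotonic())
        self.observations[track_id].append((face_id, similarity))
        if time.monotonic() - self.first_seen[track_id] >= DUMP_TIMEOUT:
            return self._dump(track_id)
        return None

    def track_lost(self, track_id):
        """Dump immediately when the tracker reports the track_id as lost."""
        return self._dump(track_id)

    def _dump(self, track_id):
        obs = self.observations.pop(track_id, [])
        self.first_seen.pop(track_id, None)
        if not obs:
            return None
        # Most frequently occurring face_id wins the vote...
        counts = defaultdict(int)
        for face_id, _ in obs:
            counts[face_id] += 1
        best_face = max(counts, key=counts.get)
        # ...and the dump carries its highest observed similarity.
        best_similarity = max(s for f, s in obs if f == best_face)
        return best_face, best_similarity
```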

[4.9.3]

    Internal revamp

[4.9.2]

Fixed

    Health check terminating the app when a pipeline's seat is not granted
    Smoother, asynchronous service scale-up mechanism: instead of being synchronously tied to pipeline spawning, it now uses a regular scheduler, like the scale-down mechanism
    Fix bug where the create-pipeline request was hit repeatedly (triggering main-thread health check termination)

[4.9.0]

Added

    Add PING handler: if the client sends PING to check connection liveness, the service replies with PONG (sketch below)
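
A minimal sketch of such a liveness reply, assuming a line-based TCP exchange; the transport, port, and framing used by the actual service are not specified in this changelog.

```python
import socketserver


class LivenessHandler(socketserver.StreamRequestHandler):
    """Reply PONG whenever a client sends PING to check liveness.
    The TCP transport, port, and line framing here are illustrative
    assumptions; the changelog does not specify the actual protocol."""

    def handle(self):
        for line in self.rfile:
            if line.strip() == b"PING":
                self.wfile.write(b"PONG\n")


if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 9000), LivenessHandler) as server:
        server.serve_forever()
```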

[4.8.3]

Fixed

    Prevent the pipeline health check from triggering when the RTSP stream is not working; use stream recovery instead (less disruptive)

[4.8.2]

Added

    Add NFV4-VC-HW (Vehicle Counting Highway) as a new pipeline
    Add grid tracker as a tracker preprocessing step
    Add moving weighted average to the trajectory generator (sketch below)
    Add removal of overlapping labels as a tracker preprocessing step to reduce double detection
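
A rough sketch of a moving weighted average over recent track points; the window size and linear weighting are assumptions, as the entry above does not specify them.

```python
from collections import deque


class WeightedTrajectorySmoother:
    """Moving weighted average over the last `window` track points,
    giving newer points more weight. Window size and linear weighting
    are assumptions; the changelog only states a moving weighted
    average was added to the trajectory generator."""

    def __init__(self, window=5):
        self.points = deque(maxlen=window)

    def add(self, x, y):
        """Add a raw point and return the smoothed point."""
        self.points.append((x, y))
        # Linearly increasing weights: the newest point weighs the most.
        weights = list(range(1, len(self.points) + 1))
        total = sum(weights)
        sx = sum(w * px for w, (px, _) in zip(weights, self.points))
        sy = sum(w * py for w, (_, py) in zip(weights, self.points))
        return sx / total, sy / total


# Example: smooth a short, slightly noisy trajectory.
smoother = WeightedTrajectorySmoother()
for point in [(0, 10), (1, 12), (2, 9), (3, 11), (4, 10)]:
    print(smoother.add(*point))
```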

[4.8.1]

Changed

    Remove master_base class

[4.8.0]

Fixed

    Multi-GPU race condition where the GPU device map was used before being initialized.

Added

    Terminate the entire service if there is an unresponsive pipeline (so Docker auto-restart can recover)
    Add metric for the node's running age/uptime (seconds)

[4.7.0]

Added

    Ability to spawn services across multiple GPUs
    Services can be spawned on a specific GPU via gpu_id (example below)
    Add visibility of GPU devices via an API endpoint call
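
A minimal sketch of pinning a pipeline to a specific GPU via gpu_id; only the gpu_id concept comes from this entry, while the endpoint path, host, and other fields are illustrative assumptions.

```python
import requests

# Hypothetical base URL and endpoint path; adjust to your deployment.
BASE_URL = "http://localhost:4004"

# Only the gpu_id concept comes from this entry; the endpoint path and
# other fields are illustrative assumptions.
payload = {
    "analytic_id": "NFV4-VC-HW",
    "stream_url": "rtsp://example.com/cam1",
    "gpu_id": 1,  # spawn the backing service on GPU index 1
}

resp = requests.post(f"{BASE_URL}/pipeline", json=payload, timeout=5)
print(resp.status_code, resp.text)
```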

[4.6.6]

    Publish release image to Docker Hub