VLM Installation
This page explains how to deploy the snapshot platform with Docker Compose. VisionAIre Stream requires this deployment for every analytic run by the VLM deployment.
How to Set Up the Deployment
Create a dedicated folder for this installation to help organize your deployment.
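For example (the folder name vlm-deployment is just an illustration):

```shell
# Create and enter a dedicated directory for the deployment files
# (docker-compose.yml, config.yml, .env).
mkdir -p vlm-deployment
cd vlm-deployment
```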
1. Configuration of docker-compose.yml
```yaml
version: '3.3'
services:
  node1:
    image: "${VLM_IMAGE}"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    pid: host
    network_mode: host
    cap_add:
      - SYS_PTRACE
    command: [
      httpserver,
      --listen-port, "${FREMIS1_LISTEN_PORT}",
      --listen-port-monitoring, "${FREMIS1_LISTEN_PORT_MONITORING}",
      --verbose,
    ]
    healthcheck:
      test: ["CMD", "curl", "-f", "http://0.0.0.0:${FREMIS1_LISTEN_PORT}/healthcheck"]
      interval: 5s
      timeout: 3s
      retries: 20
  coordinator:
    image: "${VLM_IMAGE}"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    pid: host
    network_mode: host
    cap_add:
      - SYS_PTRACE
    command: [
      coordinator,
      --listen-port, "${COORDINATOR_LISTEN_PORT}",
      --listen-port-monitoring, "${COORDINATOR_LISTEN_PORT_MONITORING}",
      --config-path, "/etc/nodeflux/config.yml",
      --verbose,
    ]
    volumes:
      - ${PWD}/config.yml:/etc/nodeflux/config.yml
    depends_on:
      node1:
        condition: service_healthy
```
2. Configuration of config.yml

The coordinator reads this file (mounted at /etc/nodeflux/config.yml) to find the analytics node. The address port must match FREMIS1_LISTEN_PORT from the .env file.

```yaml
version: "v1"
nodes:
  - address: "0.0.0.0:5051"
    analytic_id: "NFFS-VLM"
```
3. Store the analytics image and port addresses in a .env file

Docker Compose reads the .env file from the working directory automatically, so the variables do not need an export prefix. Keep only one VLM_IMAGE line active:

```shell
# float16 image
VLM_IMAGE=registry.gitlab.com/nodefluxio/cloud/analytics/pipelines/vlm-pipeline:on-premise-0.1.0
# full-precision image (uncomment to use instead)
# VLM_IMAGE=registry.gitlab.com/nodefluxio/cloud/analytics/pipelines/vlm-pipeline:on-premise-0.1.0-full
COORDINATOR_LISTEN_PORT=4013
COORDINATOR_LISTEN_PORT_MONITORING=5013
FREMIS1_LISTEN_PORT=5051
FREMIS1_LISTEN_PORT_MONITORING=6061
```
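Since the coordinator's config.yml must point at node1's listen port, a quick consistency check before starting can catch typos. This is a sketch, not part of the product; check_ports is a hypothetical helper that assumes .env and config.yml sit in the current directory:

```shell
# check_ports: compare FREMIS1_LISTEN_PORT in .env with the node
# address port in config.yml; both files are read from the current directory.
check_ports() {
  env_port=$(grep '^FREMIS1_LISTEN_PORT=' .env | cut -d= -f2)
  cfg_port=$(grep 'address:' config.yml | sed 's/.*://; s/"//g')
  if [ "$env_port" = "$cfg_port" ]; then
    echo "ports match: $env_port"
  else
    echo "port mismatch: .env=$env_port config.yml=$cfg_port"
    return 1
  fi
}
```

Run check_ports from the deployment folder before bringing the stack up.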
4. Run the deployment

Both services use prebuilt images, so no build step is required:

```shell
docker-compose up -d
```
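After the containers start, you can confirm node1 is actually serving before pointing VisionAIre Stream at it. The URL mirrors the healthcheck in docker-compose.yml; wait_healthy is a hypothetical helper, shown as a sketch:

```shell
# Poll a healthcheck endpoint until it responds, or give up after N tries.
# Usage: wait_healthy http://localhost:5051/healthcheck 20
wait_healthy() {
  url=$1
  tries=${2:-20}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -sf "$url" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "unhealthy"
  return 1
}
```

You can also run docker-compose ps to check that both services are up.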