```yaml
# my global config
global:
  scrape_interval: 5s # Set the scrape interval to every 5 seconds. Default is every 1 minute.
  evaluation_interval: 5s # Evaluate rules every 5 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090", "localhost:5021", "localhost:5022", "localhost:5023", "localhost:5024", "localhost:5001", "localhost:5005", "localhost:5006"]
```
4. Create a `.env` file to store the analytics image name and port addresses
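As a sketch, the `.env` file might look like the following. The variable names and image reference here are hypothetical placeholders (the source does not specify them); the port values echo those listed as scrape targets in the Prometheus config above:

```env
# Hypothetical example — replace names and values to match your deployment.
ANALYTICS_IMAGE=your-registry/analytics:latest
ANALYTICS_PORT_1=5001
ANALYTICS_PORT_2=5005
ANALYTICS_PORT_3=5006
```

Tools such as Docker Compose read a `.env` file in the working directory automatically, so these values can then be referenced as `${ANALYTICS_IMAGE}` and `${ANALYTICS_PORT_1}` in a compose file.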
If you use NVIDIA Ampere or Ada Lovelace architecture GPUs, please use the image below: