Add benchmarking mode to loitering-detection and smart-parking #793
base: main
Conversation
Signed-off-by: Sairam Pillai <[email protected]>
...te/metro-vision-ai-app-recipe/loitering-detection/src/dlstreamer-pipeline-server/config.json
...ai-suite/metro-vision-ai-app-recipe/smart-parking/src/dlstreamer-pipeline-server/config.json
Please set up a meeting with me to describe your benchmark design: how it works and how your KPIs are calculated. This is critical to ensure the right information is delivered.
Signed-off-by: Sairam Pillai <[email protected]>
"name": "smart_parking_benchmarking",
"source": "gstreamer",
"queue_maxsize": 50,
"pipeline": "multifilesrc location=/home/pipeline-server/videos/new_video_1.mp4 loop=true ! parsebin ! vah264dec ! vapostproc ! video/x-raw(memory:VAMemory) ! gvadetect name=detection ! queue ! gvaclassify name=classification ! queue ! gvapython module=/home/pipeline-server/models/colorcls2/process class=Process function=process_frame ! queue ! gvawatermark ! videoconvertscale ! video/x-raw,format=I420 ! gvafpscounter ! appsink sync=false async=false",
please replace "parsebin ! vah264dec ! vapostproc ! video/x-raw(memory:VAMemory)" with "decodebin3"
please also remove "videoconvertscale ! video/x-raw,format=I420" - we should not need to copy the output buffer to CPU-side memory
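Applying both suggestions above, the pipeline string would look roughly like this. This is a sketch only, not the committed change; whether decodebin3 negotiates VA-API memory for the downstream DLStreamer elements depends on the GStreamer/DLStreamer build:

```json
"pipeline": "multifilesrc location=/home/pipeline-server/videos/new_video_1.mp4 loop=true ! decodebin3 ! gvadetect name=detection ! queue ! gvaclassify name=classification ! queue ! gvapython module=/home/pipeline-server/models/colorcls2/process class=Process function=process_frame ! queue ! gvawatermark ! gvafpscounter ! appsink sync=false async=false"
```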
"detection-properties": {
"model": "/home/pipeline-server/models/intel/pedestrian-and-vehicle-detector-adas-0001/FP16-INT8/pedestrian-and-vehicle-detector-adas-0001.xml",
"device": "GPU",
"inference-interval": 5,
Why do we set a different inference-interval for GPU (5) and CPU (3)?
"inference-region": 0,
"batch-size": 1,
"nireq": 4,
"pre-process-backend": "va-surface-sharing",
please use "va" (or NPU). "va-surface-sharing" is not supported, and there is an internal fallback to "va" inside DLStreamer - let's show end customers the actual backend used
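With that suggestion applied, the detection-properties block would read as follows. This is a sketch of the proposed fix, not the committed change; all values except pre-process-backend are carried over from the diff above:

```json
"detection-properties": {
    "model": "/home/pipeline-server/models/intel/pedestrian-and-vehicle-detector-adas-0001/FP16-INT8/pedestrian-and-vehicle-detector-adas-0001.xml",
    "device": "GPU",
    "inference-interval": 5,
    "inference-region": 0,
    "batch-size": 1,
    "nireq": 4,
    "pre-process-backend": "va"
}
```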
This is a dup of #875?
Description
Adds a benchmarking mode to the Metro AI Vision apps loitering-detection and smart-parking.
How Has This Been Tested?
Tested using the end-to-end loitering-detection and smart-parking pipelines.
Checklist: