
Conversation

@sairampillai
Contributor

Description

Add a benchmarking mode for the Metro AI Vision apps loitering-detection and smart-parking.

How Has This Been Tested?

Tested using the end-to-end loitering-detection and smart-parking pipelines.

Checklist:

  • I agree to use the APACHE-2.0 license for my code changes.
  • I have not introduced any 3rd party components incompatible with APACHE-2.0.
  • I have not included any company confidential information, trade secret, password or security token.
  • I have performed a self-review of my code.

@sairampillai sairampillai self-assigned this Oct 13, 2025
@xwu2intel
Contributor

Please set up a meeting with me to walk through your benchmark design: how it works and how your KPIs are calculated. This is critical to ensure the right information is delivered.

@sairampillai sairampillai requested a review from a team as a code owner October 14, 2025 14:32
"name": "smart_parking_benchmarking",
"source": "gstreamer",
"queue_maxsize": 50,
"pipeline": "multifilesrc location=/home/pipeline-server/videos/new_video_1.mp4 loop=true ! parsebin ! vah264dec ! vapostproc ! video/x-raw(memory:VAMemory) ! gvadetect name=detection ! queue ! gvaclassify name=classification ! queue ! gvapython module=/home/pipeline-server/models/colorcls2/process class=Process function=process_frame ! queue ! gvawatermark ! videoconvertscale ! video/x-raw,format=I420 ! gvafpscounter ! appsink sync=false async=false",

please replace "parsebin ! vah264dec ! vapostproc ! video/x-raw(memory:VAMemory)" with "decodebin3"


please also remove "videoconvertscale ! video/x-raw,format=I420" - we should not need to copy the output buffer to CPU-side memory
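Applying both review suggestions above (decodebin3 instead of the explicit parse/decode/postproc chain, and no CPU-side conversion before the sink), the pipeline string would look roughly like the following. This is a sketch based on the snippet in this PR; paths and element properties are unchanged from the original and may need adjustment for a given setup:

```json
"pipeline": "multifilesrc location=/home/pipeline-server/videos/new_video_1.mp4 loop=true ! decodebin3 ! gvadetect name=detection ! queue ! gvaclassify name=classification ! queue ! gvapython module=/home/pipeline-server/models/colorcls2/process class=Process function=process_frame ! queue ! gvawatermark ! gvafpscounter ! appsink sync=false async=false"
```

Letting decodebin3 negotiate decoding and memory type keeps frames on the GPU end to end, which is what the benchmark should measure.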

"detection-properties": {
"model": "/home/pipeline-server/models/intel/pedestrian-and-vehicle-detector-adas-0001/FP16-INT8/pedestrian-and-vehicle-detector-adas-0001.xml",
"device": "GPU",
"inference-interval": 5,

Why do we set different inference-interval for GPU (5) and CPU (3)?

"inference-region": 0,
"batch-size": 1,
"nireq": 4,
"pre-process-backend": "va-surface-sharing",

please use "va" or NPU;
"va-surface-sharing" is not supported, and there is internal fallback to "va" inside DLStreamer - let's show end customers the actual backend used
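With that change applied, the assembled detection-properties block from the snippets above would read as follows (a sketch; all other values are kept exactly as in this PR):

```json
"detection-properties": {
  "model": "/home/pipeline-server/models/intel/pedestrian-and-vehicle-detector-adas-0001/FP16-INT8/pedestrian-and-vehicle-detector-adas-0001.xml",
  "device": "GPU",
  "inference-interval": 5,
  "inference-region": 0,
  "batch-size": 1,
  "nireq": 4,
  "pre-process-backend": "va"
}
```

Naming the actual backend ("va") avoids relying on DLStreamer's silent internal fallback from "va-surface-sharing", so the config shown to end customers matches what runs.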

@xwu2intel
Contributor

Is this a duplicate of #875?

