
Commit 5dae1e3

Updated Release Notes for DLStreamer 2025.2.0 Release
1 parent 6a35e48 commit 5dae1e3

File tree: 1 file changed, +81 −0 lines changed


libraries/dl-streamer/RELEASE_NOTES.md

Lines changed: 81 additions & 0 deletions
@@ -1,5 +1,86 @@
# Deep Learning Streamer (DL Streamer) Pipeline Framework Release Notes

## Deep Learning Streamer (DL Streamer) Pipeline Framework Release 2025.2.0

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is a streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations using the Intel® Distribution of OpenVINO™ Toolkit Inference Engine backend across Intel® architecture: CPU, discrete GPU, integrated GPU, and NPU.
The complete solution leverages:

- Open source GStreamer\* framework for pipeline management
- GStreamer\* plugins for input and output, such as media files and real-time streaming from a camera or network
- Video decode and encode plugins, either CPU-optimized plugins or GPU-accelerated plugins based on VAAPI
- Deep Learning models converted from training frameworks such as TensorFlow\*, Caffe\*, etc.
- The following elements in the Pipeline Framework repository:

| Element | Description |
|---|---|
| [gvadetect](./docs/source/elements/gvadetect.md) | Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4-v11, MobileNet SSD, Faster-RCNN, etc. Outputs the ROI for detected objects. |
| [gvaclassify](./docs/source/elements/gvaclassify.md) | Performs object classification. Accepts the ROI as an input and outputs classification results with the ROI metadata. |
| [gvainference](./docs/source/elements/gvainference.md) | Runs deep learning inference on a full frame or ROI using any model with an RGB or BGR input. |
| [gvatrack](./docs/source/elements/gvatrack.md) | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. |
| [gvaaudiodetect](./docs/source/elements/gvaaudiodetect.md) | Performs audio event detection using the AclNet model. |
| [gvagenai](./docs/source/elements/gvagenai.md) | Performs inference with Vision Language Models using OpenVINO™ GenAI. Accepts video and a text prompt as input and outputs a text description. It can be used to generate a text summary of a video. |
| [gvaattachroi](./docs/source/elements/gvaattachroi.md) | Adds user-defined regions of interest to perform inference on, instead of the full frame. |
| [gvafpscounter](./docs/source/elements/gvafpscounter.md) | Measures frames per second across multiple streams in a single process. |
| [gvametaaggregate](./docs/source/elements/gvametaaggregate.md) | Aggregates inference results from multiple pipeline branches. |
| [gvametaconvert](./docs/source/elements/gvametaconvert.md) | Converts the metadata structure to the JSON format. |
| [gvametapublish](./docs/source/elements/gvametapublish.md) | Publishes the JSON metadata to MQTT or Kafka message brokers or files. |
| [gvapython](./docs/source/elements/gvapython.md) | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
| [gvarealsense](./docs/source/elements/gvarealsense.md) | Provides integration with Intel RealSense cameras, enabling video and depth stream capture for use in GStreamer pipelines. |
| [gvawatermark](./docs/source/elements/gvawatermark.md) | Overlays the metadata on the video frame to visualize the inference results. |
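
These elements compose into standard GStreamer pipelines. As an illustrative sketch only (the input file, model path, and device below are placeholders, not assets shipped with DL Streamer), a minimal detection pipeline could look like this:

```sh
# Minimal detection + visualization sketch; "input.mp4" and
# "detection_model.xml" are placeholders for your own media and model.
gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! \
    gvadetect model=detection_model.xml device=CPU ! \
    gvawatermark ! videoconvert ! autovideosink sync=false
```

The same structure extends to the other elements above, for example inserting gvatrack after gvadetect, or replacing the video sink with gvametaconvert and gvametapublish to emit JSON metadata.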

For details on supported platforms, refer to [System Requirements](./get_started/system_requirements.md).
To install Pipeline Framework from the prebuilt binaries or Docker\*, or to build the binaries from the open source, refer to the [Intel® DL Streamer Pipeline Framework installation guide](./get_started/install/install_guide_index.md).
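
As a quick orientation only, a container-based check might look like the sketch below; the image name, tag, and flags are assumptions based on the public `intel/dlstreamer` Docker Hub image, so defer to the installation guide for the exact commands for your platform.

```sh
# Assumed image name/tag; see the installation guide for supported tags.
docker pull intel/dlstreamer:latest

# --device /dev/dri exposes GPU render nodes for VA-API decode and GPU inference;
# gst-inspect-1.0 simply verifies that the DL Streamer elements are registered.
docker run -it --rm --device /dev/dri intel/dlstreamer:latest \
    gst-inspect-1.0 gvadetect
```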

### New in this Release

| Component | Description |
|---|---|
| CPU/GPU configurations | Fixed a segmentation fault and early exit in testing scenarios with mixed GPU/CPU device combinations. |
| Documentation | Updated the documentation for the latency tracer. |
| DeepSORT | Fixed a DeepSORT feature performance issue. |
| Motion detection | Fixed low performance of the motion detection feature. |
| NPU/CPU | Fixed an issue where NPU inference required inefficient CPU color processing. |
| gvawatermark/gvametaconvert/gvaclassify | Fixed memory leaks on Windows OS. |
| License Plate Recognition | Fixed a sporadic hang in the license-plate-recognition sample on GPU on LNL. |
| Model-proc | Improved the model-proc check logic for the VA backend. |
| Video Analytics | Fixed an issue with service crashes. |
| gvagenai | Enabled the memory mapper and added support for prompt-path. |
| gvawatermark | Fixed a keypoints metadata processing issue. |
| gvarealsense | Fixed a missing element in the DL Streamer image. |
| Arc B580 and Flex 170 | Enabled the License Plate Recognition sample on Arc B580 and Flex 170. |
| General | Fixed an issue where the vacompositor scale-method option did not take effect. |
| Documentation | Fixed a bug in the installation guide. |
| Models | Fixed the "Model quantization runtime does not match." warning for older models. |
| MQTT | Fixed the connection to MQTT. |
| BMG/NPU | Fixed support for iGPU, BMG, and NPU. |
| BMG/latency | Fixed an issue with scheduling-policy=latency for BMG NX2. |
| gvapython | Fixed an issue with identically named Python modules used by gvapython. |
| Sample | Fixed an issue with the draw_face_attributes sample (C++) on TGL Ubuntu 24. |
| Pose estimation | Fixed wrong pose estimation on ARL GPU with yolo11s-pose. |
| ARL | Fixed inconsistent timestamps in the vehicle_pedestrian_tracking sample on ARL. |
| va-surface | Verified that 2nd and 3rd dGPUs (B580) are paired and working with va-surface sharing. |
| License Plate Recognition | Fixed an application crash. |
| ARL | Enabled YOLOv11 INT8 for the ARL platform. |
| model-instance-id | Fixed hangs for pipelines with model-instance-id configured. |
| qsvh264dec | Fixed the missing 'qsvh264dec' element in Ubuntu 24 Docker images. |
| GETI2.7 | Enabled support for GETI2.7 detection models on LNL. |
| GETI2.7 | Enabled support for GETI2.7 va-surface-sharing on GPU on MTL. |

### Known Issues

| Issue | Description |
|---|---|
| VAAPI memory with `decodebin` | If you use `decodebin` together with the `vaapi-surface-sharing` preprocessing backend, set a caps filter with `"video/x-raw(memory:VASurface)"` after `decodebin` to avoid pipeline initialization issues (see the example pipeline after this table). |
| Artifacts on `sycl_meta_overlay` | Running inference results visualization on GPU via `sycl_meta_overlay` may produce partially drawn bounding boxes and labels. |
| Preview Architecture 2.0 Samples | Preview Arch 2.0 samples have known issues with inference results. |
| Sporadic hang on `vehicle_pedestrian_tracking_20_cpu` sample | Running this sample on a Tiger Lake CPU may lead to a sporadic hang at 99.9% of video processing. As a workaround, rerun the sample or use GPU instead. |
| Simplified installation process for option 2 via script | In certain configurations, users may encounter visible errors. |
| Error when using legacy YoloV5 models: "Dynamic resize: Model width dimension shall be static" | To avoid the issue, modify `samples/download_public_models.sh` by inserting the following snippet at lines 273 and 280: |
| | python3 - <<EOF "${MODEL_NAME}"<br>import sys, os<br>from openvino.runtime import Core<br>from openvino.runtime import save_model<br>model_name = sys.argv[1]<br>core = Core()<br>os.rename(f"{model_name}_openvino_model", f"{model_name}_openvino_modelD")<br>model = core.read_model(f"{model_name}_openvino_modelD/{model_name}.xml")<br>model.reshape([-1, 3, 640, 640]) |
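
For the `decodebin` with `vaapi-surface-sharing` issue above, a pipeline with the explicit caps filter might look like the following sketch; the input file, model path, and device are placeholders.

```sh
# Workaround sketch: force VASurface memory caps right after decodebin.
# "input.mp4" and "detection_model.xml" are placeholders.
gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! \
    "video/x-raw(memory:VASurface)" ! \
    gvadetect model=detection_model.xml device=GPU pre-process-backend=vaapi-surface-sharing ! \
    fakesink
```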

## Deep Learning Streamer (DL Streamer) Pipeline Framework Release 2025.1.2

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is a streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations using the Intel® Distribution of OpenVINO™ Toolkit Inference Engine backend across Intel® architecture: CPU, discrete GPU, integrated GPU, and NPU.
