From ff5ec4876454c6c7200baaf0bb632460b29b7b42 Mon Sep 17 00:00:00 2001 From: Iwawi <160403457+Iwawi@users.noreply.github.com> Date: Sun, 30 Nov 2025 19:45:51 +0100 Subject: [PATCH 1/8] DOCS add link blocks to index files.md Update README.md [DOCS] add link blocks in metro [DOCS] translate to md [DOCS] add link blocks, pass 1 translate to md Remove defunct rst after md migration [DOCS] metro add link blocks [DOCS] metro add link blocks format fix [DOCS] educ add link blocks, pass 1 translate to md [DOCS] manuf add link blocks --- .../smart-classroom/docs/user-guide/index.md | 33 ++++++ .../smart-classroom/docs/user-guide/index.rst | 20 ---- .../docs/user-guide/{index.rst => index.md} | 72 ++++++------ .../user-guide/weld-defect-detection/index.md | 2 +- .../weld-anomaly-detection/index.md | 10 +- .../wind-turbine-anomaly-detection/index.md | 9 +- .../{index.rst => index.md} | 55 +++++---- .../{index.rst => index.md} | 52 +++++---- .../weld-porosity/{index.rst => index.md} | 52 +++++---- .../{index.rst => index.md} | 54 +++++---- .../image-based-video-search/docs/toc.md | 8 ++ .../image-based-video-search/docs/toc.rst | 5 - .../docs/user-guide/{index.rst => index.md} | 65 +++++------ .../README.md | 1 + .../docs/user-guide/index.md | 56 ++++++---- .../smart-nvr/docs/user-guide/index.md | 15 ++- .../docs/user-guide/{index.rst => index.md} | 104 +++++++++--------- 17 files changed, 347 insertions(+), 266 deletions(-) create mode 100644 education-ai-suite/smart-classroom/docs/user-guide/index.md delete mode 100644 education-ai-suite/smart-classroom/docs/user-guide/index.rst rename manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/{index.rst => index.md} (74%) rename manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/{index.rst => index.md} (72%) rename manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/{index.rst => index.md} (71%) rename 
manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/{index.rst => index.md} (72%) rename manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/{index.rst => index.md} (73%) create mode 100644 metro-ai-suite/image-based-video-search/docs/toc.md delete mode 100644 metro-ai-suite/image-based-video-search/docs/toc.rst rename metro-ai-suite/image-based-video-search/docs/user-guide/{index.rst => index.md} (70%) rename metro-ai-suite/visual-search-question-and-answering/docs/user-guide/{index.rst => index.md} (54%) diff --git a/education-ai-suite/smart-classroom/docs/user-guide/index.md b/education-ai-suite/smart-classroom/docs/user-guide/index.md new file mode 100644 index 000000000..2449816b0 --- /dev/null +++ b/education-ai-suite/smart-classroom/docs/user-guide/index.md @@ -0,0 +1,33 @@ +# Smart Classroom + + + +The Smart Classroom project is a modular, extensible framework designed to process and summarize educational content using advanced AI models. It supports transcription, summarization, and future capabilities like video understanding and real-time analysis. 
+ +The main features are as follows: + +- **Audio transcription** with ASR models (e.g., Whisper, Paraformer) +- **Summarization** using powerful LLMs (e.g., Qwen, LLaMA) +- **Plug-and-play architecture** for integrating new ASR and LLM models +- **API-first design** ready for frontend integration +- **Extensible roadmap** for real-time streaming, diarization, translation, and video analysis + + diff --git a/education-ai-suite/smart-classroom/docs/user-guide/index.rst b/education-ai-suite/smart-classroom/docs/user-guide/index.rst deleted file mode 100644 index 038f6948c..000000000 --- a/education-ai-suite/smart-classroom/docs/user-guide/index.rst +++ /dev/null @@ -1,20 +0,0 @@ -Smart Classroom -============================================ -The Smart Classroom project is a modular, extensible framework designed to process and summarize educational content using advanced AI models. It supports transcription, summarization, and future capabilities like video understanding and real-time analysis. - -The main features are as follows: - -- **Audio file processing and transcription** with ASR models (e.g., Whisper, Paraformer) -- **Summarization** using powerful LLMs (e.g., Qwen, LLaMA) -- **Plug-and-play architecture** for integrating new ASR and LLM models -- **API-first design** ready for frontend integration -- Ready-to-extend for real-time streaming, diarization, translation, and video analysis - -.. 
toctree:: - :hidden: - - system-requirements - how-it-works - get-started - application-flow - release-notes diff --git a/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/index.rst b/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/index.md similarity index 74% rename from manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/index.rst rename to manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/index.md index 47d168659..14af7fe97 100644 --- a/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/index.rst +++ b/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/index.md @@ -1,5 +1,15 @@ -HMI Augmented Worker -============================================ +# HMI Augmented Worker + + The HMI Augmented Worker is a RAG enabled HMI application deployed on Type-2 hypervisors. Deploying RAG-enabled HMI applications in a Type-2 hypervisor setup allows flexible and @@ -8,10 +18,10 @@ a single physical machine. In this architecture, the HMI application operates within a Windows® virtual machine managed by a Type-2 hypervisor such as -`EMT `__. +[EMT](https://github.com/open-edge-platform/edge-microvisor-toolkit). The Retrieval-Augmented Generation (RAG) pipeline and supporting AI services are deployed natively on a host system, which is EMT in this implementation. -`Chat Question-and-Answer Core `__ +[Chat Question-and-Answer Core](https://github.com/open-edge-platform/edge-ai-libraries/tree/release-2025.2.0/sample-applications/chat-question-and-answer-core) provides the RAG capability. This separation ensures robust isolation between the HMI and AI components, enabling independent scaling, maintenance, and updates. The setup leverages the strengths of both @@ -29,15 +39,13 @@ productivity for machine operators. In this sample application, the focus is on an RAG pipeline in a Type-2 Hypervisor-based setup. There is no reference HMI used and the user is expected to do the HMI integration using the RAG pipeline APIs provided. 
-How it works -############ +## How it works This section highlights the high-level architecture of the sample application. -High-Level Architecture -+++++++++++++++++++++++ +### High-Level Architecture -The system has a RAG pipeline reusing ``Chat Question and Answer Core`` application +The system has a RAG pipeline reusing the `Chat Question and Answer Core` application running on the host alongside a typical HMI application which is executing on the Windows® Guest VM (virtual machine). A knowledge base is initialized by using the contents from a pre-configured folder. The folder contains the knowledge base like user @@ -51,12 +59,9 @@ and runs independently from the HMI application. The HMI application is responsi providing the required interface along with associated user experience to enable the operator to access this knowledge base. +![HMI augmented worker architecture diagram](./_images/hmi-augmented-worker-architecture.png) -.. image:: ./_images/hmi-augmented-worker-architecture.png - :alt: HMI Augmented Worker Architecture Diagram - -Chat Question-and-Answer Core (ChatQnA Core) -++++++++++++++++++++++++++++++++++++++++++++ +### Chat Question-and-Answer Core (ChatQnA Core) The 'ChatQnA Core' sample application serves as a basic Retrieval Augmented Generation (RAG) pipeline, allowing users to pose questions and obtain answers, even from their @@ -64,14 +69,13 @@ private data corpus. This sample application illustrates the construction of RAG It is designed for minimal memory usage, being developed as a single, monolithic application with the complete RAG pipeline integrated into one microservice. -The 'ChatQnA Core` application should be setup on the host system. For further details, -visit `Chat Question-and-Answer Core Sample Application Overview `__. +The `ChatQnA Core` application should be set up on the host system. 
For further details, +visit [Chat Question-and-Answer Core Sample Application Overview](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/sample-applications/chat-question-and-answer-core/docs/user-guide/overview.md). The application is used as is without any changes. The configurable parameters like the LLM model, Embedding model, Reranker model, or Retriever model are setup based on the HMI application requirement. -File Watcher Service -++++++++++++++++++++ +### File Watcher Service The File Watcher Service runs alongside with HMI application on the Windows environment, consistently observing file system activities like creation, modification, and deletion. @@ -80,11 +84,9 @@ When changes are detected, it sends the pertinent file data over the network to Retrieval-Augmented Generation (RAG) workflows. The watcher service logic is shown in the following flow diagram: -.. image:: ./_images/file-watcher-implementation-logic.png - :alt: File Watcher Service Implementation Logic Flow +![file watcher service implementation logic flow](./_images/file-watcher-implementation-logic.png) -Human Machine Interface(HMI) Application -++++++++++++++++++++++++++++++++++++++++ +### Human Machine Interface (HMI) Application A Human-Machine Interface(HMI) can vary depending on the use case or the creator. While HMIs generally serve as interface connecting users to machines, systems, or @@ -96,17 +98,19 @@ an accurate summary to state that this sample application illustrates how the `C RAG pipeline can be executed in a Type-2 Hypervisor setup enabling applications like HMI to benefit from it. -Supporting Resources -#################### +## Supporting Resources For more comprehensive guidance on beginning, see the -:doc:`Getting Started Guide <./get-started>`. - -.. toctree:: - :hidden: - - system-requirements - get-started - release-notes - how-to-build-from-source - Source Code +[Getting Started Guide](./get-started). 
+ + diff --git a/manufacturing-ai-suite/industrial-edge-insights-multimodal/docs/user-guide/weld-defect-detection/index.md b/manufacturing-ai-suite/industrial-edge-insights-multimodal/docs/user-guide/weld-defect-detection/index.md index 6bc171f6e..7809e53a8 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-multimodal/docs/user-guide/weld-defect-detection/index.md +++ b/manufacturing-ai-suite/industrial-edge-insights-multimodal/docs/user-guide/weld-defect-detection/index.md @@ -11,7 +11,7 @@ Let's discuss how this architecture translates to data flow in the weld defect d ### 1. **Weld Data Simulator** -The Weld Data Simulator uses the sets of time synchronized .avi and .csv files from the `edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-multimodal/weld-data-simulator/simulation-data/`, subset of test dataset coming from [Intel_Robotic_Welding_Multimodal_Dataset](https://huggingface.co/datasets/amr-lopezjos/Intel_Robotic_Welding_Multimodal_Dataset). +The Weld Data Simulator uses sets of time-synchronized .avi and .csv files from `edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-multimodal/weld-data-simulator/simulation-data/`, a subset of the test dataset from [Intel_Robotic_Welding_Multimodal_Dataset](https://huggingface.co/datasets/amr-lopezjos/Intel_Robotic_Welding_Multimodal_Dataset). It ingests the .avi files as RTSP streams via the **mediamtx** server. This enables real-time video ingestion, simulating camera feeds for weld defect detection. Similarly, it ingests the .csv files as data points into **Telegraf** using the **MQTT** protocol. 
diff --git a/manufacturing-ai-suite/industrial-edge-insights-time-series/docs/user-guide/weld-anomaly-detection/index.md b/manufacturing-ai-suite/industrial-edge-insights-time-series/docs/user-guide/weld-anomaly-detection/index.md index b8a865afc..6ffe58042 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-time-series/docs/user-guide/weld-anomaly-detection/index.md +++ b/manufacturing-ai-suite/industrial-edge-insights-time-series/docs/user-guide/weld-anomaly-detection/index.md @@ -1,5 +1,13 @@ # Weld Anomaly Detection + + This sample app demonstrates how AI-driven analytics enable edge devices to monitor weld quality. It detects anomalous weld patterns and alerts operators for timely intervention, ensuring proactive maintenance, safety, and operational efficiency. No more failures @@ -81,7 +89,6 @@ detect the anomalous power generation data points relative to wind speed. **Note**: Please note, CatBoost models doesn't run on Intel GPUs. - ##### **`tick_scripts/`** The TICKScript `weld_anomaly_detector.tick` determines processing of the input data coming in. @@ -92,4 +99,3 @@ By default, it is configured to publish the alerts to **MQTT**. The `weld_anomaly_detector.cb` is a model built using the CatBoostClassifier Algo of CatBoost ML library. - diff --git a/manufacturing-ai-suite/industrial-edge-insights-time-series/docs/user-guide/wind-turbine-anomaly-detection/index.md b/manufacturing-ai-suite/industrial-edge-insights-time-series/docs/user-guide/wind-turbine-anomaly-detection/index.md index fb636df2f..c5d0cd514 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-time-series/docs/user-guide/wind-turbine-anomaly-detection/index.md +++ b/manufacturing-ai-suite/industrial-edge-insights-time-series/docs/user-guide/wind-turbine-anomaly-detection/index.md @@ -1,5 +1,13 @@ # Wind Turbine Anomaly Detection + + This sample app demonstrates a time series use case by detecting anomalous power generation patterns in wind turbines, relative to wind speed. 
By identifying deviations, it helps optimize maintenance schedules and prevent potential turbine failures, enhancing @@ -11,7 +19,6 @@ If you want to start working with it, instead, check out the [Get Started Guide](../get-started.md) or [How-to Guides](../how-to-guides/index.md) for Time-series applications. - ## App Architecture As seen in the following architecture diagram, the sample app at a high-level comprises of data simulators(can act as data destinations if configured) - these in the real world would be the physical devices, the generic Time Series AI stack based on **TICK Stack** comprising of Telegraf, InfluxDB, Time Series Analytics microservice using Kapacitor and Grafana. diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.rst b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.md similarity index 72% rename from manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.rst rename to manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.md index 5f9abf859..a142faff6 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.rst +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.md @@ -1,10 +1,19 @@ -Pallet Defect Detection -============================== +# Pallet Defect Detection + + Automated quality control with AI-driven vision systems. -Overview -######## +## Overview This Sample Application enables real-time pallet condition monitoring by running inference workflows across multiple AI models. 
It connects multiple video streams from warehouse @@ -12,8 +21,7 @@ cameras to AI-powered pipelines, all operating efficiently on a single industria This solution enhances logistics efficiency and inventory management by detecting defects before they impact operations. -How It Works -############ +## How It Works This sample application consists of the following microservices: DL Streamer Pipeline Server, Model Registry Microservice(MRaaS), MediaMTX server, @@ -31,39 +39,35 @@ be seen on Prometheus UI. Any desired AI model from the Model Registry Microserv (which can interact with Postgres, Minio and Geti Server for getting the model) can be pulled into DL Streamer Pipeline Server and used for inference in the sample application. -.. figure:: ./images/industrial-edge-insights-vision-architecture.drawio.svg - :alt: Architecture and high-level representation of the flow of data through the architecture - - Figure 1: Architecture diagram +![architecture and high-level representation of the flow of data through the architecture](./images/industrial-edge-insights-vision-architecture.drawio.svg) This sample application is built with the following Intel Edge AI Stack Microservices: -- `DL Streamer Pipeline Server `__ +- [DL Streamer Pipeline Server](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/dlstreamer-pipeline-server/index.html) is an interoperable containerized microservice based on Python for video ingestion and deep learning inferencing functions. -- `Model Registry Microservice `__ +- [Model Registry Microservice](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/model-registry/index.html) provides a centralized repository that facilitates the management of AI models It also consists of the below Third-party microservices: -- `Nginx `__ +- [Nginx](https://hub.docker.com/_/nginx) is a high-performance web server and reverse proxy that provides TLS termination and unified HTTPS access. 
-- `MediaMTX Server `__ +- [MediaMTX Server](https://hub.docker.com/r/bluenviron/mediamtx) is a real-time media server and media proxy that allows to publish webrtc stream. -- `Coturn Server `__ +- [Coturn Server](https://hub.docker.com/r/coturn/coturn) is a media traffic NAT traversal server and gateway. -- `Open telemetry Collector `__ +- [Open telemetry Collector](https://hub.docker.com/r/otel/opentelemetry-collector-contrib) is a set of receivers, exporters, processors, connectors for Open Telemetry. -- `Prometheus `__ +- [Prometheus](https://hub.docker.com/r/prom/prometheus) is a systems and service monitoring system used for viewing Open Telemetry. -- `Postgres `__ +- [Postgres](https://hub.docker.com/_/postgres) is object-relational database system that provides reliability and data integrity. -- `Minio `__ +- [Minio](https://hub.docker.com/r/minio/minio) is high performance object storage that is API compatible with Amazon S3 cloud storage service. -Features -######## +## Features This sample application offers the following features: @@ -73,9 +77,9 @@ This sample application offers the following features: - Interconnected warehouses deliver analytics for quick and informed tracking and decision making. - -.. 
toctree:: - :hidden: + diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.rst b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.md similarity index 71% rename from manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.rst rename to manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.md index 9ad6c3311..e4ab5134b 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.rst +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.md @@ -1,10 +1,19 @@ -PCB Anomaly Detection -============================== +# PCB Anomaly Detection + + Automated quality control with AI-driven vision systems. -Overview -######## +## Overview This Sample Application enables real-time anomaly detection in printed circuit boards (PCB) by running inference workflows across multiple AI models. It connects multiple @@ -12,8 +21,7 @@ video streams from different cameras to AI-powered pipelines, all operating effi on a single industrial PC. This solution improves PCB production and compliance by anomalies before they can impact operations. -How It Works -############ +## How It Works This sample application consists of the following microservices: DL Streamer Pipeline Server, Model Registry Microservice(MRaaS), MediaMTX server, @@ -31,39 +39,35 @@ can be seen on Prometheus UI. Any desired AI model from the Model Registry Micro (which can interact with Postgres, Minio and Geti Server for getting the model) can be pulled into DL Streamer Pipeline Server and used for inference in the sample application. -.. 
figure:: ./images/industrial-edge-insights-vision-architecture.drawio.svg - :alt: Architecture and high-level representation of the flow of data through the architecture - - Figure 1: Architecture diagram +![architecture and high-level representation of the flow of data through the architecture](./images/industrial-edge-insights-vision-architecture.drawio.svg) This sample application is built with the following Intel Edge AI Stack Microservices: -- `DL Streamer Pipeline Server `__ +- [DL Streamer Pipeline Server](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/dlstreamer-pipeline-server/index.html) is an interoperable containerized microservice based on Python for video ingestion and deep learning inferencing functions. -- `Model Registry Microservice `__ +- [Model Registry Microservice](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/model-registry/index.html) provides a centralized repository that facilitates the management of AI models It also consists of the below Third-party microservices: -- `Nginx `__ +- [Nginx](https://hub.docker.com/_/nginx) is a high-performance web server and reverse proxy that provides TLS termination and unified HTTPS access. -- `MediaMTX Server `__ +- [MediaMTX Server](https://hub.docker.com/r/bluenviron/mediamtx) is a real-time media server and media proxy that allows to publish webrtc stream. -- `Coturn Server `__ +- [Coturn Server](https://hub.docker.com/r/coturn/coturn) is a media traffic NAT traversal server and gateway. -- `Open telemetry Collector `__ +- [Open telemetry Collector](https://hub.docker.com/r/otel/opentelemetry-collector-contrib) is a set of receivers, exporters, processors, connectors for Open Telemetry. -- `Prometheus `__ +- [Prometheus](https://hub.docker.com/r/prom/prometheus) is a systems and service monitoring system used for viewing Open Telemetry. 
-- `Postgres `__ +- [Postgres](https://hub.docker.com/_/postgres) is object-relational database system that provides reliability and data integrity. -- `Minio `__ +- [Minio](https://hub.docker.com/r/minio/minio) is high performance object storage that is API compatible with Amazon S3 cloud storage service. -Features -######## +## Features This sample application offers the following features: @@ -71,7 +75,8 @@ This sample application offers the following features: - AI-assisted anomaly detection in PCBs. - On-premise data processing for data privacy and efficient use of bandwidth. -.. toctree:: + diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/index.rst b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/index.md similarity index 72% rename from manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/index.rst rename to manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/index.md index ed2bbef0d..8d094cc44 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/index.rst +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/index.md @@ -1,10 +1,19 @@ -Weld Porosity Detection -============================== +# Weld Porosity Detection + + Prevent defects in real time with AI-powered monitoring. -Overview -######## +## Overview AI and machine vision enable real-time detection of welding defects, ensuring immediate corrective action before issues escalate. By leveraging the right camera and computing @@ -12,8 +21,7 @@ hardware, a trained AI model continuously monitors the weld, halting the process the moment a defect is detected. Deep learning AI processes video data at frame rates far beyond human capability, delivering unmatched precision and reliability. 
-How It Works -############ +## How It Works This sample application consists of the following microservices: DL Streamer Pipeline Server, Model Registry Microservice(MRaaS), MediaMTX server, @@ -32,39 +40,35 @@ Model Registry Microservice (which can interact with Postgres, Minio and Geti Se getting the model) can be pulled into DL Streamer Pipeline Server and used for inference in the sample application. -.. figure:: ./images/industrial-edge-insights-vision-architecture.drawio.svg - :alt: Architecture and high-level representation of the flow of data through the architecture - - Figure 1: Architecture diagram +![architecture and high-level representation of the flow of data through the architecture](./images/industrial-edge-insights-vision-architecture.drawio.svg) This sample application is built with the following Intel Edge AI Stack Microservices: -- `DL Streamer Pipeline Server `__ +- [DL Streamer Pipeline Server](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/dlstreamer-pipeline-server/index.html) is an interoperable containerized microservice based on Python for video ingestion and deep learning inferencing functions. -- `Model Registry Microservice `__ +- [Model Registry Microservice](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/model-registry/index.html) provides a centralized repository that facilitates the management of AI models It also consists of the below Third-party microservices: -- `Nginx `__ +- [Nginx](https://hub.docker.com/_/nginx) is a high-performance web server and reverse proxy that provides TLS termination and unified HTTPS access. -- `MediaMTX Server `__ +- [MediaMTX Server](https://hub.docker.com/r/bluenviron/mediamtx) is a real-time media server and media proxy that allows to publish webrtc stream. -- `Coturn Server `__ +- [Coturn Server](https://hub.docker.com/r/coturn/coturn) is a media traffic NAT traversal server and gateway. 
-- `Open telemetry Collector `__ +- [Open telemetry Collector](https://hub.docker.com/r/otel/opentelemetry-collector-contrib) is a set of receivers, exporters, processors, connectors for Open Telemetry. -- `Prometheus `__ +- [Prometheus](https://hub.docker.com/r/prom/prometheus) is a systems and service monitoring system used for viewing Open Telemetry. -- `Postgres `__ +- [Postgres](https://hub.docker.com/_/postgres) is object-relational database system that provides reliability and data integrity. -- `Minio `__ +- [Minio](https://hub.docker.com/r/minio/minio) is high performance object storage that is API compatible with Amazon S3 cloud storage service. -Features -######## +## Features This sample application offers the following features: @@ -74,7 +78,8 @@ This sample application offers the following features: - Interconnected welding setups deliver analytics for quick and informed tracking and decision making. -.. toctree:: + diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/index.rst b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/index.md similarity index 73% rename from manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/index.rst rename to manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/index.md index 60de33beb..bd0423e31 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/index.rst +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/index.md @@ -1,10 +1,19 @@ -Worker Safety Gear Detection -============================== +# Worker Safety Gear Detection + + Automated quality control with AI-driven vision systems. 
-Overview -######## +## Overview This Sample Application enables real-time safety gear monitoring of workers by running inference workflows across multiple AI models. It connects multiple video streams from @@ -12,8 +21,7 @@ construction site cameras to AI-powered pipelines, all operating efficiently on industrial PC. This solution improves construction site safety and compliance by detecting safety gear related risks before they can impact operations. -How It Works -############ +## How It Works This sample application consists of the following microservices: DL Streamer Pipeline Server, Model Registry Microservice(MRaaS), MediaMTX server, @@ -32,41 +40,35 @@ Model Registry Microservice (which can interact with Postgres, Minio and Geti Se getting the model) can be pulled into DL Streamer Pipeline Server and used for inference in the sample application. -.. figure:: ./images/industrial-edge-insights-vision-architecture.drawio.svg - :alt: Architecture and high-level representation of the flow of data through the architecture - - - Figure 1: Architecture diagram +![architecture and high-level representation of the flow of data through the architecture](./images/industrial-edge-insights-vision-architecture.drawio.svg) This sample application is built with the following Intel Edge AI Stack Microservices: -- `DL Streamer Pipeline Server `__ +- [DL Streamer Pipeline Server](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/dlstreamer-pipeline-server/index.html) is an interoperable containerized microservice based on Python for video ingestion and deep learning inferencing functions. 
-- `Model Registry Microservice `__ +- [Model Registry Microservice](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/model-registry/index.html) provides a centralized repository that facilitates the management of AI models It also consists of the below Third-party microservices: -- `Nginx `__ +- [Nginx](https://hub.docker.com/_/nginx) is a high-performance web server and reverse proxy that provides TLS termination and unified HTTPS access. -- `MediaMTX Server `__ +- [MediaMTX Server](https://hub.docker.com/r/bluenviron/mediamtx) is a real-time media server and media proxy that allows to publish webrtc stream. -- `Coturn Server `__ +- [Coturn Server](https://hub.docker.com/r/coturn/coturn) is a media traffic NAT traversal server and gateway. -- `Open telemetry Collector `__ +- [Open telemetry Collector](https://hub.docker.com/r/otel/opentelemetry-collector-contrib) is a set of receivers, exporters, processors, connectors for Open Telemetry. -- `Prometheus `__ +- [Prometheus](https://hub.docker.com/r/prom/prometheus) is a systems and service monitoring system used for viewing Open Telemetry. -- `Postgres `__ +- [Postgres](https://hub.docker.com/_/postgres) is object-relational database system that provides reliability and data integrity. -- `Minio `__ +- [Minio](https://hub.docker.com/r/minio/minio) is high performance object storage that is API compatible with Amazon S3 cloud storage service. - -Features -######## +## Features This sample application offers the following features: @@ -77,7 +79,8 @@ This sample application offers the following features: - Interconnected construction site cameras deliver analytics for quick and informed tracking and decision making. -.. 
toctree:: + diff --git a/metro-ai-suite/image-based-video-search/docs/toc.md b/metro-ai-suite/image-based-video-search/docs/toc.md new file mode 100644 index 000000000..da8acc43e --- /dev/null +++ b/metro-ai-suite/image-based-video-search/docs/toc.md @@ -0,0 +1,8 @@ +Image-Based Video Search Sample Application + + \ No newline at end of file diff --git a/metro-ai-suite/image-based-video-search/docs/toc.rst b/metro-ai-suite/image-based-video-search/docs/toc.rst deleted file mode 100644 index 0901749dc..000000000 --- a/metro-ai-suite/image-based-video-search/docs/toc.rst +++ /dev/null @@ -1,5 +0,0 @@ -Image-Based Video Search Sample Application - -.. toctree:: - - user-guide/index \ No newline at end of file diff --git a/metro-ai-suite/image-based-video-search/docs/user-guide/index.rst b/metro-ai-suite/image-based-video-search/docs/user-guide/index.md similarity index 70% rename from metro-ai-suite/image-based-video-search/docs/user-guide/index.rst rename to metro-ai-suite/image-based-video-search/docs/user-guide/index.md index 4d7f61758..7f8822c70 100644 --- a/metro-ai-suite/image-based-video-search/docs/user-guide/index.rst +++ b/metro-ai-suite/image-based-video-search/docs/user-guide/index.md @@ -1,12 +1,20 @@ -Image-Based Video Search Sample Application -=========================================== +# Image-Based Video Search Sample Application + + Performs near real-time analysis and image-based search to detect and retrieve objects of interest in large video datasets. -Overview -######## - +## Overview This sample application lets users search live or recorded camera feeds by providing an image and view matching objects with location, timestamp, @@ -20,18 +28,13 @@ You can use this foundation to build solutions for diverse use cases, including city infrastructure monitoring and security applications, helping operators quickly locate objects of interest across large video datasets. 
-How it Works -############ +## How it Works The application workflow has three stages: inputs, processing, and outputs. -.. figure:: ./_images/architecture.svg - :alt: Diagram illustrating the components and interactions within the Image-Based Video Search system, including inputs, processing, and outputs. - - Figure 1: Diagram illustrating the components and interactions within the Image-Based Video Search system, including inputs, processing, and outputs. +![architectural diagram](./_images/architecture.svg) -Inputs -###### +## Inputs - Video files or live camera streams (simulated or real time) - User-provided images or images captured from video for search @@ -39,8 +42,7 @@ Inputs The application includes a demonstration video for testing. The video loops continuously and appears in the UI as soon as the application starts. -Processing -########## +## Processing - **Nginx reverse proxy server**: All interactions with the user happen via the Nginx server. It protects the IBVS app by handling SSL/TLS encryption, filtering and validating requests, and blocking direct external access to the app. - **Video analysis with Deep Learning Streamer Pipeline Server and MediaMTX**: @@ -63,26 +65,25 @@ Processing ImageIngestor, processes them with DL Streamer Pipeline Server, and matches them against stored feature vectors in MilvusDB. -Outputs -####### +## Outputs - Matched search results, including metadata, timestamps, confidence scores, and frames -.. figure:: ./_images/imagesearch2.png - :alt: Screenshot of the Image-Based Video Search sample application interface displaying search input and matched results.
+![application interface screenshot](./_images/imagesearch2.png) +*Screenshot of the Image-Based Video Search sample application interface displaying search input and matched results* - Figure 2: Screenshot of the Image-Based Video Search sample application interface displaying search input and matched results + diff --git a/metro-ai-suite/sensor-fusion-for-traffic-management/README.md b/metro-ai-suite/sensor-fusion-for-traffic-management/README.md index be30ee0f7..2f267a2f1 100644 --- a/metro-ai-suite/sensor-fusion-for-traffic-management/README.md +++ b/metro-ai-suite/sensor-fusion-for-traffic-management/README.md @@ -3,6 +3,7 @@ Unlock the future of traffic management with the Intel® software reference implementation of the Metro AI Suite Sensor Fusion for Traffic Management. This implementation integrates AI inferencing with sensor fusion technology, utilizing multi-modal sensors such as cameras and radars to deliver unparalleled performance. A traffic management system leveraging the fusion of camera and radar/lidar sensors offers superior accuracy and reliability over camera-only solutions. Cameras capture high-resolution visual data, while radar/lidar sensors precisely measure speed and distance, even under challenging conditions like fog, rain, or darkness. This integration ensures a more robust and comprehensive approach to traffic monitoring and decision-making, enhancing overall system performance and safety. 
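The camera-plus-radar fusion described above can be illustrated with a minimal association sketch. This is not the reference implementation's algorithm; the class names, the azimuth gate, and all numeric values below are hypothetical, and a real system would fuse calibrated detections over time rather than single frames:

```python
from dataclasses import dataclass

@dataclass
class CameraDet:
    azimuth_deg: float  # bearing of the detected object in the camera frame
    label: str          # class from the vision model, e.g. "car"

@dataclass
class RadarDet:
    azimuth_deg: float
    range_m: float
    speed_mps: float

def fuse(cams, radars, gate_deg=5.0):
    """Pair each camera detection with the nearest radar return in azimuth.

    A radar return is accepted only if it lies within gate_deg of the
    camera bearing; the fused track then carries the camera label plus
    the radar-measured range and speed.
    """
    tracks = []
    for cam in cams:
        nearest = min(radars, key=lambda r: abs(r.azimuth_deg - cam.azimuth_deg), default=None)
        if nearest and abs(nearest.azimuth_deg - cam.azimuth_deg) <= gate_deg:
            tracks.append({"label": cam.label,
                           "range_m": nearest.range_m,
                           "speed_mps": nearest.speed_mps})
    return tracks

tracks = fuse(
    [CameraDet(10.0, "car"), CameraDet(40.0, "truck")],
    [RadarDet(11.0, 42.0, 13.9), RadarDet(39.5, 80.0, 22.2)],
)
print(tracks)  # each track combines a camera label with radar range/speed
```

The sketch shows why the combination is more robust than camera-only analytics: the label comes from the camera, while range and speed come from the radar, which keeps measuring in fog, rain, or darkness.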
## Learn More + - [Overview](./docs/user-guide/index.md) - [Get started guide](./docs/user-guide/get-started-guide.md) - [Advanced user guide](./docs/user-guide/advanced-user-guide.md) diff --git a/metro-ai-suite/sensor-fusion-for-traffic-management/docs/user-guide/index.md b/metro-ai-suite/sensor-fusion-for-traffic-management/docs/user-guide/index.md index f49f71ddb..29521abf1 100644 --- a/metro-ai-suite/sensor-fusion-for-traffic-management/docs/user-guide/index.md +++ b/metro-ai-suite/sensor-fusion-for-traffic-management/docs/user-guide/index.md @@ -1,32 +1,42 @@ # Sensor Fusion for Traffic Management + + A multi-modal reference implementation for traffic management, enabling partners to blend camera and radar/lidar sensor inputs to accurately monitor traffic conditions. ## Overview Metro AI Suite Sensor Fusion for Traffic Management is a reference implementation of an AI -system integrated with sensor fusion technology. It utilizes multi-modal sensors such as -cameras and radars/lidars to deliver traffic-management focused on performance, accuracy, and +system integrated with sensor fusion technology. It utilizes multi-modal sensors, such as +cameras and radars/lidars, to deliver traffic-management focused on performance, accuracy, and reliability superseding those of camera-only solutions. Cameras capture high-resolution visual data, while radar/lidar sensors precisely measure speed and distance, even under challenging -conditions such as fog, rain, or darkness. This integration ensures a more robust and +conditions, such as fog, rain, or darkness. This integration ensures a more robust and comprehensive approach to traffic monitoring and decision-making, enhancing overall system performance and safety. 
This sample features multiple pipelines tailored to specific sensor fusion use cases, combining cameras with either radar or lidar: -- One camera paired with one mmWave radar (1C+1R), -- Four cameras paired with four mmWave radars (4C+4R), -- two cameras paired with one mmWave radar (2C+1R), -- sixteen cameras paired with four mmWave radars (16C+4R). -- Two cameras paired with one lidar (2C+1L), -- Four cameras paired with two lidars (4C+2L), -- Twelve cameras paired with two lidars (12C+2L), -- Eight cameras paired with four lidars (8C+4L), -- Twelve cameras paired with four lidars (12C+4L), - +- One camera paired with one mmWave radar (1C+1R) +- Two cameras paired with one mmWave radar (2C+1R) +- Four cameras paired with four mmWave radars (4C+4R) +- Sixteen cameras paired with four mmWave radars (16C+4R) +- Two cameras paired with one lidar (2C+1L) +- Four cameras paired with two lidars (4C+2L) +- Twelve cameras paired with two lidars (12C+2L) +- Eight cameras paired with four lidars (8C+4L) +- Twelve cameras paired with four lidars (12C+4L) ## Key Features @@ -37,20 +47,18 @@ and cost-efficient solution, leverage the Intel-powered Whether you are developing a comprehensive traffic management system or showcasing your hardware platform's capabilities, this reference implementation serves as the perfect foundation. -* Powerful and scalable CPU, built-in GPU (iGPU), dGPU configurations that deliver heterogeneous computing capabilities for sensor fusion-based AI inferencing. -* Low power consumption package with a wide temperature range, compact fanless design, and enhanced vibration resistance. -* Processors designed for industrial and embedded conditions, ensuring high system reliability. -* Optimized software reference implementation based on open-source code to support performance evaluation, rapid prototyping, and quick time-to-market. -* Rugged and compact PC design to withstand harsh in-vehicle environmental conditions. 
+- Powerful and scalable CPU, built-in GPU (iGPU), dGPU configurations that deliver heterogeneous computing capabilities for sensor fusion-based AI inferencing. +- Low power consumption package with a wide temperature range, compact fanless design, and enhanced vibration resistance. +- Processors designed for industrial and embedded conditions, ensuring high system reliability. +- Optimized software reference implementation based on open-source code to support performance evaluation, rapid prototyping, and quick time-to-market. +- Rugged and compact PC design to withstand harsh in-vehicle environmental conditions. ## Benefits -* **Enhanced AI Performance**: Achieve superior AI performance with our recommended optimization techniques, rigorously tested on industry-leading AI models and sensor fusion workloads. -* **Accelerated Time to Market**: Speed up your development process by leveraging our pre-validated SDK and Intel-powered qualified AI Systems, ensuring a quicker path from concept to deployment. -* **Cost Efficiency**: Lower your development costs with royalty-free developer tools and cost-effective hardware platforms, ideal for prototyping, development, and validation of edge AI traffic solutions. -* **Simplified Development**: Reduce complexity with our best-known methods and streamlined approach, making it easier to build an intelligent traffic management system. - - +- **Enhanced AI Performance**: Achieve superior AI performance with our recommended optimization techniques, rigorously tested on industry-leading AI models and sensor fusion workloads. +- **Accelerated Time to Market**: Speed up your development process by leveraging our pre-validated SDK and Intel-powered qualified AI Systems, ensuring a quicker path from concept to deployment. +- **Cost Efficiency**: Lower your development costs with royalty-free developer tools and cost-effective hardware platforms, ideal for prototyping, development, and validation of edge AI traffic solutions. 
+- **Simplified Development**: Reduce complexity with our best-known methods and streamlined approach, making it easier to build an intelligent traffic management system. + The sample application showcases the use of GenAI-powered vision analytics to transform a traditional NVR into a Smart NVR, unlocking advanced insights and automation at the edge. It is designed to help developers understand the architecture, setup, and @@ -65,7 +76,7 @@ The diagram shows the key components of the Smart NVR application. The descripti - [How to Use the Application](./how-to-use-application.md): Explore the application's features and verify its functionality. - [Support and Troubleshooting](./troubleshooting.md): Find solutions to common issues and troubleshooting steps. - + diff --git a/metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.rst b/metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.md similarity index 54% rename from metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.rst rename to metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.md index 756f1f9a5..a0a46bc30 100644 --- a/metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.rst +++ b/metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.md @@ -1,11 +1,20 @@ -Visual Search and QA -==================== +# Visual Search and QA + + Combination of a multi-modal search engine and a visual Q&A assistant, allowing users to add search results as context for more related answers. -Overview -######## +## Overview We deliver a Reference Implementation, named "Visual Search and QA". It is mainly composed of three parts: a multi-modal search engine, a multi-modal visual QnA chatbot which can @@ -13,9 +22,9 @@ answer questions based on the search results, and a frontend web UI which allows users to interact with and examine the search engine and chatbot.
The search engine is equipped with a data preparation microservice and a retriever -microservice. Together they support a typical workflow: images and videos data are -processed and stored into a database, then users can start a query with text description, -the images and videos that fit the description would be found in the database and returned +microservice. Together they support a typical workflow: images and video data are +processed and stored into a database, then users can start a query with a text description, +and the images and videos that fit the description are found in the database and returned to users. The visual QnA chatbot is a large vision language model that can take text and/or visual @@ -25,41 +34,34 @@ questions based on the context. - **Programming Language:** Python -How It Works -############ +## How It Works The high-level architecture is shown below -.. figure:: ./_images/visual_search_qa_design.png - :alt: Architecture +![architecture diagram](./_images/visual_search_qa_design.png) - Figure 1: Architecture Diagram - -Dataprep -++++++++ +### Dataprep The dataprep microservice processes images and videos, extracts their embeddings using the image encoder from the CLIP model, and stores them in a vector database. -Video Processing: ------------------ +#### Video Processing - Extract frames at configurable intervals. -Image/Frame Processing: ------------------------ +#### Image/Frame Processing - Resize, convert colors, normalize, and apply object detection with cropping. -.. note:: - - Object detection and cropping improve retrieval performance for large-scale scene - images (e.g., high-resolution surveillance images with multiple objects). - Since the image encoder input size is 224x224, resizing may render some objects - (e.g., humans, vehicles) unrecognizable. - Object detection and cropping preserve these objects as clear targets in separate - cropped images. Metadata links the original image to its cropped versions. 
- During retrieval, if a cropped image matches, the original image is returned. +> **Note** +> +> Object detection and cropping improve retrieval performance for large-scale scene +> images (e.g., high-resolution surveillance images with multiple objects). +> Since the image encoder input size is 224x224, resizing may render some objects +> (e.g., humans, vehicles) unrecognizable. +> Object detection and cropping preserve these objects as clear targets in separate +> cropped images. Metadata links the original image to its cropped versions. +> During retrieval, if a cropped image matches, the original image is returned. Instead of uploading data, users can specify directories on the host machine as data sources. This approach is more efficient for large datasets, which are common in the @@ -68,59 +70,51 @@ certain access to the server. Then users know where the files are stored on the machine, and can provide the file directory as input so that the microservice can process files one after another or in batches. -Retriever -+++++++++ +### Retriever The retriever microservice consists of a local multi-modal embedding model (same as the dataprep microservice) and a vector DB search engine. -Workflow: --------- +#### Workflow 1. The embedding model generates text embeddings for input descriptions (e.g., "traffic jam"). 2. The search engine searches the vector database for the top-k most similar matches. -Model Serving -+++++++++++++ +### Model Serving Check the -`model serving doc `__ +[model serving doc](https://github.com/open-edge-platform/edge-ai-libraries/tree/release-2025.2.0/microservices) for more details. -Web UI -++++++ +### Web UI -The UI, built with ``streamlit``, allows users to: +The UI, built with `streamlit`, allows users to: - Enter search queries. - View matched results. - Interact with the LVM in a chatbox with upload tools.
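The retriever's two-step workflow (embed the text description, then rank stored vectors by similarity) can be sketched in a few lines. The deterministic stub below merely stands in for the real CLIP encoder and Milvus search engine; every function name and caption is illustrative, not part of the microservice API:

```python
import hashlib

import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    """Stand-in for the CLIP encoder: a deterministic pseudo-embedding."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    vec = np.random.default_rng(seed).normal(size=dim)
    return vec / np.linalg.norm(vec)  # unit norm, so dot product = cosine similarity

def top_k(query: str, index: dict, k: int = 2):
    """Step 1: embed the query text; step 2: rank stored vectors by similarity."""
    q = embed(query)
    scored = [(name, float(vec @ q)) for name, vec in index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# A toy "vector database": captions standing in for indexed images/frames.
index = {name: embed(name) for name in ["traffic jam", "empty road", "parking lot"]}
results = top_k("traffic jam", index)
print(results[0][0])  # the item whose embedding matches the query ranks first
```

In the actual service the stored vectors come from the dataprep microservice (including crops, with metadata mapping a matched crop back to its original image), and the search runs inside the vector database rather than in application code.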
-Visual Search and QA UI Initial Interface: ------------------------------------------- +#### Visual Search and QA UI Initial Interface -.. figure:: ./_images/web_ui.png - :alt: Visual Search and QA UI Init Interface +![initial web UI image](./_images/web_ui.png) - Figure 2: Initial Web UI + Figure 1: Initial Web UI -Visual Search and QA UI Example: --------------------------------- +#### Visual Search and QA UI Example -.. figure:: ./_images/web_ui_res.png - :alt: Visual Search and QA UI Example +![web UI with example](./_images/web_ui_res.png) - Figure 3: Web UI with an example + Figure 2: Web UI with an example -Learn More -########## +## Learn More -- Check the :doc:`System requirements <./system-requirements>`. -- Start with the :doc:`Get Started <./get-started>`. -- Deploy with :doc:`Helm chart <./deploy-with-helm>`. +- Check the [System requirements](./system-requirements). +- Start with the [Get Started](./get-started). +- Deploy with [Helm chart](./deploy-with-helm). -.. toctree:: + From a891c5867d0b61f3877b6c7c7ee390079df0c096 Mon Sep 17 00:00:00 2001 From: Iwawi Date: Mon, 8 Dec 2025 13:32:51 +0100 Subject: [PATCH 2/8] DOCS-fix-toc-port pass 2 --- .../smart-classroom/docs/user-guide/index.md | 6 +-- .../pallet-defect-detection/index.md | 52 +++++++++--------- .../user-guide/pcb-anomaly-detection/index.md | 46 ++++++++-------- .../docs/user-guide/weld-porosity/index.md | 46 ++++++++-------- .../worker-safety-gear-detection/index.md | 54 +++++++++---------- .../image-based-video-search/docs/toc.md | 2 +- .../docs/user-guide/index.md | 16 +++--- 7 files changed, 111 insertions(+), 111 deletions(-) diff --git a/education-ai-suite/smart-classroom/docs/user-guide/index.md b/education-ai-suite/smart-classroom/docs/user-guide/index.md index 2449816b0..981ae3f8b 100644 --- a/education-ai-suite/smart-classroom/docs/user-guide/index.md +++ b/education-ai-suite/smart-classroom/docs/user-guide/index.md @@ -25,9 +25,9 @@ The main features are as follows: :::{toctree} 
:hidden: - system-requirements - get-started - release- +system-requirements +get-started +release- ::: hide_directive--> diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.md index a142faff6..78ea54ede 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.md @@ -81,32 +81,32 @@ This sample application offers the following features: :::{toctree} :hidden: - overview-architecture - system-requirements - get-started - troubleshooting-guide - how-to-change-input-video-source - how-to-deploy-using-helm-charts - how-to-deploy-with-edge-orchestrator - how-to-enable-mlops - how-to-manage-pipelines - how-to-run-multiple-ai-pipelines - how-to-scale-video-resolution - how-to-use-an-ai-model-and-video-file-of-your-own - how-to-use-opcua-publisher - how-to-run-store-frames-in-s3 - how-to-view-telemetry-data - how-to-use-gpu-for-inference - how-to-start-mqtt-publisher - how-to-integrate-balluff-sdk - how-to-install-balluff-sdk-on-host - how-to-integrate-pylon-sdk - how-to-install-pylon-sdk-on-host.md - how-to-benchmark - api-reference - environment-variables - - release_notes/Overview +overview-architecture +system-requirements +get-started +troubleshooting-guide +how-to-change-input-video-source +how-to-deploy-using-helm-charts +how-to-deploy-with-edge-orchestrator +how-to-enable-mlops +how-to-manage-pipelines +how-to-run-multiple-ai-pipelines +how-to-scale-video-resolution +how-to-use-an-ai-model-and-video-file-of-your-own +how-to-use-opcua-publisher +how-to-run-store-frames-in-s3 +how-to-view-telemetry-data +how-to-use-gpu-for-inference +how-to-start-mqtt-publisher +how-to-integrate-balluff-sdk +how-to-install-balluff-sdk-on-host 
+how-to-integrate-pylon-sdk +how-to-install-pylon-sdk-on-host.md +how-to-benchmark +api-reference +environment-variables + +release_notes/Overview ::: hide_directive--> diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.md index e4ab5134b..626212b5b 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.md @@ -77,29 +77,29 @@ This sample application offers the following features: diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/index.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/index.md index 8d094cc44..8f63f4cdb 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/index.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/index.md @@ -80,29 +80,29 @@ This sample application offers the following features: diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/index.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/index.md index bd0423e31..f894b72ab 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/index.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/index.md @@ -81,33 +81,33 @@ This sample application offers the following features: diff --git a/metro-ai-suite/image-based-video-search/docs/toc.md b/metro-ai-suite/image-based-video-search/docs/toc.md index da8acc43e..9585d208c 100644 --- 
a/metro-ai-suite/image-based-video-search/docs/toc.md +++ b/metro-ai-suite/image-based-video-search/docs/toc.md @@ -3,6 +3,6 @@ Image-Based Video Search Sample Application \ No newline at end of file diff --git a/metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.md b/metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.md index a0a46bc30..69cfa9138 100644 --- a/metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.md +++ b/metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.md @@ -115,13 +115,13 @@ The UI, built with `streamlit`, allows users to: From b90310504b3afdc7749bedbdb904a0d4b3ae32d3 Mon Sep 17 00:00:00 2001 From: Iwawi Date: Mon, 8 Dec 2025 13:46:30 +0100 Subject: [PATCH 3/8] DOCS text aligning fix --- education-ai-suite/smart-classroom/docs/user-guide/index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/education-ai-suite/smart-classroom/docs/user-guide/index.md b/education-ai-suite/smart-classroom/docs/user-guide/index.md index 981ae3f8b..5351b11d3 100644 --- a/education-ai-suite/smart-classroom/docs/user-guide/index.md +++ b/education-ai-suite/smart-classroom/docs/user-guide/index.md @@ -15,11 +15,11 @@ The Smart Classroom project is a modular, extensible framework designed to proce The main features are as follows: -- **Audio transcription** with ASR models (e.g., Whisper, Paraformer) +- **Audio file processing and transcription** with ASR models (e.g., Whisper, Paraformer) - **Summarization** using powerful LLMs (e.g., Qwen, LLaMA) - **Plug-and-play architecture** for integrating new ASR and LLM models - **API-first design** ready for frontend integration -- **Extensible roadmap** for real-time streaming, diarization, translation, and video analysis +- Ready-to-extend for real-time streaming, diarization, translation, and video analysis From a165028e2f5a5a567cedbd8ae194119295fcfe26 Mon Sep 17 00:00:00 2001 From: Iwawi Date: Fri, 12 Dec 2025 
12:36:43 +0100 Subject: [PATCH 6/8] [DOCS] fix link --- .../docs/user-guide/pallet-defect-detection/index.md | 3 +-- .../docs/user-guide/pcb-anomaly-detection/index.md | 2 +- 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.md index 78ea54ede..01df858bb 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.md @@ -55,7 +55,7 @@ It also consists of the below Third-party microservices: is a high-performance web server and reverse proxy that provides TLS termination and unified HTTPS access. - [MediaMTX Server](https://hub.docker.com/r/bluenviron/mediamtx) is a real-time media server and media proxy that allows to publish webrtc stream. -- [Coturn Server](https://hub.docker.com/r/coturn/) +- [Coturn Server](https://hub.docker.com/r/coturn/coturn) is a media traffic NAT traversal server and gateway. - [Open telemetry Collector](https://hub.docker.com/r/otel/opentelemetry-collector-contrib) is a set of receivers, exporters, processors, connectors for Open Telemetry. 
@@ -105,7 +105,6 @@ how-to-install-pylon-sdk-on-host.md how-to-benchmark api-reference environment-variables - release_notes/Overview ::: diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.md index 626212b5b..423b03c99 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/index.md @@ -55,7 +55,7 @@ It also consists of the below Third-party microservices: is a high-performance web server and reverse proxy that provides TLS termination and unified HTTPS access. - [MediaMTX Server](https://hub.docker.com/r/bluenviron/mediamtx) is a real-time media server and media proxy that allows to publish webrtc stream. -- [Coturn Server](https://hub.docker.com/r/coturn/) +- [Coturn Server](https://hub.docker.com/r/coturn/coturn) is a media traffic NAT traversal server and gateway. - [Open telemetry Collector](https://hub.docker.com/r/otel/opentelemetry-collector-contrib) is a set of receivers, exporters, processors, connectors for Open Telemetry. 
From 77c83958352c21ca189e7162043adc1cf52eb540 Mon Sep 17 00:00:00 2001 From: Iwawi Date: Fri, 12 Dec 2025 12:45:17 +0100 Subject: [PATCH 7/8] [DOCS] remove defunct toc --- metro-ai-suite/image-based-video-search/docs/toc.md | 8 -------- 1 file changed, 8 deletions(-) delete mode 100644 metro-ai-suite/image-based-video-search/docs/toc.md diff --git a/metro-ai-suite/image-based-video-search/docs/toc.md b/metro-ai-suite/image-based-video-search/docs/toc.md deleted file mode 100644 index 9585d208c..000000000 --- a/metro-ai-suite/image-based-video-search/docs/toc.md +++ /dev/null @@ -1,8 +0,0 @@ -Image-Based Video Search Sample Application - - \ No newline at end of file From 25a9386e1fbf8804dcd1e76ac71aa257265a7663 Mon Sep 17 00:00:00 2001 From: Iwawi Date: Fri, 12 Dec 2025 12:56:53 +0100 Subject: [PATCH 8/8] [DOCS] fixing links --- .../docs/user-guide/index.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.md b/metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.md index 69cfa9138..557ca0f50 100644 --- a/metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.md +++ b/metro-ai-suite/visual-search-question-and-answering/docs/user-guide/index.md @@ -109,19 +109,19 @@ The UI, built with `streamlit`, allows users to: ## Learn More -- Check the [System requirements](./system-requirements). -- Start with the [Get Started](./get-started). -- Deploy with [Helm chart](./deploy-with-helm). +- Check the [System requirements](./system-requirements.md). +- Start with the [Get Started](./get-started.md). +- Deploy with [Helm chart](./deploy-with-helm.md).