35 changes: 35 additions & 0 deletions education-ai-suite/smart-classroom/docs/user-guide/index.md
@@ -0,0 +1,35 @@
# Smart Classroom

<!--hide_directive
<div class="component_card_widget">
<a class="icon_github" href="https://github.com/open-edge-platform/edge-ai-suites/tree/release-2025.2.0/education-ai-suite/smart-classroom">
GitHub project
</a>
<a class="icon_document" href="https://github.com/open-edge-platform/edge-ai-suites/blob/release-2025.2.0/education-ai-suite/smart-classroom/README.md">
Readme
</a>
</div>
hide_directive-->

The Smart Classroom project is a modular, extensible framework designed to process and summarize educational content using advanced AI models. It supports transcription, summarization, and future capabilities like video understanding and real-time analysis.
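
The flow is simple to sketch: an ASR model turns audio into a transcript, and an LLM condenses the transcript into a summary. The following minimal Python sketch illustrates that two-stage pipeline and the plug-and-play idea; the class and function names are assumptions for illustration and do not reflect the project's actual API.

```python
# Illustrative sketch of the transcription + summarization flow.
# Class and function names are assumptions, not the project's actual API.
from dataclasses import dataclass
from typing import Protocol


class ASRModel(Protocol):
    def transcribe(self, audio_path: str) -> str: ...


class Summarizer(Protocol):
    def summarize(self, text: str) -> str: ...


@dataclass
class LectureSummary:
    transcript: str
    summary: str


def process_lecture(audio_path: str, asr: ASRModel, llm: Summarizer) -> LectureSummary:
    """Run the two-stage pipeline: audio -> transcript -> summary."""
    transcript = asr.transcribe(audio_path)  # e.g., a Whisper or Paraformer backend
    summary = llm.summarize(transcript)      # e.g., a Qwen or LLaMA backend
    return LectureSummary(transcript=transcript, summary=summary)
```

Because the orchestration depends only on these two small interfaces, new ASR or LLM backends can be swapped in without touching the pipeline code.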

The main features are as follows:

- **Audio file processing and transcription** with ASR models (e.g., Whisper, Paraformer)
- **Summarization** using powerful LLMs (e.g., Qwen, LLaMA)
- **Plug-and-play architecture** for integrating new ASR and LLM models
- **API-first design** ready for frontend integration
- Ready-to-extend for real-time streaming, diarization, translation, and video analysis

<!--hide_directive
:::{toctree}
:hidden:

system-requirements
how-it-works
get-started
application-flow
release-notes

:::
hide_directive-->
20 changes: 0 additions & 20 deletions education-ai-suite/smart-classroom/docs/user-guide/index.rst

This file was deleted.

@@ -1,5 +1,15 @@
HMI Augmented Worker
============================================
# HMI Augmented Worker

<!--hide_directive
<div class="component_card_widget">
<a class="icon_github" href="https://github.com/open-edge-platform/edge-ai-suites/tree/release-2025.2.0/manufacturing-ai-suite/hmi-augmented-worker">
GitHub project
</a>
<a class="icon_document" href="https://github.com/open-edge-platform/edge-ai-suites/blob/release-2025.2.0/manufacturing-ai-suite/hmi-augmented-worker/README.md">
Readme
</a>
</div>
hide_directive-->

The HMI Augmented Worker is a RAG-enabled HMI application deployed on Type-2 hypervisors.
Deploying RAG-enabled HMI applications in a Type-2 hypervisor setup allows flexible and
a single physical machine.

In this architecture, the HMI application operates within a Windows® virtual machine managed
by a Type-2 hypervisor such as
`EMT <https://github.com/open-edge-platform/edge-microvisor-toolkit>`__.
[EMT](https://github.com/open-edge-platform/edge-microvisor-toolkit).
The Retrieval-Augmented Generation (RAG) pipeline and supporting AI services are deployed
natively on a host system, which is EMT in this implementation.
`Chat Question-and-Answer Core <https://github.com/open-edge-platform/edge-ai-libraries/tree/release-2025.2.0/sample-applications/chat-question-and-answer-core>`__
[Chat Question-and-Answer Core](https://github.com/open-edge-platform/edge-ai-libraries/tree/release-2025.2.0/sample-applications/chat-question-and-answer-core)
provides the RAG capability.
This separation ensures robust isolation between the HMI and AI components, enabling
independent scaling, maintenance, and updates. The setup leverages the strengths of both
productivity for machine operators. In this sample application, the focus is on
a RAG pipeline in a Type-2 Hypervisor-based setup. There is no reference HMI used, and the
user is expected to perform the HMI integration using the RAG pipeline APIs provided.

How it works
############
## How it works

This section highlights the high-level architecture of the sample application.

High-Level Architecture
+++++++++++++++++++++++
### High-Level Architecture

The system has a RAG pipeline reusing ``Chat Question and Answer Core`` application
The system has a RAG pipeline reusing the `Chat Question and Answer Core` application
running on the host alongside a typical HMI application executing on
the Windows® Guest VM (virtual machine). A knowledge base is initialized using the
contents of a pre-configured folder. The folder contains knowledge-base content such as user
and runs independently from the HMI application. The HMI application is responsible for
providing the required interface along with associated user experience to enable
the operator to access this knowledge base.

![HMI augmented worker architecture diagram](./_images/hmi-augmented-worker-architecture.png)

.. image:: ./_images/hmi-augmented-worker-architecture.png
:alt: HMI Augmented Worker Architecture Diagram

Chat Question-and-Answer Core (ChatQnA Core)
++++++++++++++++++++++++++++++++++++++++++++
### Chat Question-and-Answer Core (ChatQnA Core)

The 'ChatQnA Core' sample application serves as a basic Retrieval Augmented Generation
(RAG) pipeline, allowing users to pose questions and obtain answers, even from their
private data corpus. This sample application illustrates the construction of RAG pipelines.
It is designed for minimal memory usage, being developed as a single, monolithic application
with the complete RAG pipeline integrated into one microservice.

The 'ChatQnA Core` application should be setup on the host system. For further details,
visit `Chat Question-and-Answer Core Sample Application Overview <https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/sample-applications/chat-question-and-answer-core/docs/user-guide/overview.md>`__.
The `ChatQnA Core` application should be set up on the host system. For further details,
visit [Chat Question-and-Answer Core Sample Application Overview](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/sample-applications/chat-question-and-answer-core/docs/user-guide/overview.md).
The application is used as is without any changes.
The configurable parameters, such as the LLM model, embedding model, reranker model, or
retriever model, are set up based on the HMI application requirements.
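
As an illustration only, an HMI-side client could query the RAG pipeline over HTTP along the following lines; the endpoint path, port, and payload fields are assumptions, and the ChatQnA Core API reference linked above is the authoritative contract.

```python
# Hypothetical HMI-side call to the ChatQnA Core RAG service on the host.
# Endpoint path, port, and payload fields are assumptions; consult the
# ChatQnA Core API reference for the actual contract.
import requests

RAG_ENDPOINT = "http://192.168.122.1:8080/v1/chatqna"  # example host address as seen from the guest VM


def ask(question: str) -> str:
    """Send an operator question to the RAG pipeline and return the raw answer text."""
    response = requests.post(RAG_ENDPOINT, json={"messages": question}, timeout=60)
    response.raise_for_status()
    return response.text


if __name__ == "__main__":
    print(ask("How do I acknowledge a conveyor over-temperature alarm?"))
```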

File Watcher Service
++++++++++++++++++++
### File Watcher Service

The File Watcher Service runs alongside the HMI application in the Windows environment,
continuously observing file system activities such as creation, modification, and deletion.
When changes are detected, it sends the pertinent file data over the network to
Retrieval-Augmented Generation (RAG) workflows. The watcher service logic is shown in
the following flow diagram:

.. image:: ./_images/file-watcher-implementation-logic.png
:alt: File Watcher Service Implementation Logic Flow
![file watcher service implementation logic flow](./_images/file-watcher-implementation-logic.png)
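
The watcher logic shown above can be sketched with the Python `watchdog` package; the watched folder and the host ingestion URL below are placeholders, and the actual service may batch uploads or filter file types differently.

```python
# Minimal sketch of a file watcher that forwards new or changed documents to a
# document-ingestion endpoint on the host. URL, folder, and field names are assumptions.
import requests
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

INGEST_URL = "http://192.168.122.1:8080/v1/dataprep"   # hypothetical host ingestion endpoint
WATCHED_DIR = r"C:\hmi\knowledge-base"                 # hypothetical knowledge-base folder on the guest


class KnowledgeBaseHandler(FileSystemEventHandler):
    def on_created(self, event):
        self._upload(event)

    def on_modified(self, event):
        self._upload(event)

    def _upload(self, event):
        if event.is_directory:
            return
        # Forward the changed document to the host so the knowledge base can be re-indexed.
        with open(event.src_path, "rb") as fh:
            requests.post(INGEST_URL, files={"files": fh}, timeout=120)


if __name__ == "__main__":
    observer = Observer()
    observer.schedule(KnowledgeBaseHandler(), WATCHED_DIR, recursive=True)
    observer.start()
    observer.join()  # block until the process is stopped
```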

Human Machine Interface(HMI) Application
++++++++++++++++++++++++++++++++++++++++
### Human Machine Interface (HMI) Application

A Human-Machine Interface (HMI) can vary depending on the use case or the creator.
While HMIs generally serve as an interface connecting users to machines, systems, or
an accurate summary to state that this sample application illustrates how the `ChatQnA Core`
RAG pipeline can be executed in a Type-2 Hypervisor setup, enabling applications such as an HMI
to benefit from it.

Supporting Resources
####################
## Supporting Resources

For more comprehensive guidance on getting started, see the
:doc:`Getting Started Guide <./get-started>`.

.. toctree::
:hidden:

system-requirements
get-started
release-notes
how-to-build-from-source
Source Code <https://github.com/open-edge-platform/edge-ai-suites/tree/release-2025.2.0/manufacturing-ai-suite/hmi-augmented-worker>
[Getting Started Guide](./get-started).

<!--hide_directive
:::{toctree}
:hidden:

system-requirements
get-started
release-notes
how-to-build-from-source
Source Code <https://github.com/open-edge-platform/edge-ai-suites/tree/release-2025.2.0/manufacturing-ai-suite/hmi-augmented-worker>
:::
hide_directive-->
Let's discuss how this architecture translates to data flow in the weld defect detection sample application.

### 1. **Weld Data Simulator**

The Weld Data Simulator uses the sets of time synchronized .avi and .csv files from the `edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-multimodal/weld-data-simulator/simulation-data/`, subset of test dataset coming from [Intel_Robotic_Welding_Multimodal_Dataset](https://huggingface.co/datasets/amr-lopezjos/Intel_Robotic_Welding_Multimodal_Dataset).
The Weld Data Simulator uses sets of time-synchronized .avi and .csv files from `edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-multimodal/weld-data-simulator/simulation-data/`, a subset of the test dataset from [Intel_Robotic_Welding_Multimodal_Dataset](https://huggingface.co/datasets/amr-lopezjos/Intel_Robotic_Welding_Multimodal_Dataset).
It ingests the .avi files as RTSP streams via the **mediamtx** server. This enables real-time video ingestion, simulating camera feeds for weld defect detection.
Similarly, it ingests the .csv files as data points into **Telegraf** using the **MQTT** protocol.
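
For the sensor path, the replay amounts to reading each .csv row and publishing it to the broker that Telegraf listens on. The sketch below is a simplified illustration; the broker address, topic name, and publish interval are assumptions rather than the simulator's actual configuration.

```python
# Simplified illustration of replaying weld sensor CSV rows over MQTT so that
# Telegraf can consume them. Broker address, topic, and timing are assumptions.
import csv
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "localhost"          # assumed MQTT broker address
TOPIC = "weld/sensor-data"         # hypothetical topic that Telegraf subscribes to


def replay(csv_path: str, interval_s: float = 0.1) -> None:
    client = mqtt.Client()         # paho-mqtt 1.x style; 2.x additionally takes a CallbackAPIVersion
    client.connect(BROKER_HOST, 1883)
    client.loop_start()
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            client.publish(TOPIC, json.dumps(row))  # one sensor data point per CSV row
            time.sleep(interval_s)                  # crude stand-in for the original sampling rate
    client.loop_stop()
    client.disconnect()


if __name__ == "__main__":
    replay("simulation-data/example_weld_run.csv")  # placeholder file name
```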

@@ -1,5 +1,13 @@
# Weld Anomaly Detection

<!--hide_directive
<div class="component_card_widget">
<a class="icon_github" href="https://github.com/open-edge-platform/edge-ai-suites/tree/release-2025.2.0/manufacturing-ai-suite/industrial-edge-insights-time-series/apps/weld-anomaly-detection">
GitHub project
</a>
</div>
hide_directive-->

This sample app demonstrates how AI-driven analytics enable edge devices to monitor weld quality.
It detects anomalous weld patterns and alerts operators for timely intervention,
ensuring proactive maintenance, safety, and operational efficiency. No more failures
detect the anomalous power generation data points relative to wind speed.

**Note**: CatBoost models do not run on Intel GPUs.


##### **`tick_scripts/`**

The TICKScript `weld_anomaly_detector.tick` determines how the incoming input data is processed.
By default, it is configured to publish the alerts to **MQTT**.

The `weld_anomaly_detector.cb` file is a model built using the CatBoostClassifier algorithm of the CatBoost ML
library.
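
For reference, a model in this .cb format can be produced roughly as follows; the feature columns, labels, and hyperparameters below are placeholders and not the ones used to train the bundled model.

```python
# Illustrative training of a CatBoost classifier exported in the .cb (CatBoost binary) format.
# Feature names, labels, and hyperparameters are placeholders, not the sample's actual schema.
import pandas as pd
from catboost import CatBoostClassifier

df = pd.read_csv("weld_training_data.csv")                  # hypothetical training set
features = df[["current", "voltage", "wire_feed_rate"]]     # placeholder feature columns
labels = df["anomaly"]                                      # placeholder label: 0 = normal weld, 1 = anomalous

model = CatBoostClassifier(iterations=300, depth=6, verbose=False)
model.fit(features, labels)
model.save_model("weld_anomaly_detector.cb", format="cbm")  # CatBoost binary model format
```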

@@ -1,5 +1,13 @@
# Wind Turbine Anomaly Detection

<!--hide_directive
<div class="component_card_widget">
<a class="icon_github" href="https://github.com/open-edge-platform/edge-ai-suites/tree/release-2025.2.0/manufacturing-ai-suite/industrial-edge-insights-time-series/apps/wind-turbine-anomaly-detection">
GitHub project
</a>
</div>
hide_directive-->

This sample app demonstrates a time series use case by detecting anomalous power generation
patterns in wind turbines, relative to wind speed. By identifying deviations, it helps
optimize maintenance schedules and prevent potential turbine failures, enhancing
If you want to start working with it instead, check out the
[Get Started Guide](../get-started.md) or [How-to Guides](../how-to-guides/index.md)
for Time-series applications.


## App Architecture

As seen in the following architecture diagram, the sample app at a high level comprises data simulators (which can also act as data destinations if configured) - in the real world these would be the physical devices - and the generic Time Series AI stack based on the **TICK Stack**, comprising Telegraf, InfluxDB, the Time Series Analytics microservice (using Kapacitor), and Grafana.
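
Conceptually, the analytics step reduces to comparing each measured power value against what a turbine power curve predicts for the observed wind speed and flagging large deviations. The sketch below illustrates that idea only; the deployed sample performs this inside the Time Series Analytics microservice with its own model and thresholds.

```python
# Conceptual sketch: flag power readings that deviate too far from the value an
# idealized power curve predicts for the measured wind speed. The curve, column
# names, and tolerance are illustrative assumptions, not the sample's model.
from typing import Dict, List


def expected_power_kw(wind_speed_ms: float) -> float:
    """Toy power curve: cut-in at 3 m/s, rated 2000 kW at and above 12 m/s."""
    if wind_speed_ms < 3.0:
        return 0.0
    if wind_speed_ms >= 12.0:
        return 2000.0
    return 2000.0 * ((wind_speed_ms - 3.0) / 9.0) ** 3


def flag_anomalies(points: List[Dict[str, float]], tolerance_kw: float = 300.0) -> List[Dict[str, float]]:
    """Return the data points whose measured power deviates too far from the curve."""
    return [
        p for p in points
        if abs(p["power_kw"] - expected_power_kw(p["wind_speed_ms"])) > tolerance_kw
    ]


if __name__ == "__main__":
    sample = [
        {"wind_speed_ms": 10.0, "power_kw": 120.0},  # far below the curve -> flagged
        {"wind_speed_ms": 5.0, "power_kw": 25.0},    # close to the curve -> not flagged
    ]
    print(flag_anomalies(sample))
```
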
@@ -1,19 +1,27 @@
Pallet Defect Detection
==============================
# Pallet Defect Detection

<!--hide_directive
<div class="component_card_widget">
<a class="icon_github" href="https://github.com/open-edge-platform/edge-ai-suites/tree/release-2025.2.0/manufacturing-ai-suite/industrial-edge-insights-vision/apps/pallet-defect-detection">
GitHub project
</a>
<a class="icon_document" href="https://github.com/open-edge-platform/edge-ai-suites/blob/release-2025.2.0/manufacturing-ai-suite/industrial-edge-insights-vision/apps/pallet-defect-detection/README.md">
Readme
</a>
</div>
hide_directive-->

Automated quality control with AI-driven vision systems.

Overview
########
## Overview

This Sample Application enables real-time pallet condition monitoring by running inference
workflows across multiple AI models. It connects multiple video streams from warehouse
cameras to AI-powered pipelines, all operating efficiently on a single industrial PC.
This solution enhances logistics efficiency and inventory management by detecting
defects before they impact operations.

How It Works
############
## How It Works

This sample application consists of the following microservices:
DL Streamer Pipeline Server, Model Registry Microservice (MRaaS), MediaMTX server,
be seen on Prometheus UI. Any desired AI model from the Model Registry Microservice
(which can interact with Postgres, Minio and Geti Server for getting the model) can be
pulled into DL Streamer Pipeline Server and used for inference in the sample application.
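
As an illustration of how a client kicks off inference, a pipeline instance could be started through the Pipeline Server REST API along the following lines; the host, port, pipeline name, and payload layout below are assumptions, and the API reference in this guide is authoritative.

```python
# Hypothetical request to start an AI pipeline instance on DL Streamer Pipeline Server.
# Host, port, pipeline name, and payload fields are assumptions; see the API
# reference for the actual contract.
import requests

BASE_URL = "http://localhost:8080"                           # assumed Pipeline Server address
PIPELINE = "user_defined_pipelines/pallet_defect_detection"  # assumed pipeline name/version

payload = {
    "source": {"uri": "rtsp://localhost:8554/warehouse-cam-1", "type": "uri"},
    "destination": {
        "metadata": {"type": "mqtt", "host": "localhost:1883", "topic": "pallet-defects"},
    },
}

response = requests.post(f"{BASE_URL}/pipelines/{PIPELINE}", json=payload, timeout=30)
response.raise_for_status()
print("Pipeline instance:", response.text.strip())
```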

.. figure:: ./images/industrial-edge-insights-vision-architecture.drawio.svg
:alt: Architecture and high-level representation of the flow of data through the architecture

Figure 1: Architecture diagram
![architecture and high-level representation of the flow of data through the architecture](./images/industrial-edge-insights-vision-architecture.drawio.svg)

This sample application is built with the following Intel Edge AI Stack Microservices:

- `DL Streamer Pipeline Server <https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/dlstreamer-pipeline-server/index.html>`__
- [DL Streamer Pipeline Server](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/dlstreamer-pipeline-server/index.html)
is an interoperable containerized microservice based on Python for video ingestion
and deep learning inferencing functions.
- `Model Registry Microservice <https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/model-registry/index.html>`__
- [Model Registry Microservice](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/model-registry/index.html)
provides a centralized repository that facilitates the management of AI models.

It also consists of the following third-party microservices:

- `Nginx <https://hub.docker.com/_/nginx>`__
- [Nginx](https://hub.docker.com/_/nginx)
is a high-performance web server and reverse proxy that provides TLS termination and unified HTTPS access.
- `MediaMTX Server <https://hub.docker.com/r/bluenviron/mediamtx>`__
- [MediaMTX Server](https://hub.docker.com/r/bluenviron/mediamtx)
is a real-time media server and media proxy that allows publishing WebRTC streams.
- `Coturn Server <https://hub.docker.com/r/coturn/coturn>`__
- [Coturn Server](https://hub.docker.com/r/coturn/coturn)
is a media traffic NAT traversal server and gateway.
- `Open telemetry Collector <https://hub.docker.com/r/otel/opentelemetry-collector-contrib>`__
- [OpenTelemetry Collector](https://hub.docker.com/r/otel/opentelemetry-collector-contrib)
is a set of receivers, exporters, processors, and connectors for OpenTelemetry.
- `Prometheus <https://hub.docker.com/r/prom/prometheus>`__
- [Prometheus](https://hub.docker.com/r/prom/prometheus)
is a systems and service monitoring system used for viewing OpenTelemetry data.
- `Postgres <https://hub.docker.com/_/postgres>`__
- [Postgres](https://hub.docker.com/_/postgres)
is an object-relational database system that provides reliability and data integrity.
- `Minio <https://hub.docker.com/r/minio/minio>`__
- [Minio](https://hub.docker.com/r/minio/minio)
is a high-performance object storage service that is API-compatible with the
Amazon S3 cloud storage service.

Features
########
## Features

This sample application offers the following features:

This sample application offers the following features:
- Interconnected warehouses deliver analytics for quick and informed tracking and
decision making.


.. toctree::
:hidden:

overview-architecture
system-requirements
get-started
troubleshooting-guide
how-to-change-input-video-source
how-to-deploy-using-helm-charts
how-to-deploy-with-edge-orchestrator
how-to-enable-mlops
how-to-manage-pipelines
how-to-run-multiple-ai-pipelines
how-to-scale-video-resolution
how-to-use-an-ai-model-and-video-file-of-your-own
how-to-use-opcua-publisher
how-to-run-store-frames-in-s3
how-to-view-telemetry-data
how-to-use-gpu-for-inference
how-to-start-mqtt-publisher
how-to-integrate-balluff-sdk
how-to-install-balluff-sdk-on-host
how-to-integrate-pylon-sdk
how-to-install-pylon-sdk-on-host.md
how-to-benchmark
api-reference
environment-variables

release_notes/Overview
<!--hide_directive
:::{toctree}
:hidden:

overview-architecture
system-requirements
get-started
troubleshooting-guide
how-to-change-input-video-source
how-to-deploy-using-helm-charts
how-to-deploy-with-edge-orchestrator
how-to-enable-mlops
how-to-manage-pipelines
how-to-run-multiple-ai-pipelines
how-to-scale-video-resolution
how-to-use-an-ai-model-and-video-file-of-your-own
how-to-use-opcua-publisher
how-to-run-store-frames-in-s3
how-to-view-telemetry-data
how-to-use-gpu-for-inference
how-to-start-mqtt-publisher
how-to-integrate-balluff-sdk
how-to-install-balluff-sdk-on-host
how-to-integrate-pylon-sdk
how-to-install-pylon-sdk-on-host.md
how-to-benchmark
api-reference
environment-variables
release_notes/Overview

:::
hide_directive-->